Image/Video Coding and Processing Techniques for Intelligent Sensor Nodes: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 9 October 2025 | Viewed by 4073

Special Issue Editors


Dr. Jinjia Zhou
Guest Editor
Graduate School of Science and Engineering, Hosei University, Tokyo 102-8160, Japan
Interests: image sensors; computer vision; image processing; video coding

Dr. Ittetsu Taniguchi
Guest Editor
Graduate School of Information Science and Technology, Osaka University, Osaka 565-0871, Japan
Interests: image/video processing for embedded systems; design methodology for embedded systems

Prof. Dr. Xin Jin
Guest Editor
Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
Interests: computational photography; image/video processing and coding

Special Issue Information

Dear Colleagues,

There is increasing interest in intelligent sensor nodes that perform onboard processing for Internet of Things (IoT) surveillance, remote sensing, and smart city applications. Data are processed on the node through embedded signal processing and machine learning-based analysis algorithms. Such machine learning-driven sensors can transmit key information instead of raw sensing data, thereby reducing the volume of data traveling through the network.

In recent years, the explosion of image and video data in IoT systems has created a preference for codecs designed specifically for sensor nodes. Such designs focus on reducing the data burden while improving reconstructed image quality, and on image/video coding and processing techniques that enable low-cost implementations, reduce power consumption, and extend battery lifetimes to meet the design requirements of sensor nodes. Moreover, intelligent sensors are making the jump from traditional intuition-driven designs to machine learning algorithms, delivering high-resolution images and videos for 5G applications.

In line with the mission of Sensors, the organizers of this Special Issue endeavor to demonstrate the most recent advancements in image/video coding and processing techniques for intelligent sensor nodes from both academic and industrial perspectives.

Dr. Jinjia Zhou
Dr. Ittetsu Taniguchi
Prof. Dr. Xin Jin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image/video coding
  • image sensing
  • image/video processing
  • wireless communication
  • wireless sensor network
  • computational imaging

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the journal website.

Published Papers (4 papers)


Research

22 pages, 4426 KiB  
Article
High-Radix Taylor-Optimized Tone Mapping Processor for Adaptive 4K HDR Video at 30 FPS
by Xianglong Wang, Zhiyong Lai, Lei Chen and Fengwei An
Sensors 2025, 25(13), 3887; https://doi.org/10.3390/s25133887 - 22 Jun 2025
Viewed by 194
Abstract
High Dynamic Range (HDR) imaging is capable of capturing vivid and lifelike visual effects, which are crucial for fields such as computer vision, photography, and medical imaging. However, real-time processing of HDR content remains challenging due to the computational complexity of tone mapping algorithms and the inherent limitations of Low Dynamic Range (LDR) capture systems. This paper presents an adaptive HDR tone mapping processor that achieves high computational efficiency and robust image quality under varying exposure conditions. By integrating an exposure-adaptive factor into a bilateral filtering framework, we dynamically optimize parameters to achieve consistent performance across fluctuating illumination conditions. Further, we introduce a high-radix Taylor expansion technique to accelerate floating-point logarithmic and exponential operations, significantly reducing resource overhead while maintaining precision. The proposed architecture, implemented on a Xilinx XCVU9P FPGA, operates at 250 MHz and processes 4K video at 30 frames per second (FPS), outperforming state-of-the-art designs in both throughput and hardware efficiency. Experimental results demonstrate superior image fidelity, with an average Tone Mapping Quality Index (TMQI) of 0.9314 and 43% fewer logic resources than existing solutions, enabling real-time HDR processing for high-resolution applications.
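The high-radix Taylor idea in this abstract can be illustrated in software: split the mantissa against a coarse lookup grid so that only a tiny residual is left for a short Taylor series. The sketch below is a hypothetical software analogue, not the paper's FPGA design; the radix width, term count, and function name are illustrative.

```python
import math

def fast_log2(x, radix_bits=4, terms=3):
    """Approximate log2(x): split the mantissa at a high radix (a
    2**radix_bits grid, a small lookup ROM in hardware) and apply a
    short Taylor series only to the remaining residual."""
    assert x > 0
    m, e = math.frexp(x)          # x = m * 2**e with m in [0.5, 1)
    m *= 2.0
    e -= 1                        # normalize so m is in [1, 2)
    idx = int((m - 1.0) * (1 << radix_bits))
    a = 1.0 + idx / float(1 << radix_bits)   # grid point (table entry)
    r = m / a - 1.0               # residual, |r| < 2**-radix_bits
    # Taylor series of ln(1 + r); few terms suffice because r is tiny
    ln1p = sum((-1) ** (n + 1) * r ** n / n for n in range(1, terms + 1))
    return e + math.log2(a) + ln1p / math.log(2.0)
```

With a 4-bit radix the residual stays below 1/16, so three Taylor terms already give errors around 1e-5; widening the radix trades table size for fewer multiply terms, which is the knob such a hardware design tunes.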

31 pages, 4311 KiB  
Article
DRGNet: Enhanced VVC Reconstructed Frames Using Dual-Path Residual Gating for High-Resolution Video
by Zezhen Gai, Tanni Das and Kiho Choi
Sensors 2025, 25(12), 3744; https://doi.org/10.3390/s25123744 - 15 Jun 2025
Viewed by 304
Abstract
In recent years, with the rapid development of the Internet and mobile devices, the high-resolution video industry has entered a booming golden era, making video content the primary driver of Internet traffic. This trend has spurred continuous innovation in efficient video coding technologies, such as Advanced Video Coding/H.264 (AVC), High Efficiency Video Coding/H.265 (HEVC), and Versatile Video Coding/H.266 (VVC), which significantly improve compression efficiency while maintaining high video quality. However, during the encoding process, compression artifacts and the loss of visual details remain unavoidable challenges, particularly in high-resolution video processing, where the massive amount of image data tends to introduce more artifacts and noise, ultimately affecting the user’s viewing experience. Therefore, effectively reducing artifacts, removing noise, and minimizing detail loss have become critical issues in enhancing video quality. To address these challenges, this paper proposes a post-processing method based on a Convolutional Neural Network (CNN) that improves the quality of VVC-reconstructed frames through deep feature extraction and fusion. The proposed method is built upon a high-resolution dual-path residual gating system, which integrates deep features from different convolutional layers and introduces convolutional blocks equipped with gating mechanisms. By combining gating operations with residual connections, the approach ensures smooth gradient flow while enhancing feature selection: it selectively preserves critical information while effectively removing artifacts, and the residual connections reinforce the retention of original details, achieving high-quality image restoration. Under the same bitrate conditions, the proposed method significantly improves the Peak Signal-to-Noise Ratio (PSNR), thereby optimizing video coding quality and providing users with a clearer and more detailed visual experience. Extensive experimental results demonstrate that the proposed method achieves outstanding performance across Random Access (RA), Low Delay B-frame (LDB), and All Intra (AI) configurations, achieving BD-Rate improvements of 6.1%, 7.36%, and 7.1% for the luma component, respectively, owing to the remarkable PSNR enhancement.
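The core of a gated residual convolutional block as described in this abstract can be sketched in plain NumPy. The single-channel setting, kernels, and function names below are hypothetical simplifications for illustration, not the paper's dual-path architecture.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 3x3 'same' convolution, for illustration."""
    h, w = x.shape
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def gated_residual_block(x, k_feat, k_gate):
    """out = x + features * sigmoid(gate): the sigmoid gate, with
    values in (0, 1), selects which extracted features pass through,
    while the residual connection keeps the original input intact."""
    feat = conv2d_same(x, k_feat)
    gate = 1.0 / (1.0 + np.exp(-conv2d_same(x, k_gate)))  # sigmoid
    return x + feat * gate
```

Because the block only ever adds a gated correction to x, the identity path is preserved and gradients flow unimpeded, which is the stated motivation for combining gating with residual connections.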

22 pages, 798 KiB  
Article
A Lightweight Double Compression Detector for HEIF Images Based on Encoding Information
by Yoshihisa Furushita, Marco Fontani, Stefano Bianchi, Alessandro Piva and Giovanni Ramponi
Sensors 2024, 24(16), 5103; https://doi.org/10.3390/s24165103 - 6 Aug 2024
Viewed by 1236
Abstract
Extensive research has been conducted in image forensics on the analysis of double-compressed images, particularly in the widely adopted JPEG format. However, there is a lack of methods to detect double compression in the HEIF format, which has recently gained popularity since it allows for reduced file size while maintaining image quality. Traditional JPEG-based techniques do not apply to HEIF due to its distinct encoding algorithms. We previously proposed a method to detect double compression in HEIF images based on Farid’s work on coding ghosts in JPEG images. However, this method was limited to scenarios where the quality parameter used for the first encoding was larger than that used for the second. In this study, we propose a lightweight image classifier that extends the existing model, enabling the identification of double-compressed images without heavily depending on the input image’s quantization history. The extended model outperforms the previous approach and, despite its lightness, demonstrates excellent detection accuracy.
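The coding-ghost principle this paper builds on can be shown with a toy scalar quantizer: requantizing at the step size used for the first compression introduces no extra error, so an error-versus-step curve dips at that step. This is a schematic model only; real HEIF/HEVC detection operates on transform coefficients and encoding metadata, and the names here are illustrative.

```python
import numpy as np

def quantize(block, q):
    """Uniform scalar quantization with step size q."""
    return np.round(block / q) * q

def ghost_curve(block, q_candidates):
    """Requantize at each candidate step and record the mean squared
    error. A dip at some step q suggests the data was previously
    quantized with that step -- the 'coding ghost'."""
    return [float(np.mean((block - quantize(block, q)) ** 2))
            for q in q_candidates]
```

For data already quantized with step 8, the curve is exactly zero at candidate 8 and positive elsewhere, provided divisors of 8 are excluded from the candidates (multiples of 8 are also multiples of their divisors).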

22 pages, 18896 KiB  
Article
Computer-Vision-Oriented Adaptive Sampling in Compressive Sensing
by Luyang Liu, Hiroki Nishikawa, Jinjia Zhou, Ittetsu Taniguchi and Takao Onoye
Sensors 2024, 24(13), 4348; https://doi.org/10.3390/s24134348 - 4 Jul 2024
Cited by 2 | Viewed by 1708
Abstract
Compressive sensing (CS) is recognized for its adeptness at compressing signals, making it a pivotal technology in the context of sensor data acquisition. With the proliferation of image data in Internet of Things (IoT) systems, CS is expected to reduce the transmission cost of signals captured by various sensor devices. However, the quality of CS-reconstructed signals inevitably degrades as the sampling rate decreases, which poses a challenge for inference accuracy in downstream computer vision (CV) tasks. This limitation is an obstacle to the real-world application of existing CS techniques, especially for reducing transmission costs in sensor-rich environments. In response, this paper contributes a CV-oriented adaptive CS framework based on saliency detection that enables sensor systems to intelligently prioritize and transmit the most relevant data. Unlike existing CS techniques, the proposal prioritizes the accuracy of reconstructed images for CV purposes, not only their visual quality. The primary objective is to enhance the preservation of information critical for CV tasks while optimizing the utilization of sensor data. Experiments are conducted on various realistic scenario datasets collected by real sensor devices. The results demonstrate superior performance compared to existing CS sampling techniques across the STL10, Intel, and Imagenette datasets for classification and KITTI for object detection. Compared with the baseline uniform sampling technique, the average classification accuracy shows a maximum improvement of 26.23%, 11.69%, and 18.25%, respectively, at specific sampling rates. In addition, even at very low sampling rates, the proposal proves robust in classification and detection compared to state-of-the-art CS techniques, ensuring that essential information for CV tasks is retained and improving the efficacy of sensor-based data acquisition systems.
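The saliency-driven allocation this abstract describes can be sketched as follows. The proportional-allocation rule, clipping bounds, and Gaussian measurement matrix are hypothetical stand-ins for the paper's actual framework.

```python
import numpy as np

def allocate_rates(saliency, total_rate, r_min=0.01, r_max=0.5):
    """Split a global sampling budget over image blocks in proportion
    to per-block saliency. Clipping keeps every block within bounds;
    the mean rate equals total_rate only when no clip binds."""
    s = saliency / saliency.sum()
    return np.clip(s * total_rate * s.size, r_min, r_max)

def sample_block(block, rate, rng):
    """Compressive measurement y = Phi @ x using a Gaussian matrix
    with m = round(rate * n) rows."""
    x = block.ravel()
    m = max(1, int(round(rate * x.size)))
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return phi @ x
```

Salient blocks thus receive more measurements and reconstruct more faithfully, which is what lets a downstream classifier or detector keep its accuracy at a low overall sampling rate.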
