

Special Issue "Sensors, Signal and Image Processing in Biomedicine and Assisted Living"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (31 March 2020).

Special Issue Editor

Prof. Dr. Dimitris Iakovidis
Website
Guest Editor
Department of Computer Science and Biomedical Informatics, University of Thessaly, Papasiopoulou 2-4, 35131 Lamia, Greece
Interests: signal/image processing and analysis; pattern recognition, data mining, and machine learning; software engineering; bio-inspired algorithms and fuzzy systems; decision support and cognitive systems; challenging applications including, but not limited to, clinical informatics and biomedical engineering

Special Issue Information

Dear Colleagues,

Sensor technologies are crucial in biomedicine, as the biomedical devices used for screening and/or diagnosis rely on their efficiency and effectiveness. Artificial intelligence has enabled further enhancement of the acquired sensor signals, such as noise reduction in one-dimensional electroencephalographic (EEG) signals or color correction in endoscopic images, as well as their analysis by computer-based medical systems, promising enhanced diagnostic yield and productivity for sustainable health systems. Furthermore, smart sensor systems incorporating advanced signal processing and analysis techniques are now entering our lives through smartphones and other wearable devices, monitoring our health status and helping us maintain a healthy lifestyle. The impact of such technologies can be even more significant for the elderly and for people with disabilities, such as the visually impaired.

In this context, this Special Issue welcomes original contributions focusing on novel sensor technologies and on signal, image, and video processing/analysis methodologies. It also welcomes review articles on challenging topics and emerging technologies.

This Special Issue is organized in the context of the project ENORASI (Intelligent Audiovisual System Enhancing Cultural Experience and Accessibility), co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH–CREATE–INNOVATE (project code: T1EDK-02070).

Prof. Dr. Dimitris Iakovidis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Registered authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords


  • Biomedical systems
  • Assistive systems
  • Multisensor systems
  • Biomedical sensors
  • Sensor networks
  • Internet of Things (IoT)
  • Machine learning
  • Decision making
  • Uncertainty-aware systems
  • Segmentation
  • Detection
  • Classification
  • Modeling and simulation
  • Video analysis
  • Multimodal signal fusion
  • Coding and compression
  • Summarization
  • Transmission
  • Quality enhancement
  • Quality assessment

Published Papers (13 papers)


Research

Open Access Article
Uncertainty-Aware Visual Perception System for Outdoor Navigation of the Visually Challenged
Sensors 2020, 20(8), 2385; https://doi.org/10.3390/s20082385 - 22 Apr 2020
Cited by 1
Abstract
Every day, visually challenged people (VCP) face mobility restrictions and accessibility limitations. A short walk to a nearby destination, which other individuals take for granted, becomes a challenge. To tackle this problem, we propose a novel visual perception system for outdoor navigation that can evolve into an everyday visual aid for VCP. The proposed methodology is integrated in a wearable visual perception system (VPS). The approach efficiently combines deep learning object recognition models with an obstacle detection methodology based on human eye-fixation prediction using Generative Adversarial Networks. Uncertainty-aware modeling of the obstacle risk assessment and spatial localization, following a fuzzy logic approach, is employed for robust obstacle detection. This combination translates the position and type of detected obstacles into descriptive linguistic expressions, allowing users to easily understand where obstacles lie in the environment and avoid them. The performance and capabilities of the proposed method are investigated in the context of safe navigation of VCP in outdoor environments of cultural interest through obstacle recognition and detection. Additionally, the proposed system is compared with relevant state-of-the-art systems for the safe navigation of VCP, focusing on design and user-requirement satisfaction.
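
As a rough illustration of how a fuzzy, uncertainty-aware model can turn obstacle position into descriptive linguistic expressions, the sketch below uses triangular membership functions over obstacle distance and bearing. All labels, thresholds, and the rule base are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch: fuzzy obstacle assessment -> linguistic description.
# Membership functions and labels are assumptions, not the paper's model.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def describe_obstacle(distance_m, bearing_deg):
    # Degree of membership in each linguistic distance term.
    dist_terms = {
        "very close": tri(distance_m, -1.0, 0.0, 2.0),
        "close":      tri(distance_m, 1.0, 3.0, 5.0),
        "far":        tri(distance_m, 4.0, 8.0, 12.0),
    }
    side_terms = {
        "left":  tri(bearing_deg, -90.0, -45.0, 0.0),
        "ahead": tri(bearing_deg, -30.0, 0.0, 30.0),
        "right": tri(bearing_deg, 0.0, 45.0, 90.0),
    }
    dist = max(dist_terms, key=dist_terms.get)
    side = max(side_terms, key=side_terms.get)
    # Toy rule base: risk grows as the obstacle gets closer.
    risk = {"very close": "high", "close": "moderate", "far": "low"}[dist]
    return f"{risk} risk: obstacle {dist}, {side}"

print(describe_obstacle(1.0, -10.0))  # -> "high risk: obstacle very close, ahead"
print(describe_obstacle(8.0, 50.0))   # -> "low risk: obstacle far, right"
```

The linguistic output is what makes such a system usable without a visual display: the user hears a short phrase rather than coordinates.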

Open Access Article
Contactless Real-Time Heartbeat Detection via 24 GHz Continuous-Wave Doppler Radar Using Artificial Neural Networks
Sensors 2020, 20(8), 2351; https://doi.org/10.3390/s20082351 - 21 Apr 2020
Abstract
The measurement of human vital signs is a highly important task in a variety of environments and applications. Most notably, the electrocardiogram (ECG) is a versatile signal that can indicate various physical and psychological conditions, from signs of life to complex mental states. ECG measurement relies on electrodes attached to the skin to acquire the electrical activity of the heart, which imposes certain limitations. Recently, advances in wireless technology have made it possible to pick up heart activity in a contactless manner. Among the possible ways to wirelessly obtain information related to heart activity, methods based on mm-wave radars have proved the most accurate in detecting the small mechanical oscillations of the human chest resulting from heartbeats. In this paper, we present a method based on a continuous-wave Doppler radar coupled with an artificial neural network (ANN) to detect heartbeats as individual events. To keep the method computationally simple, the ANN takes the raw radar signal as input, while the output is minimally processed, ensuring low-latency operation (<1 s). The performance of the proposed method was evaluated against an ECG reference ("ground truth") in an experiment involving 21 healthy volunteers, who sat on a cushioned seat and refrained from making excessive body movements. The results indicate that the presented approach is viable for the fast detection of individual heartbeats without heavy signal preprocessing.
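
The core framing, detecting each heartbeat as an individual event in a chest-motion signal, can be sketched as follows. The paper uses an ANN on the raw radar signal; here a plain baseline-removal-plus-threshold peak detector stands in for the network, on a synthetic displacement signal with assumed amplitudes and rates:

```python
# Illustrative sketch only: event-per-heartbeat detection on a synthetic
# chest-displacement signal (respiration + sharp heartbeat pulses).
# The paper's method replaces the detector below with a trained ANN.
import numpy as np

fs = 100.0                       # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)     # 10 s window

beat_times = np.arange(0.5, 10, 0.8)   # 12 beats at 75 bpm
heart = sum(np.exp(-((t - b) ** 2) / (2 * 0.02 ** 2)) for b in beat_times)
resp = 5.0 * np.sin(2 * np.pi * 0.25 * t)   # large, slow respiration
x = resp + heart

# Remove the slow respiration baseline with a moving-average estimate.
win = int(0.3 * fs)
baseline = np.convolve(x, np.ones(win) / win, mode="same")
h = x - baseline

# Mark a beat at every local maximum exceeding a fixed threshold.
peaks = [i for i in range(1, len(h) - 1)
         if h[i] > 0.3 and h[i] >= h[i - 1] and h[i] > h[i + 1]]
print(len(peaks))   # number of detected heartbeat events
```

On this clean synthetic signal the detector recovers all 12 beats; the point of the learned approach is to do the same on real, noisy radar returns.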

Open Access Article
Contactless Vital Signs Measurement System Using RGB-Thermal Image Sensors and Its Clinical Screening Test on Patients with Seasonal Influenza
Sensors 2020, 20(8), 2171; https://doi.org/10.3390/s20082171 - 13 Apr 2020
Abstract
Background: In the last two decades, infrared thermography (IRT) has been applied in quarantine stations for the screening of patients with suspected infectious disease. However, the fever-based screening procedure employing IRT suffers from low sensitivity, because monitoring body temperature alone is insufficient for detecting infected patients. To overcome the drawbacks of fever-based screening, this study aims to develop and evaluate a multiple vital sign (i.e., body temperature, heart rate, and respiration rate) measurement system using RGB-thermal image sensors. Methods: The RGB camera measures blood volume pulse (BVP) through variations in the light absorption from human facial areas. IRT is used to estimate the respiration rate by measuring the change in temperature near the nostrils or mouth accompanying respiration. To enable a stable and reliable system, the following image and signal processing methods were proposed and implemented: (1) an RGB-thermal image fusion approach to achieve highly reliable facial region-of-interest tracking; (2) a heart rate estimation method including a tapered window to reduce noise caused by the face tracker, reconstruction of the BVP signal from the three RGB channels by optimizing a linear function to improve the signal-to-noise ratio, and the multiple signal classification (MUSIC) algorithm for estimating the pseudo-spectrum from limited (15 s) time-domain BVP signals; and (3) a respiration rate estimation method that selects the nasal or oral breathing signal based on a signal quality index for stable measurement and applies the MUSIC algorithm for rapid measurement. We tested the system on 22 healthy subjects and 28 patients with seasonal influenza, using the support vector machine (SVM) classification method. Results: The body temperature, heart rate, and respiration rate measured in a non-contact manner were highly similar to those measured via contact-type reference devices (i.e., thermometer, ECG, and respiration belt), with Pearson correlation coefficients of 0.71, 0.87, and 0.87, respectively. Moreover, the optimized SVM model with three vital signs yielded sensitivity and specificity values of 85.7% and 90.1%, respectively. Conclusion: For contactless vital sign measurement, the system achieved a performance similar to that of the reference devices. The multiple vital sign-based screening achieved higher sensitivity than fever-based screening. Thus, this system represents a promising alternative for further quarantine procedures to prevent the spread of infectious diseases.

Open Access Article
Hyperspectral Imaging for the Detection of Glioblastoma Tumor Cells in H&E Slides Using Convolutional Neural Networks
Sensors 2020, 20(7), 1911; https://doi.org/10.3390/s20071911 - 30 Mar 2020
Abstract
Hyperspectral imaging (HSI) technology has demonstrated the potential to provide useful information about the chemical composition of tissue and its morphological features in a single image modality. Deep learning (DL) techniques have demonstrated the ability to automatically extract features from data for successful classification. In this study, we exploit HSI and DL for the automatic differentiation of glioblastoma (GB) and non-tumor tissue on hematoxylin and eosin (H&E) stained histological slides of human brain tissue. GB detection is a challenging application, given the high heterogeneity of cellular morphology across patients. We employed an HSI microscope with a spectral range from 400 to 1000 nm to collect 517 HS cubes from 13 GB patients at 20× magnification. Using a convolutional neural network (CNN), we were able to automatically detect GB within the pathological slides, achieving average sensitivity and specificity values of 88% and 77%, respectively, an improvement of 7% and 8%, respectively, over the results obtained using RGB (red, green, and blue) images. This study demonstrates that the combination of hyperspectral microscopic imaging and deep learning is a promising tool for future computational pathology.

Open Access Article
Sleep in the Natural Environment: A Pilot Study
Sensors 2020, 20(5), 1378; https://doi.org/10.3390/s20051378 - 03 Mar 2020
Abstract
Sleep quality has been directly linked to cognitive function, quality of life, and a variety of serious diseases across many clinical domains. Standard methods for assessing sleep involve overnight studies in hospital settings, which are uncomfortable, expensive, not representative of real sleep, and difficult to conduct on a large scale. Recently, numerous commercial digital devices have been developed that record physiological data, such as movement, heart rate, and respiratory rate, which can act as a proxy for sleep quality in lieu of standard electroencephalogram recording equipment. The sleep-related output metrics from these devices include sleep staging and total sleep duration and are derived via proprietary algorithms that utilize a variety of these physiological recordings. Each device company makes different claims of accuracy and measures different features of sleep quality, and it is still unknown how well these devices correlate with one another and perform in a research setting. In this pilot study of 21 participants, we investigated whether sleep metric outputs from self-reported sleep metrics (SRSMs) and four sensors, specifically Fitbit Surge (a smart watch), Withings Aura (a sensor pad placed under the mattress), Hexoskin (a smart shirt), and Oura Ring (a smart ring), were related to known cognitive and psychological metrics, including the n-back test and the Pittsburgh Sleep Quality Index (PSQI). We analyzed the correlations between multiple device-related sleep metrics, and investigated the relationships between these sleep metrics and cognitive scores across different timepoints and SRSMs through univariate linear regressions. We found that correlations for sleep metrics between the devices across the sleep cycle were almost uniformly low, but still significant (p < 0.05). For cognitive scores, Withings latency was statistically significant for the afternoon and evening timepoints, at p = 0.016 and p = 0.013, respectively. We did not find any significant associations between SRSMs and PSQI or cognitive scores. Additionally, Oura Ring's total sleep duration and efficiency in relation to the PSQI measure were statistically significant, at p = 0.004 and p = 0.033, respectively. These findings can hopefully be used to guide future sensor-based sleep research.

Open Access Article
The Rehapiano—Detecting, Measuring, and Analyzing Action Tremor Using Strain Gauges
Sensors 2020, 20(3), 663; https://doi.org/10.3390/s20030663 - 24 Jan 2020
Abstract
We have developed a device, the Rehapiano, for the fast and quantitative assessment of action tremor. It uses strain gauges to measure the force exerted by individual fingers. This article verifies the device's capability to measure and monitor the development of upper limb tremor. The Rehapiano uses a precision 24-bit analog-to-digital converter and an Arduino microcontroller to transfer raw data via a USB interface to a computer for processing, database storage, and evaluation. First, our experiments validated the device by measuring simulated tremors with known frequencies. Second, we created a measurement protocol, which we used to measure and compare healthy subjects and patients with Parkinson's disease. Finally, we evaluated the repeatability of the quantitative assessment. We verified our hypothesis that the Rehapiano is able to detect force changes, and our experimental results confirmed that our system is capable of measuring action tremor. The Rehapiano is also sensitive enough to enable the quantification of Parkinsonian tremors.
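
The validation step, recovering a simulated tremor of known frequency from a force trace, can be sketched with a plain FFT peak search. The sampling rate, tremor frequency, and amplitudes below are assumed for illustration:

```python
# Hypothetical sketch: dominant tremor frequency from a strain-gauge-like
# force signal, analogous to validating with simulated tremors of known rate.
import numpy as np

fs = 80.0                            # assumed ADC sampling rate, Hz
t = np.arange(0, 8, 1 / fs)          # 8 s recording
# Simulated finger force: constant press plus a 5 Hz tremor component.
force = 2.0 + 0.3 * np.sin(2 * np.pi * 5.0 * t)

# Dominant frequency of the detrended signal via the FFT magnitude spectrum.
spec = np.abs(np.fft.rfft(force - force.mean()))
freqs = np.fft.rfftfreq(force.size, 1 / fs)
f_peak = freqs[np.argmax(spec)]
print(f_peak)   # dominant tremor frequency, Hz
```

An 8 s window gives 0.125 Hz resolution, more than enough to separate the 4-6 Hz Parkinsonian tremor band from voluntary movement.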

Open Access Article
Adaptive Sampling of the Electrocardiogram Based on Generalized Perceptual Features
Sensors 2020, 20(2), 373; https://doi.org/10.3390/s20020373 - 09 Jan 2020
Abstract
A non-uniform distribution of diagnostic information in the electrocardiogram (ECG) is commonly accepted and underlies several compression, denoising, and watermarking methods. Gaze tracking is a widely recognized method for identifying an observer's preferences and areas of interest. The statistics of experts' scanpaths were found to be a convenient quantitative estimate of medical information density for each particular component (i.e., wave) of the ECG record. In this paper we propose the application of generalized perceptual features to control the adaptive sampling of a digital ECG. First, based on the temporal distribution of the information density, the local ECG bandwidth is estimated and projected to the actual positions of components in the heartbeat representation. Next, the local sampling frequency is calculated pointwise and the ECG is adaptively low-pass filtered in all simultaneous channels. Finally, sample values are interpolated at new time positions, forming a non-uniform time series. In the evaluation of perceptual sampling, an inverse transform was used for the reconstruction of a regularly sampled ECG, with a percent root-mean-square difference (PRD) error of 3–5% (for compression ratios of 3.0–4.7, respectively). Nevertheless, tests performed with the CSE Database show good reproducibility of ECG diagnostic features, within the IEC 60601-2-25:2015 requirements, thanks to the confinement of distortions to less relevant parts of the cardiac cycle.
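
The three-step pipeline above can be sketched minimally: assume a local bandwidth profile over one heartbeat (here a hand-picked step profile standing in for the gaze-derived one), derive a pointwise sampling interval from it, and interpolate at the resulting non-uniform time positions:

```python
# Minimal sketch of perceptually adaptive ECG sampling. The bandwidth
# profile is an illustrative assumption, not the paper's gaze-derived one.
import numpy as np

fs = 500.0
t = np.arange(0, 1.0, 1 / fs)            # one 1 s heartbeat, uniformly sampled
ecg = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.exp(-((t - 0.4) ** 2) / (2 * 0.01 ** 2))

# Assumed local bandwidth (Hz): high around the sharp QRS-like wave at 0.4 s.
bw = np.where(np.abs(t - 0.4) < 0.05, 100.0, 20.0)

# Local sampling interval 1/(2*bw); accumulate intervals into new time points.
new_t = [0.0]
while new_t[-1] < t[-1]:
    local_bw = bw[int(new_t[-1] * fs)]
    new_t.append(new_t[-1] + 1.0 / (2.0 * local_bw))
new_t = np.array(new_t[:-1])

# Interpolate sample values at the non-uniform positions.
new_ecg = np.interp(new_t, t, ecg)
print(len(new_t), len(t))   # far fewer samples than the uniform record
```

The non-uniform series is dense only where the local bandwidth demands it, which is exactly where diagnostic distortion would matter most.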

Open Access Article
Most Relevant Spectral Bands Identification for Brain Cancer Detection Using Hyperspectral Imaging
Sensors 2019, 19(24), 5481; https://doi.org/10.3390/s19245481 - 12 Dec 2019
Cited by 3
Abstract
Hyperspectral imaging (HSI) is a non-ionizing and non-contact imaging technique capable of obtaining more information than conventional RGB (red, green, blue) imaging. In the medical field, HSI has commonly been investigated due to its great potential for diagnostic and surgical guidance purposes. However, the large amount of information provided by HSI normally contains redundant or non-relevant information, and it is extremely important to identify the most relevant wavelengths for a certain application in order to improve the accuracy of the predictions and reduce the execution time of the classification algorithm. Additionally, some wavelengths can contain noise, and removing such bands can improve the classification stage. The work presented in this paper aims to identify such relevant spectral ranges in the visual-and-near-infrared (VNIR) region for the accurate detection of brain cancer using in vivo hyperspectral images. A methodology based on optimization algorithms has been proposed for this task, identifying the wavelengths that achieve the best accuracy in the classification results obtained by a supervised classifier (support vector machines) while employing the lowest possible number of spectral bands. The results demonstrate that the proposed methodology, based on genetic algorithm optimization, improves the accuracy of tumor identification by ~5%, using only 48 bands, with respect to the reference results obtained with 128 bands, offering the possibility of developing customized acquisition sensors that could provide real-time HS imaging. The most relevant spectral ranges found are 440.5–465.96 nm, 498.71–509.62 nm, 556.91–575.1 nm, 593.29–615.12 nm, 636.94–666.05 nm, 698.79–731.53 nm, and 884.32–902.51 nm.
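
The genetic-algorithm band-selection idea can be sketched on a toy problem: individuals are 0/1 masks over bands, and a synthetic fitness (standing in for the paper's SVM classification accuracy) rewards keeping a few "informative" bands while penalizing the total band count:

```python
# Toy sketch of GA-driven band selection. The fitness function is synthetic;
# the paper's fitness is SVM classification accuracy on hyperspectral data.
import random

random.seed(0)
N_BANDS = 16
INFORMATIVE = {2, 5, 11}            # assumed ground-truth useful bands

def fitness(mask):
    hits = sum(mask[i] for i in INFORMATIVE)
    return hits - 0.1 * sum(mask)   # reward hits, penalize band count

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(N_BANDS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_BANDS)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_BANDS)       # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print([i for i, bit in enumerate(best) if bit])   # selected band indices
```

The same loop scales to 128 bands with a real classifier in the fitness function; only the evaluation cost changes.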

Open Access Article
A New Approach for Motor Imagery Classification Based on Sorted Blind Source Separation, Continuous Wavelet Transform, and Convolutional Neural Network
Sensors 2019, 19(20), 4541; https://doi.org/10.3390/s19204541 - 18 Oct 2019
Cited by 6
Abstract
Brain-Computer Interfaces (BCI) are systems that allow the interaction of people and devices on the grounds of brain activity. The noninvasive and most viable way to obtain such information is by using electroencephalography (EEG). However, these signals have a low signal-to-noise ratio, as well as a low spatial resolution. This work proposes a new method built from the combination of Blind Source Separation (BSS) to obtain estimated independent components, a 2D representation of these component signals using the Continuous Wavelet Transform (CWT), and a classification stage using a Convolutional Neural Network (CNN). A criterion based on the spectral correlation with a Movement Related Independent Component (MRIC) is used to sort the sources estimated by BSS, thus reducing the spatial variance. The experimental classification accuracy of 94.66%, obtained using k-fold cross-validation, is competitive with recently reported state-of-the-art techniques.

Open Access Article
Target-Specific Action Classification for Automated Assessment of Human Motor Behavior from Video
Sensors 2019, 19(19), 4266; https://doi.org/10.3390/s19194266 - 01 Oct 2019
Cited by 1
Abstract
Objective monitoring and assessment of human motor behavior can improve the diagnosis and management of several medical conditions. Over the past decade, significant advances have been made in the use of wearable technology for continuously monitoring human motor behavior in free-living conditions. However, wearable technology remains ill-suited for applications which require monitoring and interpretation of complex motor behaviors (e.g., involving interactions with the environment). Recent advances in computer vision and deep learning have opened up new possibilities for extracting information from video recordings. In this paper, we present a hierarchical vision-based behavior phenotyping method for classification of basic human actions in video recordings performed using a single RGB camera. Our method addresses challenges associated with tracking multiple human actors and classifying actions in videos recorded in changing environments with different fields of view. We implement a cascaded pose tracker that uses temporal relationships between detections for short-term tracking and appearance-based tracklet fusion for long-term tracking. Furthermore, for action classification, we use pose evolution maps derived from the cascaded pose tracker as low-dimensional and interpretable representations of the movement sequences for training a convolutional neural network. The cascaded pose tracker achieves an average accuracy of 88% in tracking the target human actor in our video recordings, and the overall system achieves an average test accuracy of 84% for target-specific action classification in untrimmed video recordings.

Open Access Article
Continuous Distant Measurement of the User’s Heart Rate in Human-Computer Interaction Applications
Sensors 2019, 19(19), 4205; https://doi.org/10.3390/s19194205 - 27 Sep 2019
Abstract
In real-world scenarios, estimating heart rate (HR) using video plethysmography (VPG) methods is difficult because many factors can contaminate the pulse signal (e.g., the subject's movement, illumination changes). This article presents the evaluation of a VPG system designed for continuous monitoring of the user's heart rate during typical human-computer interaction scenarios. The impact of human activities while working at the computer (i.e., reading and writing text, playing a game) on the accuracy of HR VPG measurements was examined. Three commonly used signal extraction methods were evaluated: green (G), green-red difference (GRD), and blind source separation (ICA). A new method based on an excess-green (ExG) image representation was proposed. Three algorithms for estimating pulse rate were used: power spectral density (PSD), autoregressive modeling (AR), and time-domain analysis (TIME). In summary, depending on the scenario being studied, different combinations of the signal extraction methods and pulse estimation algorithms ensure optimal heart rate detection results. The best results were obtained with the ICA method: average RMSE = 6.1 bpm (beats per minute). The proposed ExG signal representation outperforms the other methods except ICA (RMSE = 11.2 bpm, compared to 14.4 bpm for G and 13.0 bpm for GRD). ExG is also the best method in terms of the proposed success rate metric (sRate).
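
The extraction-plus-estimation pipeline can be sketched end to end on synthetic per-frame RGB traces. Here ExG is taken as the standard excess-green combination 2G − R − B (the paper's exact formulation may differ), followed by PSD-based rate estimation; all signal parameters are assumptions:

```python
# Sketch of ExG signal extraction + PSD pulse estimation on synthetic
# frame-mean RGB traces. ExG = 2G - R - B is the standard excess-green
# index; amplitudes, noise, and frame rate below are assumptions.
import numpy as np

fps = 30.0
t = np.arange(0, 20, 1 / fps)            # 20 s of video frames
rng = np.random.default_rng(1)
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # 72 bpm BVP component
# Per-frame mean channel intensities: pulse strongest in G, plus noise.
R = 120 + 0.1 * pulse + rng.standard_normal(t.size)
G = 100 + 1.0 * pulse + rng.standard_normal(t.size)
B = 90 + 0.1 * pulse + rng.standard_normal(t.size)

exg = 2 * G - R - B                      # excess-green signal
exg = exg - exg.mean()

spec = np.abs(np.fft.rfft(exg)) ** 2     # power spectral density
freqs = np.fft.rfftfreq(exg.size, 1 / fps)
band = (freqs > 0.7) & (freqs < 3.0)     # plausible heart-rate range
hr_bpm = 60 * freqs[band][np.argmax(spec[band])]
print(round(hr_bpm))                     # estimated heart rate, bpm
```

Restricting the spectral search to a physiologically plausible band is what keeps slow illumination drift and frame noise from hijacking the estimate.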

Open Access Article
Improving Discrimination in Color Vision Deficiency by Image Re-Coloring
Sensors 2019, 19(10), 2250; https://doi.org/10.3390/s19102250 - 15 May 2019
Cited by 4
Abstract
People with color vision deficiency (CVD) cannot fully perceive the colorful world due to damage to the color-receptive nerves. In this work, we present an image enhancement approach to assist colorblind people in identifying the colors they are unable to distinguish naturally. An image re-coloring algorithm based on eigenvector processing is proposed for robust color separation under the color deficiency transformation. It is shown that the eigenvector of color vision deficiency is distorted by an angle in the λ, Y-B, R-G color space. The experimental results show that our approach is useful for the recognition and separation of CVD-confusing colors in natural scene images. Compared to existing techniques, our results on natural images with CVD simulation perform very well in terms of RMS, HDR-VDP-2, and an IRB-approved human test. Both the objective comparison with previous works and the subjective evaluation via human tests validate the effectiveness of the proposed method.

Open Access Article
Low-Complexity and Hardware-Friendly H.265/HEVC Encoder for Vehicular Ad-Hoc Networks
Sensors 2019, 19(8), 1927; https://doi.org/10.3390/s19081927 - 24 Apr 2019
Cited by 3
Abstract
Real-time video streaming over vehicular ad-hoc networks (VANETs) is a critical challenge for road safety applications. The purpose of this paper is to reduce the computational complexity of the high efficiency video coding (HEVC) encoder for VANETs. Based on a novel spatiotemporal neighborhood set, a coding tree unit depth decision algorithm is first presented, which controls the depth search range. Second, a Bayesian classifier is used for the prediction unit decision in inter-prediction, with the prior probability calculated using a Gibbs random field model. Simulation results show that the overall algorithm significantly reduces encoding time with a reasonably low loss in encoding efficiency. Compared to the HEVC reference software HM16.0, the encoding time is reduced by up to 63.96%, while the Bjontegaard delta bit-rate increases by only 0.76–0.80% on average. Moreover, the proposed HEVC encoder is low-complexity and hardware-friendly for video codecs residing on mobile vehicles in VANETs.
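
The depth-search-range idea can be illustrated with a hypothetical helper: restrict the coding tree unit (CTU) depth search to an interval derived from spatiotemporal neighbors. This is only the range-control half of the paper's decision; the Bayesian classifier step is omitted, and the neighbor rule below is an assumption:

```python
# Hypothetical sketch of CTU depth search-range restriction from
# spatiotemporal neighbors (the paper also applies a Bayesian classifier).
def depth_search_range(neighbor_depths, d_min=0, d_max=3):
    """Return the (lo, hi) CTU depth interval to search.

    neighbor_depths: depths (0-3) chosen for, e.g., the left, upper,
    upper-right, and temporally co-located CTUs; an empty list falls
    back to the full search range.
    """
    if not neighbor_depths:
        return d_min, d_max
    lo = max(d_min, min(neighbor_depths) - 1)   # allow one level below
    hi = min(d_max, max(neighbor_depths) + 1)   # and one level above
    return lo, hi

# A homogeneous neighborhood prunes the search to a narrow interval.
print(depth_search_range([0, 0, 1, 0]))   # -> (0, 2)
print(depth_search_range([3, 3, 3, 3]))   # -> (2, 3)
```

Skipping even one depth level per CTU avoids a full rate-distortion evaluation pass, which is where the encoding-time savings come from.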
