Special Issue "Advances in Perceptual Quality Assessment of User Generated Contents"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 20 February 2023 | Viewed by 2381

Special Issue Editors

Prof. Dr. Guangtao Zhai
Guest Editor
Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, Shanghai 200240, China
Interests: image processing; visual quality assessment; computer vision; human vision
Dr. Xiongkuo Min
Guest Editor
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Interests: image quality assessment; video quality assessment; quality of experience; saliency; multimedia signal processing
Dr. Menghan Hu
Guest Editor
Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, China
Interests: image quality signal processing; medical imaging; hyperspectral imaging; image processing; agricultural engineering
Dr. Wei Zhou
Guest Editor
Image & Vision Computing Lab (IVC), University of Waterloo, Waterloo, ON N2L 3G1, Canada
Interests: image and video processing; multimedia computing; computational vision

Special Issue Information

Dear Colleagues,

Owing to the rapid development of mobile devices and wireless networks in recent years, creating, watching, and sharing user-generated content (UGC) through applications such as social media has become a popular daily activity for the general public. UGC in these applications exhibits markedly different characteristics from conventional, professionally generated content (PGC). Unlike PGC, UGC is generally captured in the wild by ordinary people using diverse capture devices and may suffer from complex real-world distortions, such as overexposure, underexposure, and camera shake, which pose challenges for quality assessment. An effective quality assessment (QA) model for the perceptual quality of UGC can, on the one hand, help service providers recommend high-quality content to users and, on the other, guide the development of more effective content-processing algorithms.

Although subjective and objective quality assessment has been studied in this area for many years, most of this work has focused on professionally generated content without considering the specific characteristics of user-generated content. This Special Issue seeks original submissions on the latest technologies for the perceptual quality assessment of user-generated content, including, but not limited to, image/video/audio quality assessment databases and metrics for user-generated content, as well as the perceptual processing, compression, enhancement, and distribution of user-generated content. Submissions on related practical applications and model development for user-generated content are also welcome.

Prof. Dr. Guangtao Zhai
Dr. Xiongkuo Min
Dr. Menghan Hu
Dr. Wei Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • user-generated content
  • perceptual quality
  • image/video/audio quality assessment
  • image analysis and image processing
  • video/audio signal processing
  • cameras
  • user-generated content from sensing systems

Published Papers (3 papers)


Research

Article
Client-Oriented Blind Quality Metric for High Dynamic Range Stereoscopic Omnidirectional Vision Systems
Sensors 2022, 22(21), 8513; https://doi.org/10.3390/s22218513 - 04 Nov 2022
Viewed by 383
Abstract
A high dynamic range (HDR) stereoscopic omnidirectional vision system can provide users with more realistic binocular and immersive perception, but the HDR stereoscopic omnidirectional image (HSOI) suffers distortions during its encoding and visualization, making its quality evaluation more challenging. To solve this problem, this paper proposes a client-oriented blind HSOI quality metric based on visual perception. The proposed metric mainly consists of a monocular perception module (MPM) and a binocular perception module (BPM), which combine monocular/binocular, omnidirectional, and HDR/tone-mapping perception. The MPM extracts features from three aspects: global color distortion, symmetric/asymmetric distortion, and scene distortion. In the BPM, a binocular fusion map and a binocular difference map are generated by joint image filtering. Brightness segmentation is then performed on the binocular fusion image, and distinctive features are extracted from the segmented high/low/middle-brightness regions. For the binocular difference map, natural scene statistical features are extracted via multi-coefficient derivative maps. Finally, feature screening is used to remove redundancy among the extracted features. Experimental results on the HSOID database show that the proposed metric generally outperforms representative quality metrics and is more consistent with subjective perception.
(This article belongs to the Special Issue Advances in Perceptual Quality Assessment of User Generated Contents)
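The brightness-segmentation step described in the abstract can be illustrated with a minimal sketch. The thresholds and the synthetic "fusion" map below are illustrative assumptions, not values from the paper, which operates on the binocular fusion image produced by joint image filtering:

```python
import numpy as np

def segment_brightness(fusion, low_t=0.3, high_t=0.7):
    """Split a normalized [0, 1] luminance map into low/middle/high
    brightness regions via simple thresholding (illustrative only)."""
    low = fusion < low_t
    high = fusion > high_t
    mid = ~(low | high)
    return low, mid, high

# Toy example on a synthetic "fusion" map
fusion = np.linspace(0.0, 1.0, 9).reshape(3, 3)
low, mid, high = segment_brightness(fusion)
```

Region-wise feature extraction would then run separately on the pixels selected by each mask.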

Article
FDMLNet: A Frequency-Division and Multiscale Learning Network for Enhancing Low-Light Image
Sensors 2022, 22(21), 8244; https://doi.org/10.3390/s22218244 - 27 Oct 2022
Viewed by 394
Abstract
Low-illumination images exhibit low brightness, blurry details, and color casts, which give an unnatural visual experience and degrade the performance of downstream visual applications. Data-driven approaches show tremendous potential for brightening images while preserving their visual naturalness. However, these methods can introduce artifacts, noise amplification, over- or under-enhancement, and color deviation. To mitigate these issues, this paper presents a frequency-division and multiscale learning network named FDMLNet, comprising two subnets, DetNet and StruNet. The design first applies a guided filter to separate the high and low frequencies of authentic images; DetNet and StruNet are then developed to process them, respectively, to fully exploit their information at different frequencies. In StruNet, a feasible feature extraction module (FFEM), composed of a multiscale learning block (MSL) and a dual-branch channel attention mechanism (DCAM), is injected to promote its multiscale representation ability. In addition, three FFEMs are connected in a new dense connectivity that exploits multilevel features. Extensive quantitative and qualitative experiments on public benchmarks demonstrate that FDMLNet outperforms state-of-the-art approaches, benefiting from its stronger multiscale feature expression and extraction ability.
(This article belongs to the Special Issue Advances in Perceptual Quality Assessment of User Generated Contents)
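The frequency-division idea behind FDMLNet can be sketched with a simple edge-preserving smoother standing in for the guided filter. The box blur below is an assumption made for brevity (the paper uses a guided filter); the point is only that low frequency = smoothed image and high frequency = residual, so the two components reconstruct the input exactly:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box filter, a stand-in for the guided filter used by
    FDMLNet (assumption: any smoother works to illustrate the split)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def frequency_split(img, k=3):
    low = box_blur(img, k)   # structure (low-frequency) component
    high = img - low         # detail (high-frequency) residual
    return low, high

img = np.random.rand(8, 8)
low, high = frequency_split(img)
```

In the paper's design, StruNet would process `low` and DetNet would process `high`, and the enhanced components are recombined at the output.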

Article
Dynamic Heterogeneous User Generated Contents-Driven Relation Assessment via Graph Representation Learning
Sensors 2022, 22(4), 1402; https://doi.org/10.3390/s22041402 - 11 Feb 2022
Cited by 1 | Viewed by 838
Abstract
Cross-domain decision-making systems face a huge challenge from the rapidly emerging uneven quality of user-generated data, which places a heavy responsibility on online platforms. Current content analysis methods primarily concentrate on non-textual content, such as the images and videos themselves, while ignoring the interrelationships among the contents of user posts. In this paper, we propose a novel framework named community-aware dynamic heterogeneous graph embedding (CDHNE) for relationship assessment, capable of mining heterogeneous information, latent community structure, and dynamic characteristics from user-generated content (UGC), aiming to solve complex non-Euclidean structured problems. Specifically, we introduce a Markov-chain-based metapath to extract heterogeneous contents and semantics in UGC. An edge-centric attention mechanism is elaborated for localized feature aggregation. Thereafter, we obtain node representations from a micro perspective and apply them to the discovery of global structure via a clustering technique. To uncover temporal evolutionary patterns, we devise an encoder-decoder structure, containing multiple recurrent memory units, which helps to capture the dynamics for relation assessment efficiently and effectively. Extensive experiments on four real-world datasets demonstrate that CDHNE outperforms other baselines thanks to its comprehensive node representation, and exhibit the superiority of CDHNE in relation assessment. The proposed model is presented as a way of breaking down the barriers between traditional UGC analysis and abstract network analysis.
(This article belongs to the Special Issue Advances in Perceptual Quality Assessment of User Generated Contents)
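The metapath idea in the abstract can be illustrated with a toy metapath-constrained random walk on a heterogeneous graph. The node names, graph schema, and "user/post" metapath below are hypothetical examples, not the CDHNE paper's actual design:

```python
import random

# Toy heterogeneous graph: node -> list of (neighbor, neighbor_type).
# The schema (users linked to posts) is an illustrative assumption.
GRAPH = {
    "u1": [("p1", "post"), ("p2", "post")],
    "u2": [("p2", "post")],
    "p1": [("u1", "user")],
    "p2": [("u1", "user"), ("u2", "user")],
}

def metapath_walk(start, metapath, length, rng=random.Random(0)):
    """Random walk that only follows edges whose target type matches
    the metapath, cycling through it (e.g. user -> post -> user -> ...)."""
    walk = [start]
    for i in range(length - 1):
        want = metapath[(i + 1) % len(metapath)]
        options = [n for n, t in GRAPH[walk[-1]] if t == want]
        if not options:
            break
        walk.append(rng.choice(options))
    return walk

walk = metapath_walk("u1", ["user", "post"], 5)
```

Walks generated this way would then feed a node-embedding model; CDHNE additionally applies edge-centric attention, community discovery, and recurrent memory units on top of such heterogeneous context.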
