Special Issue "Advances in Perceptual Quality Assessment of User Generated Contents"
Deadline for manuscript submissions: 20 February 2023
Interests: image processing; visual quality assessment; computer vision; human vision
Interests: image quality assessment; video quality assessment; quality of experience; saliency; multimedia signal processing
Interests: image quality; signal processing; medical imaging; hyperspectral imaging; image processing; agricultural engineering
Due to the rapid development of mobile devices and wireless networks in recent years, creating, watching, and sharing user-generated content (UGC) through applications such as social media has become a popular daily activity for the general public. UGC in these applications exhibits markedly different characteristics from conventional, professionally generated content (PGC). Unlike PGC, UGC is generally captured in the wild by ordinary people using diverse capture devices, and it may suffer from complex real-world distortions, such as overexposure, underexposure, and camera shake, which pose challenges for quality assessment. An effective quality assessment (QA) model for evaluating the perceptual quality of UGC can, on the one hand, help service providers recommend high-quality content to users and, on the other hand, guide the development of more effective content processing algorithms.
Although subjective and objective quality assessment has been studied in this area for many years, most work has focused on professionally generated content without considering the specific characteristics of user-generated content. This Special Issue seeks original submissions on the latest techniques for the perceptual quality assessment of user-generated content, including, but not limited to, image/video/audio quality assessment databases and metrics for UGC, as well as the perceptual processing, compression, enhancement, and distribution of UGC. Submissions on related practical applications and model development for user-generated content are also welcome.
Prof. Dr. Guangtao Zhai
Dr. Xiongkuo Min
Dr. Menghan Hu
Dr. Wei Zhou
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant submission information is available on the Instructions for Authors page. Sensors is an international, peer-reviewed, open access, semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
- user-generated content
- perceptual quality
- image/video/audio quality assessment
- image analysis and image processing
- video/audio signal processing
- user-generated content from sensing systems