Evaluation of Wirelessly Transmitted Video Quality Using a Modular Fuzzy Logic System
Round 1
Reviewer 1 Report
The paper is interesting and some idea of the performance of the method can now be gained from the revised paper. However, you should still compare the method with the state-of-the-art. From the link* you can find many state-of-the-art image and video quality algorithms and publications. I also see that you should use more videos for testing the performance of your method.
You should also discuss how subjective video quality assessment should be done. In the current version, one frame (still image) was shown at a time to the observers and the observers gave still image quality ratings. Subjective video quality evaluation should be done by showing the test video in a video player. Observers give either a continuous evaluation score while watching the video or a single overall quality value (or quality attribute values) after the video ends.
*http://live.ece.utexas.edu/research/Quality/index_algorithms.htm
Author Response
Dear Reviewer
Thank you for so kindly supporting us throughout the review process of our paper. Your suggestions have resulted in significant improvements to the paper.
We believe that all suggested changes are now adequately integrated in the revised paper and we hope these changes meet your expectation. Our responses to your comments are indicated below.
Yours faithfully
Professor Reza Saatchi
Changes made:
All new text is highlighted in yellow.
We have added four new references: 28, 29, 30 and 31. The previous references 28, 29, 30 and 31 become 32, 33, 34 and 35 in the new text.
Comments 1:
The paper is interesting and some idea of the performance of the method can now be gained from the revised paper. However, you should still compare the method with the state-of-the-art. From the link* you can find many state-of-the-art image and video quality algorithms and publications. I also see that you should use more videos for testing the performance of your method.
An independent comparison of the video quality assessment method developed in our study against a state-of-the-art method based on spatial efficient entropic differencing was carried out, and consistent results were obtained. This method was chosen as it is the most recent of the state-of-the-art methods. The associated results are included in the paper.
Our study has made a significant contribution to the state of knowledge for evaluating the quality of wirelessly transmitted videos. Some points are highlighted below:
- A limitation of current objective QoE methods is that they typically rely on peak signal to noise ratio (PSNR), structural similarity index (SSIM) or video quality metric (VQM), which do not always provide a consistent assessment [1,9,26,29]. Our method integrates three video parameters, incorporating Image Distance (ID) alongside PSNR and SSIM (a brief illustrative sketch of this per-frame computation follows this list).
- When frames are lost during transmission, the order of frames sent and received no longer matches. The resulting mismatch in frame sequence numbers leads to inaccuracies when comparing the original and received frames to establish video quality [24,28]. This issue was dealt with in our study by inserting labels that allow the received and original frames to be compared correctly.
- In wireless computer networks, where interference and other contextual factors affect network services, QoS assessment on its own may be insufficient [27]. The performance evaluation of a lossy wireless network therefore needs to take into account not only the physical network characteristics (QoS) but also how these affect the end-user application (QoE). Integrating QoS and QoE, as implemented in our study, is thus valuable. We considered the network parameters delay, jitter and %PLR in addition to the video parameters PSNR, SSIM and ID.
- Sampling was performed, which reduced the processing time.
- A novel, modularly structured fuzzy logic based system to assess wirelessly transmitted video quality was successfully developed.
- The fuzzy logic results were compared with the MOS results obtained by enrolling human participants.
- The method reported in our paper works with any video type.
- The testing in our study was performed using the NetEm tool, emulation software that changes the network parameters delay, jitter and PLR.
- Inclusion of further videos would produce results comparable to those already in the paper. The paper already has extensive results and adding further results would not add to its value.
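For illustration only, a minimal sketch of this kind of per-frame computation is given below. It is not our actual implementation: the helper names, the dictionary-based frame representation and the simple mean-absolute-difference stand-in for ID are assumptions (Python with scikit-image assumed available).

```python
# Illustrative sketch only (not the implementation used in the paper):
# per-frame PSNR, SSIM and a simple stand-in for Image Distance (ID),
# with label-based frame alignment and systematic sampling.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_distance(ref, rec):
    # Assumed placeholder for ID: mean absolute pixel difference.
    return float(np.mean(np.abs(ref.astype(float) - rec.astype(float))))

def assess_frames(original_frames, received_frames, step=1):
    # original_frames: {label: grayscale ndarray}
    # received_frames: list of (label, grayscale ndarray) pairs recovered from
    # the inserted labels, so comparison does not rely on the arrival order.
    # step: systematic sampling interval in frames (e.g. one frame per second).
    results = []
    for label, rec in received_frames[::step]:
        ref = original_frames.get(label)
        if ref is None:          # frame lost during transmission
            continue
        results.append({
            "label": label,
            "psnr": peak_signal_noise_ratio(ref, rec, data_range=255),
            "ssim": structural_similarity(ref, rec, data_range=255),
            "id": image_distance(ref, rec),
        })
    return results
```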
Comments 2:
You should also discuss how subjective video quality assessment should be done. In the current version, one frame (still image) was shown at a time to the observers and the observers gave still image quality ratings. Subjective video quality evaluation should be done by showing the test video in a video player. Observers give either a continuous evaluation score while watching the video or a single overall quality value (or quality attribute values) after the video ends.
The duration of the video was 90 seconds, corresponding to 90 images. The distorted video was initially shown to each participant. As scoring the individual images while the video played was not practical, the images were shown individually using the Windows Photo Viewer tool, and once an image had been scored, the next image was displayed.
Reviewer 2 Report
No more comments.
Author Response
Dear Reviewer
We are very grateful for your support throughout the review of our paper. Your suggestions were very valuable and are very much appreciated.
Many thanks
Yours faithfully
Professor Reza Saatchi
Round 2
Reviewer 1 Report
One minor comment. You should calculate some performance values when comparing the proposed and the state-of-the-art metrics. See Table IV of your ref. 31.
Author Response
Dear Reviewer
We have followed your valued instructions and added a table, associated plots and a discussion on pages 15 and 16 of the paper. These extra changes are highlighted in yellow.
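Such performance values are typically the Pearson and Spearman correlations (and RMSE) between the subjective MOS and each metric's predictions. The sketch below is illustrative only, with hypothetical names and made-up numbers, and shows how these values can be computed.

```python
# Illustrative only: typical performance values for comparing a quality
# metric against subjective scores (MOS): Pearson linear correlation (PLCC),
# Spearman rank-order correlation (SROCC) and root mean square error (RMSE).
import numpy as np
from scipy.stats import pearsonr, spearmanr

def performance_values(mos, predicted):
    mos = np.asarray(mos, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    plcc, _ = pearsonr(mos, predicted)
    srocc, _ = spearmanr(mos, predicted)
    rmse = float(np.sqrt(np.mean((mos - predicted) ** 2)))
    return {"PLCC": plcc, "SROCC": srocc, "RMSE": rmse}

# Hypothetical example call (made-up numbers):
# performance_values([4.2, 3.1, 2.5, 1.8], [4.0, 3.3, 2.2, 2.0])
```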
We hope the addition will meet your expectation.
We are grateful for your valued and constructive reviewing of our paper; the paper is now more complete as a result of your kind suggestions.
Best regards
Professor R Saatchi
This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.
Round 1
Reviewer 1 Report
The paper devised a novel modular fuzzy logic based system to determine the quality of a video transmitted over computer networks. The system has broad applications for various videos. Novel aspects of the work relate to the manner in which the quality of service and quality of experience are quantified and combined together to indicate the quality of the received video. The study also devised a novel way of determining image quality by image partitioning and showed that it provided a significantly better indication of image quality than the intact (not partitioned) image. The devised methods have been carefully tested and evaluated on a real wireless network. They have demonstrated the efficacy of the devised approaches clearly and accurately. The developed techniques have broad applications in multimedia communication networks and will be of great interest to the readers of the Technologies journal. The related studies and methodology used are explained clearly. The results are carefully presented and explained. The work presented in the paper is novel and has merit for publication in a high impact journal. The authors have a good track record of publishing in this field; some of their papers are referenced (references 2, 18, 19, 20). Reference 20 was published in the Technologies journal.
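For readers unfamiliar with partition-based scoring, a minimal sketch is given below. It is an illustration under assumed details, not the paper's exact formulation: each frame is split into four quadrants, each quadrant is scored separately, and the quadrant scores are aggregated.

```python
# Illustrative sketch of partition-based image quality (assumed details, not
# the paper's exact method): score each of the four quadrants separately
# and aggregate, instead of scoring the intact image once.
import numpy as np
from skimage.metrics import structural_similarity

def partitioned_ssim(ref, rec):
    h, w = ref.shape[:2]
    scores = []
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            scores.append(structural_similarity(ref[rows, cols],
                                                rec[rows, cols],
                                                data_range=255))
    return float(np.mean(scores))  # aggregate of the four partition scores
```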
Correction: In the abstract, the sentence
Current Sentence: “In this study modular fuzzy logic based systems were developed to quantify the quality of video transmission over a wireless computer network.”
Should be changed to
In this study a modular fuzzy logic based system was developed to quantify the quality of video transmission over a wireless computer network.
Author Response
Dear Reviewer
Thank you for so kindly reviewing our paper and providing us with very valuable constructive comments and suggestions.
We have very carefully revised the paper, taking on board your comments and the comments of the other 3 reviewers.
The paper is now significantly improved.
The details of the revisions made are included in the attached document (Reviewer.doc).
We hope the amendments meet your expectation.
Yours faithfully
Professor Reza Saatchi
Sheffield Hallam University, United Kingdom.
Author Response File: Author Response.pdf
Reviewer 2 Report
Fuzzy logic is an interesting approach, but you should prove the performance of the method. From the manuscript it was difficult to understand the novelty of the proposed method and how it is better than the state-of-the-art. My opinion is that journal publication requires a novel method with higher performance than the state-of-the-art.
The performance should be measured using many test videos with different contents and subjective evaluations (e.g. 30 observers). That is, you should calculate, e.g., correlation values between the subjective and objective evaluations. Then you can compare the correlation values of the proposed metric and the state-of-the-art. If the values of the proposed method are statistically higher than those of the state-of-the-art, you can publish your results. From the link* you can find many state-of-the-art image and video quality algorithms and publications.
*http://live.ece.utexas.edu/research/Quality/index_algorithms.htm
Author Response
Dear Reviewer
Thank you for so kindly reviewing our paper and providing us with very valuable constructive comments and suggestions.
We have very carefully revised the paper, taking on board your comments and the comments of the other 3 reviewers.
The paper is now significantly improved.
The details of the revisions made are included in the attached document (Reviewer.doc).
We hope the amendments meet your expectation.
Yours faithfully
Professor Reza Saatchi
Sheffield Hallam University, United Kingdom.
Author Response File: Author Response.pdf
Reviewer 3 Report
The essential flaw of the paper is the missing evaluation of the introduced fuzzy logic system. Without such an evaluation it is impossible to assess its usefulness. The authors should at least state the advantages of their video evaluation approach compared to other schemes (there is so much existing work on QoE evaluation, and even specialized workshops such as QoMEX). Ideally, compare the results to other QoE metrics (e.g. VMAF, which also combines SSIM and PSNR). Or perform a subjective user study, to assess whether the outputs of the rules match human perception.
Some minor suggestions include:
- The authors introduced PSNR, SSIM quite extensively. I believe interested readers would already know them. Hence, this introduction is redundant.
- instead of visual frame numbering, the authors could get the frame number information directly from the RTP metadata
- sampling: fixed-interval sampling of one second is oversimplified. In the literature, many QoE-based frame extraction processes (based on the amount of motion, etc.) have been introduced that take into account the importance of a frame. Such advanced techniques could be used to select the frames that are used as input to the fuzzy system.
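As a rough illustration of this suggestion (assumed details, not taken from the manuscript; OpenCV is assumed available), motion-aware frame selection could look like the following sketch:

```python
# Illustrative sketch of motion-based frame selection (not the authors'
# method): keep a frame only when its mean absolute difference from the
# previously selected frame exceeds a threshold, favouring high-motion frames.
import cv2
import numpy as np

def select_frames(video_path, motion_threshold=12.0):
    cap = cv2.VideoCapture(video_path)
    selected, previous = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if previous is None or np.mean(cv2.absdiff(gray, previous)) > motion_threshold:
            selected.append(frame)   # candidate input for the fuzzy system
            previous = gray
    cap.release()
    return selected
```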
Author Response
Dear Reviewer
Thank you for so kindly reviewing our paper and providing us with very valuable constructive comments and suggestions.
We have very carefully revised the paper, taking on board your comments and the comments of the other 3 reviewers.
The paper is now significantly improved.
The details of the revisions made are included in the attached document (Reviewer.doc).
We hope the amendments meet your expectation.
Yours faithfully
Professor Reza Saatchi
Sheffield Hallam University, United Kingdom.
Author Response File: Author Response.pdf
Reviewer 4 Report
The authors approach an interesting and open problem to evaluate the video quality in a wireless environment.
Nevertheless, the content must be improved in order to enrich the contribution and the proposal description, for instance:
Comment 1: It is relevant to quantify the video quality in a wireless environment, but it is important to mention that each application that sends multimedia data has different QoS multimedia requirements. So, how can the proposed solution be adapted? Even though the authors assume that the multimedia factors follow the ITU recommendations for multimedia transmission, these parameters should be configured according to specific needs.
Comment 2: An analysis of several scenarios should be considered to determine the behavior of the fuzzy logic system. The example shown implies that the QoS parameters have been modified in order to show the degradation of the network conditions over a time interval, but how sensitive is the proposed fuzzy model to the QoE conditions? This question arises from the analysis comparing the full image with the partitioned image: why only four partitions? Is it possible to establish a correlation for several requirements?
Comment 3: Can the interval used for the systematic sampling be limited to this range? If so, why? Or, what are the conditions that must be considered to determine the systematic sampling?
Comment 4: Table 3 must be reviewed because the fifth column should be eliminated.
Author Response
Dear Reviewer
Thank you for so kindly reviewing our paper and providing us with very valuable constructive comments and suggestions.
We have very carefully revised the paper, taking on board your comments and the comments of the other 3 reviewers.
The paper is now significantly improved.
The details of the revisions made are included in the attached document (Reviewer.doc).
We hope the amendments meet your expectation.
Yours faithfully
Professor Reza Saatchi
Sheffield Hallam University, United Kingdom.
Author Response File: Author Response.pdf
Round 2
Reviewer 2 Report
I can't recommend publication before a comprehensive performance test. As I recommended in my earlier comments:
"The performance should be measured using many test videos with different contents and subjective evaluations (e.g. 30 observers). That is, you should calculate, e.g., correlation values between the subjective and objective evaluations. Then you can compare the correlation values of the proposed metric and the state-of-the-art. If the values of the proposed method are statistically higher than those of the state-of-the-art, you can publish your results. From the link* you can find many state-of-the-art image and video quality algorithms and publications.
*http://live.ece.utexas.edu/research/Quality/index_algorithms.htm
"
Reviewer 3 Report
The paper still lacks an evaluation of the introduced system. As I pointed out in the previous review, there is existing state-of-the-art work (other evaluation metrics/systems). What are the benefits of the presented metrics compared to the SotA, and how well does the introduced system align with human perception? These questions can only be answered by performing additional evaluation studies (objective and subjective).