# Objective Video Quality Assessment Based on Machine Learning for Underwater Scientific Applications


## Abstract


## 1. Introduction

## 2. Related Work

## 3. Subjective Dataset

The std_{space} operator computes the standard deviation of luminance values within a single pixel matrix. The max_{time} operator selects the maximum value of the argument (the spatial standard deviation of a pixel matrix in both cases) over the set of all processed video frames in the clip. The features of the video clips selected for this paper are shown in Table 1. Other video features, kept constant for all the clips, were H.264 compression format, RGB (24 bits) color and QVGA (320 × 240) resolution. Clips are grouped for comparison purposes in two blocks according to their content variation: a high variation content (HVC) block and a low variation content (LVC) block.
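The SI and TI features follow the style of ITU-T P.910. As a minimal NumPy/SciPy sketch (not the authors' implementation; the frame format, a list of 2-D luminance arrays, is an assumption):

```python
import numpy as np
from scipy import ndimage

def si_ti(frames):
    """Compute SI and TI of a clip (ITU-T P.910 style).

    frames: iterable of 2-D numpy arrays holding luminance values.
    SI = max over time of the spatial std. dev. of the Sobel-filtered frame;
    TI = max over time of the spatial std. dev. of the frame difference.
    """
    si, ti, prev = 0.0, 0.0, None
    for f in frames:
        f = f.astype(float)
        # Sobel gradient magnitude as the spatial-activity filter.
        sobel = np.hypot(ndimage.sobel(f, axis=0), ndimage.sobel(f, axis=1))
        si = max(si, sobel.std())
        if prev is not None:
            ti = max(ti, (f - prev).std())
        prev = f
    return si, ti

# Tiny synthetic demo: four random "frames".
rng = np.random.default_rng(0)
frames = [rng.random((16, 16)) for _ in range(4)]
si, ti = si_ti(frames)
```

A single-frame clip yields TI = 0, since no temporal difference exists.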

## 4. G.1070 Model Suitability Study

I_{Ofr} is the maximum video quality for a given bitrate and D_{Fr} is the degree of robustness due to the frame rate. This model does not take content variation into account, and therefore the SI, TI information must be discarded.

A least squares approximation (LSA) is used to obtain the intermediate parameters I_{Ofr} and D_{Fr}, based on frame rate values. However, the LSA for our subjective data cannot be solved in the real domain, as shown in Table 2. For the high variation content, the imaginary part of the intermediate parameters can be considered negligible, since it is nine orders of magnitude smaller than the real part, and thus the coefficients can be calculated with another LSA approximation. Table 3 contains these results along with the standard goodness-of-fit (GOF) measures: the sum of squares due to error (SSE), the R-squared (R^{2}) and the root mean squared error (RMSE). All of them indicate a very poor fit, with a negative R^{2} showing that even a simple linear regression (a plane) would be more appropriate for the data. The poor performance of this model can be attributed to the fact that it was designed for a very specific application (video telephony) which differs greatly from underwater video services in several important aspects, such as video content and features, purpose of the video service and user expectations. These differences can considerably change the user's perception of quality.
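For reference, the frame-rate term that these intermediate parameters feed into can be sketched as below. The Gaussian-in-log-frame-rate shape follows the commonly cited form of ITU-T G.1070; the surrounding coding and transmission terms of the Recommendation are omitted, and the parameter values passed in are illustrative:

```python
import math

def g1070_frame_rate_quality(fr, i_ofr, o_fr, d_fr):
    """Sketch of the G.1070 frame-rate quality term.

    Quality peaks at the optimal frame rate o_fr with maximum amplitude
    i_ofr; d_fr controls the robustness (width) of the fall-off in the
    logarithm of the frame rate.
    """
    return 1.0 + i_ofr * math.exp(
        -((math.log(fr) - math.log(o_fr)) ** 2) / (2.0 * d_fr ** 2)
    )

# At the optimal frame rate the term reaches its maximum 1 + i_ofr.
peak = g1070_frame_rate_quality(5.0, 2.0, 5.0, 1.0)
```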

## 5. NR Parametric Model (Surface Fitting Non-Linear Regression)

#### 5.1. Deriving the Model

The thin plate spline interpolation provides an exact fit (R^{2} = 1) for the given control points. The minimization constraint produces a smooth, minimally bent surface, which matches the assumption of no great variations in quality values between the studied input variable values. The surface can be defined as in (8), a weighted sum of the radial basis function in (9), where x^{(i)} are the control points, K is the number of points and a_{i}, w_{i} are the optimization parameters. In this case, the control points are the samples from our subjective dataset, considering the bitrate as our first dimension or feature (x_{1}) and the framerate as the second feature (x_{2}). The resulting MOS for a given sample is y^{(i)}. This interpolation technique produces a representative surface but not the practical model we aim for, since the complexity of the resulting equation makes the coefficients difficult to interpret. Figure 1 shows three surface plots of thin plate splines fitting the subjective dataset. Figure 1a is obtained using the points in the high variation content (HVC) block as control points, while the points in the low variation content (LVC) and reduced low variation content (rLVC) blocks produce Figure 1b,c, respectively. The shapes of the HVC and rLVC surfaces are very similar. Even the LVC surface could be regarded as reasonably similar, except for the bending forced by the anomalies already mentioned in Section 3. Our model proposal in Section 5.2 is motivated by the resemblance between this geometrical profile and a sigmoid function.
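A minimal sketch of such a thin plate spline fit with SciPy; the control points below are hypothetical stand-ins for the subjective samples, not the actual dataset:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical control points: (bitrate in kbps, framerate in fps) -> MOS.
x = np.array([[8, 1], [8, 10], [14, 5], [20, 1], [20, 10]], dtype=float)
y = np.array([1.5, 2.0, 3.1, 3.0, 4.2])

# With zero smoothing the thin plate spline interpolates the control
# points exactly (R^2 = 1), producing a minimally bent surface.
tps = RBFInterpolator(x, y, kernel="thin_plate_spline", smoothing=0.0)

# Evaluate the surface on a (bitrate, framerate) grid for plotting.
grid = np.array([[br, fr] for br in (8, 14, 20) for fr in (1, 5, 10)], float)
mos_surface = tps(grid)
```

Setting `smoothing > 0` would trade the exact fit for a smoother surface.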

#### 5.2. Model Equations and Discussion

- Generalization (NLR.G): Equation (11) achieves a more consistent behavior of the model outside the range of the subjective dataset. The asymptotes of the surface are set to the limits of the quality scoring scale (1–5).
- Accuracy (NLR.A): Equation (12) achieves a better fit for the points in the subjective dataset (higher R^{2}):

$$f\left(x\right)=L+\frac{U-L}{{\left(A+B{e}^{-Cx}\right)}^{1/\nu}},$$

$$f\left(\mathbf{x}\right)=1+\frac{4{A}^{1/\nu}}{{\left(A+B{e}^{-\left({c}_{0}+{c}_{1}{x}_{1}+{c}_{2}{x}_{2}\right)}\right)}^{1/\nu}},$$

$$f\left(\mathbf{x}\right)=L+\frac{K}{{\left(A+B{e}^{-\left({c}_{0}+{c}_{1}{x}_{1}+{c}_{2}{x}_{2}\right)}\right)}^{1/\nu}}.$$

The coefficients c_{i} and ν are optimized with the non-linear least squares method applied to our subjective dataset. The coefficients computed for every block can be found in Table 4 for the NLR.G model and in Table 5 for the NLR.A model. The corresponding goodness-of-fit statistics (SSE, R^{2}, RMSE) are shown in Table 6 and Table 7. These values are used as a performance metric of the model. Figure 2 contains plots for each model surface: NLR.G surfaces are in the left column, while the right column shows the NLR.A surfaces.

For the high variation content videos, the fit is good for the NLR.G model (R^{2} ≈ 0.88) and excellent for the NLR.A model (R^{2} ≈ 0.98). The performance of the model is poor (R^{2} ≤ 0.6) for the low variation content videos because the model cannot fit the non-sigmoid shape of the cloud of points. However, the performance rises dramatically to the levels of the high variation content for the reduced low variation content dataset, with an excellent performance of both NLR.G (R^{2} ≈ 0.91) and NLR.A (R^{2} ≈ 0.94). The RMSE value is also considerably low for both blocks (RMSE ≤ 0.5), taking into account that it is averaged not over the total number of samples but over the difference between the number of samples and the number of parameters in the model.
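The non-linear least squares fit of the NLR.A surface can be sketched with `scipy.optimize.curve_fit`. The MOS values below are synthetic, generated from assumed parameters for demonstration; they are not the subjective dataset:

```python
import numpy as np
from scipy.optimize import curve_fit

def nlr_a(X, L, K, A, B, c0, c1, c2, v):
    """NLR.A model: generalized logistic surface over (bitrate, framerate)."""
    x1, x2 = X
    return L + K / (A + B * np.exp(-(c0 + c1 * x1 + c2 * x2))) ** (1.0 / v)

# 3x3 design of (bitrate kbps, framerate fps) points, as in the dataset.
br = np.tile([8.0, 14.0, 20.0], 3)
fr = np.repeat([1.0, 5.0, 10.0], 3)

# Synthetic MOS generated from assumed parameters (demonstration only).
true = (1.0, 4.0, 1.5, 2.5, -2.0, 0.6, -0.9, 1.0)
mos = nlr_a((br, fr), *true)

popt, _ = curve_fit(nlr_a, (br, fr), mos, p0=true, maxfev=20000)
pred = nlr_a((br, fr), *popt)
sse = float(np.sum((mos - pred) ** 2))
r2 = 1.0 - sse / float(np.sum((mos - mos.mean()) ** 2))
```

In practice the SSE, R^{2} and RMSE computed this way are the goodness-of-fit figures reported per block.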

## 6. RR Hybrid Model (Ordinal Logistic Regression)

The model predictors are the bitrate (x_{1}), the framerate (x_{2}), the SI (x_{3}) and the TI (x_{4}). We call π_{i}(x) the probability of observation x being in the i-th category. For k categories of the outcome variable, the method computes the k−1 logarithms of the odds ratios, or logits, i.e., the logarithms of the probability of being in a given category or any category below (γ_{j}) divided by the probability of being in any superior category.

The proportional odds model fits a different intercept θ_{j} for every logit but the same coefficients β for all the predictors (13). The π_{i}(x) probabilities are obtained from the model as in (14). An estimator for the MOS is proposed in (15). Even though our goal was to depart from the simplistic MOS approach to QoE assessment, the MOS estimator is still useful for comparison with other models:

- Compute a model including every possible interaction except the ones that have been discarded in a previous iteration.
- Check the p-value of the coefficient for every interaction term of i-th order (or main effect if i = 1). If p > 0.05, the interaction is considered non-significant and thus removed from subsequent iterations.
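The proportional-odds computations in (13)–(15) can be sketched as follows. This assumes the cumulative logits take the form θ_{j} − βᵀx (the sign convention used by SPSS); the linear-predictor value η = 10.0 is illustrative:

```python
import numpy as np

def olr_probs(theta, eta):
    """Category probabilities pi_i from a proportional-odds model.

    theta: increasing intercepts theta_1..theta_{k-1}, one per logit;
    eta:   linear predictor beta^T x for one observation.
    Cumulative probabilities: P(Y <= j) = sigmoid(theta_j - eta).
    """
    gamma = 1.0 / (1.0 + np.exp(-(np.asarray(theta, float) - eta)))
    cum = np.concatenate(([0.0], gamma, [1.0]))
    return np.diff(cum)  # pi_i = P(Y = i)

def mos_estimate(theta, eta):
    """MOS estimator: expected category under the fitted distribution."""
    pi = olr_probs(theta, eta)
    return float(np.dot(np.arange(1, len(pi) + 1), pi))

# Thresholds theta_1..theta_4 of the fitted model; eta is illustrative.
theta = [6.839, 8.891, 11.066, 13.097]
pi = olr_probs(theta, 10.0)
mos = mos_estimate(theta, 10.0)
```

The category with the largest π_{i} can also serve directly as a classification decision.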

The validity of the model is assessed with two χ^{2} tests. The “model fitting test” is a likelihood ratio χ^{2} test between a model with only the intercept term and the final model. The p-value for our model is significant (p = 0.005) and indicates that the final model fits the dataset better than a model with constant odds based on the marginal probabilities of each outcome category. The “parallel lines test” draws an analogy between the final model and a multinomial model where no natural ordering is considered between categories and therefore different β coefficients are obtained for every logit estimator. The p-value is non-significant (p = 1.00) and thus there is no evidence to reject the assumption of proportional odds. Several R^{2} values are provided in Table 10.

A classic R^{2} value can be computed between the MOS_{OLR} estimator and the subjective MOS values in the dataset, resulting in 90% of the variance being explained by our estimator. This result is similar to the R^{2} obtained with the NR models. Pseudo-R^{2} values are also included in our results: Cox and Snell [37], Nagelkerke [38] and McFadden [39]. These pseudo-R^{2} values, as discussed in [40], cannot be interpreted like a classic R^{2} in a least squares regression, since they do not compare the predicted values with those in the dataset but the fitted model with the intercept-only model described above. However, they serve to compare different models.

To provide a graphical view of the goodness of fit, Figure 3 plots the category probability distribution P_{i} for every observation in our dataset, as estimated by the OLR model, against the proportions of scores π_{i} computed from the subjective data. The model provides a very good fit in most cases, with excellent performance for some of the observations (IDs 05 and 15) and only a small number of larger errors (category 3 in IDs 01 and 03, and category 2 in ID 09). In particular, 71.1% of the π_{i} estimations show a deviation smaller than 0.1.
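The pseudo-R^{2} statistics are derived from the −2 log-likelihoods of the intercept-only and final models. A sketch using the standard Cox and Snell, Nagelkerke and McFadden definitions follows; the −2LL inputs match the model fitting test, the sample size n = 400 is illustrative, and statistical packages may use a different likelihood kernel, so these formulas need not reproduce the reported values exactly:

```python
import math

def pseudo_r2(neg2ll_null, neg2ll_full, n):
    """Cox & Snell, Nagelkerke and McFadden pseudo-R^2 statistics.

    neg2ll_null: -2 log-likelihood of the intercept-only model;
    neg2ll_full: -2 log-likelihood of the final model;
    n:           number of observations.
    """
    cox_snell = 1.0 - math.exp((neg2ll_full - neg2ll_null) / n)
    cs_max = 1.0 - math.exp(-neg2ll_null / n)  # upper bound of Cox & Snell
    nagelkerke = cox_snell / cs_max
    mcfadden = 1.0 - neg2ll_full / neg2ll_null
    return cox_snell, nagelkerke, mcfadden

# -2LL values from the model fitting test; n is illustrative.
cs, nk, mf = pseudo_r2(485.514, 235.726, 400)
```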

When the model is used as a classifier, taking max(π_{i}(x)) and max(P_{i}(x)) as the classification decision, the accuracy of the classification method is 83.3%.

## 7. Conclusions

The first model, a no-reference parametric model built with non-linear regression, provides good MOS prediction performance (R^{2} ≈ 0.9) and can be used for network planning applications, but also to obtain a fast, lightweight estimation of quality for real-time adaptation. The second model is a reduced-reference method with similar performance in terms of MOS prediction (R^{2} ≈ 0.9), but it explores the concept of quality estimation further. This technique, built upon ordinal logistic regression, is capable of predicting the distribution of user scores and thus provides a full characterization of quality beyond the simplistic, commonly used MOS statistic. This approach has not been previously applied to video quality assessment and delivers a more reliable way to assess user satisfaction and quality of experience. Future work is still required to determine the reliability of our models on other experimental datasets built with underwater video and subjective quality scores. However, no other datasets with these features are currently publicly available to perform this comparison. Further research effort is also required to increase the number of underwater video quality databases.

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

1. Evologics GmbH. S2C M HS Modem Product Information. Available online: http://www.evologics.de/en/products/acoustics/s2cm_hs.html (accessed on 13 March 2017).
2. Moreno-Roldán, J.-M.; Luque-Nieto, M.-Á.; Poncela, J.; Díaz-del-Río, V.; Otero, P. Subjective Quality Assessment of Underwater Video for Scientific Applications. Sensors **2015**, 15, 31723–31737.
3. Scott, M.J.; Guntuku, S.C.; Lin, W.; Ghinea, G. Do Personality and Culture Influence Perceived Video Quality and Enjoyment? IEEE Trans. Multimedia **2016**, 18, 1796–1807.
4. Takahashi, A.; Hands, D.; Barriac, V. Standardization activities in the ITU for a QoE assessment of IPTV. IEEE Commun. Mag. **2008**, 46, 78–84.
5. Apostolopoulos, J.G.; Reibman, A.R. The Challenge of Estimating Video Quality in Video Communication Applications [In the Spotlight]. IEEE Signal Process. Mag. **2012**, 29, 160–158.
6. Narwaria, M.; Lin, W.; Liu, A. Low-Complexity Video Quality Assessment Using Temporal Quality Variations. IEEE Trans. Multimedia **2012**, 14, 525–535.
7. Saad, M.A.; Bovik, A.C.; Charrier, C. Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain. IEEE Trans. Image Process. **2012**, 21, 3339–3352.
8. Sogaard, J.; Forchhammer, S.; Korhonen, J. Video quality assessment and machine learning: Performance and interpretability. In Proceedings of the 2015 Seventh International Workshop on Quality of Multimedia Experience (QoMEX), Pylos-Nestoras, Greece, 26–29 May 2015; pp. 1–6.
9. Kawayoke, Y.; Horita, Y. NR objective continuous video quality assessment model based on frame quality measure. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; pp. 385–388.
10. Anegekuh, L.; Sun, L.; Jammeh, E.; Mkwawa, I.H.; Ifeachor, E. Content-Based Video Quality Prediction for HEVC Encoded Videos Streamed Over Packet Networks. IEEE Trans. Multimedia **2015**, 17, 1323–1334.
11. Yamagishi, K.; Hayashi, T. Video-Quality Planning Model for Videophone Services. ITE **2008**, 62, 1050–1058.
12. You, F.; Zhang, W.; Xiao, J. Packet Loss Pattern and Parametric Video Quality Model for IPTV. In Proceedings of the Eighth IEEE/ACIS International Conference on Computer and Information Science (ICIS), Shanghai, China, 1–3 June 2009; pp. 824–828.
13. Raake, A.; Garcia, M.N.; Moller, S.; Berger, J.; Kling, F.; List, P.; Johann, J.; Heidemann, C. T-V-model: Parameter-based prediction of IPTV quality. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 1149–1152.
14. Koumaras, H.; Kourtis, A.; Martakos, D.; Lauterjung, J. Quantified PQoS assessment based on fast estimation of the spatial and temporal activity level. Multimedia Tools Appl. **2007**, 34, 355–374.
15. Ries, M.; Crespi, C.; Nemethova, O.; Rupp, M. Content based video quality estimation for H.264/AVC video streaming. In Proceedings of the 2007 IEEE Wireless Communications and Networking Conference, Kowloon, China, 11–15 March 2007; pp. 2668–2673.
16. Gustafsson, J.; Heikkila, G.; Pettersson, M. Measuring multimedia quality in mobile networks with an objective parametric model. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; pp. 405–408.
17. Khan, A.; Sun, L.; Ifeachor, E. Content-based video quality prediction for MPEG4 video streaming over wireless networks. J. Multimedia **2009**, 4, 228–239.
18. Huynh-Thu, Q.; Ghanbari, M. Temporal Aspect of Perceived Quality in Mobile Video Broadcasting. IEEE Trans. Broadcast. **2008**, 54, 641–651.
19. Ou, Y.F.; Ma, Z.; Liu, T.; Wang, Y. Perceptual Quality Assessment of Video Considering Both Frame Rate and Quantization Artifacts. IEEE Trans. Circuits Syst. Video Technol. **2011**, 21, 286–298.
20. Joskowicz, J.; Ardao, J.C.L. Combining the effects of frame rate, bit rate, display size and video content in a parametric video quality model. In Proceedings of the 6th Latin American Networking Conference, Quito, Ecuador, 12–13 October 2011; pp. 4–11.
21. Joskowicz, J.; Sotelo, R.; Ardao, J.C.L. Towards a General Parametric Model for Perceptual Video Quality Estimation. IEEE Trans. Broadcast. **2013**, 59, 569–579.
22. Opinion Model for Video-Telephony Applications. ITU-T Recommendation G.1070. 2012. Available online: https://www.itu.int/rec/T-REC-G.1070 (accessed on 14 July 2016).
23. Le Callet, P.; Viard-Gaudin, C.; Barba, D. A Convolutional Neural Network Approach for Objective Video Quality Assessment. IEEE Trans. Neural Netw. **2006**, 17, 1316–1327.
24. Edwards, C. Growing pains for deep learning. Commun. ACM **2015**, 58, 14–16.
25. Shahid, M.; Rossholm, A.; Lövström, B. A no-reference machine learning based video quality predictor. In Proceedings of the 2013 Fifth International Workshop on Quality of Multimedia Experience (QoMEX), Klagenfurt am Wörthersee, Austria, 3–5 July 2013; pp. 176–181.
26. Hameed, A.; Dai, R.; Balas, B. A Decision-Tree-Based Perceptual Video Quality Prediction Model and its Application in FEC for Wireless Multimedia Communications. IEEE Trans. Multimedia **2016**, 18, 764–774.
27. Hoßfeld, T.; Heegaard, P.E.; Varela, M. QoE beyond the MOS: Added value using quantiles and distributions. In Proceedings of the 2015 Seventh International Workshop on Quality of Multimedia Experience (QoMEX), Pylos-Nestoras, Greece, 26–29 May 2015; pp. 1–6.
28. Song, W.; Tjondronegoro, D.W. Acceptability-Based QoE Models for Mobile Video. IEEE Trans. Multimedia **2014**, 16, 738–750.
29. Methodology for the Subjective Assessment of the Quality of Television Pictures. ITU-R Recommendation BT.500-13. 2012. Available online: https://www.itu.int/rec/R-REC-BT.500 (accessed on 14 July 2016).
30. Subjective Video Quality Assessment Methods for Multimedia Applications. ITU-T Recommendation P.910. 2008. Available online: https://www.itu.int/rec/T-REC-P.910 (accessed on 14 July 2016).
31. Duda, R.O.; Hart, P. Representation and initial simplifications. In Pattern Classification and Scene Analysis; John Wiley and Sons: New York, NY, USA, 1973; pp. 271–272.
32. Bookstein, F.L. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell. **1989**, 11, 567–585.
33. Richards, F.J. A Flexible Growth Function for Empirical Use. J. Exp. Bot. **1959**, 10, 290–300.
34. McCullagh, P. Regression Models for Ordinal Data. J. R. Stat. Soc. Ser. B (Methodol.) **1980**, 42, 109–142.
35. McCullagh, P.; Nelder, J.A. Models for polytomous data. In Generalized Linear Models; Chapman & Hall: London, UK, 1989; pp. 151–155.
36. IBM Corp. IBM SPSS Statistics Base 22. Software Manual. Available online: ftp://public.dhe.ibm.com/software/analytics/spss/documentation/statistics/22.0/en/client/Manuals/ (accessed on 22 March 2017).
37. Cox, D.R.; Snell, E.J. Analysis of Binary Data, 2nd ed.; Chapman & Hall: London, UK, 1989.
38. Nagelkerke, N.J.D. A note on a general definition of the coefficient of determination. Biometrika **1991**, 78, 691–692.
39. McFadden, D. Conditional logit analysis of qualitative choice behavior. In Frontiers in Econometrics; Zarembka, P., Ed.; Academic Press: San Diego, CA, USA, 1974; pp. 105–142.
40. Hosmer, D.W.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression; John Wiley and Sons: Hoboken, NJ, USA, 2013.

**Figure 2.** NR model surfaces. (**a**) NLR.G–HVC; (**b**) NLR.A–HVC; (**c**) NLR.G–LVC; (**d**) NLR.A–LVC; (**e**) NLR.G–rLVC; (**f**) NLR.A–rLVC. Note that the bitrate axis in (**a**,**c**,**e**) has been extended to show the generalization behavior.

**Table 1.** Features of the selected video clips.

| Block | ID | B_{r} (kbps) | F_{r} (fps) | SI | TI |
|---|---|---|---|---|---|
| LVC | 01 | 8 | 1 | 23.95 | 4.46 |
| LVC | 02 | 8 | 5 | 23.87 | 4.19 |
| LVC | 03 | 8 | 10 | 24.35 | 4.38 |
| LVC | 04 | 14 | 1 | 30.35 | 4.58 |
| LVC | 05 | 14 | 5 | 27.66 | 4.16 |
| LVC | 06 | 14 | 10 | 29.35 | 6.24 |
| LVC | 07 | 20 | 1 | 41.46 | 7.18 |
| LVC | 08 | 20 | 5 | 36.87 | 9.20 |
| LVC | 09 | 20 | 10 | 39.23 | 7.13 |
| HVC | 10 | 8 | 1 | 67.13 | 15.42 |
| HVC | 11 | 8 | 5 | 75.43 | 13.96 |
| HVC | 12 | 8 | 10 | 57.69 | 13.46 |
| HVC | 13 | 14 | 1 | 71.11 | 15.92 |
| HVC | 14 | 14 | 5 | 66.52 | 13.92 |
| HVC | 15 | 14 | 10 | 76.33 | 18.05 |
| HVC | 16 | 20 | 1 | 71.11 | 15.92 |
| HVC | 17 | 20 | 5 | 60.21 | 11.20 |
| HVC | 18 | 20 | 10 | 53.95 | 10.15 |

**Table 2.** Intermediate parameters of the G.1070 model obtained by least squares approximation.

| Block | Parameter | B_{r} = 8 kbps | B_{r} = 14 kbps | B_{r} = 20 kbps |
|---|---|---|---|---|
| LVC | O_{fr} | 1.013 × 10^{−7} | −30.7385 − 7.813 × 10^{−8} i | 2.969 − 1.138 × 10^{−13} i |
| LVC | I_{Ofr} | 31.826 | 2.219 − 0.07 i | 3.336 − 1.206 × 10^{−13} i |
| LVC | D_{Fr} | 6.906 | 14.01 + 2.155 i | 1.05 − 8.857 × 10^{−14} i |
| HVC | O_{fr} | 1.878 | 1.101 | 0.682 |
| HVC | I_{Ofr} | 4.955 | 2.204 | 0.577 |
| HVC | D_{Fr} | 2.43 + 1.066 × 10^{−9} i | 1.688 − 5.507 × 10^{−9} i | 1.811 + 6.434 × 10^{−9} i |

**Table 3.** G.1070 coefficients computed for the high variation content and goodness of fit.

| v_{1} | v_{2} | v_{3} | v_{4} | v_{5} | v_{6} | v_{7} |
|---|---|---|---|---|---|---|
| 2.445 | 0.0459 | 1.946 | 7.935 | 32.431 | −0.294 | 0.094 |

| SSE | R^{2} | RMSE ^{1} |
|---|---|---|
| 36.9130 | −0.0561 | 2.5906 |

^{1} RMSE averaged over the difference between the number of samples and the number of parameters in the model.

**Table 4.** Coefficients of the NLR.G model.

| Block | A | c_{0} | c_{1} | c_{2} | ν |
|---|---|---|---|---|---|
| HVC | 6.994 | 5.569 | 0.0977 | −0.1512 | 3.623 × 10^{−4} |
| LVC | 487.1 | −1.008 | 0.05259 | −0.05686 | 5.195 × 10^{−3} |
| rLVC | 23.33 | −15.31 | 0.7495 | −1.224 | 10.37 |

**Table 5.** Coefficients of the NLR.A model.

| Block | L | K | A | B | c_{0} | c_{1} | c_{2} | ν |
|---|---|---|---|---|---|---|---|---|
| HVC | 1.291 | 3.518 | 1.539 | 2.411 | −1.952 | 0.6349 | −0.9421 | 1.013 |
| LVC | 2.505 | 7.83 | 3.864 | 11.11 | −16.62 | 3.128 | −6.671 | 0.7034 |
| rLVC | 1.933 | 2.264 | 1.362 | 4.158 | −9.609 | 1.063 | −1.906 | 5.672 |

**Table 6.** Goodness of fit for the NLR.G model.

| Block | SSE | R^{2} | RMSE ^{1} |
|---|---|---|---|
| HVC | 0.959 | 0.8809 | 0.4896 |
| LVC | 2.687 | 0.3945 | 0.9186 |
| rLVC | 0.3916 | 0.9084 | 0.4425 |

^{1} RMSE averaged over the difference between the number of samples and the number of parameters in the model.

**Table 7.** Goodness of fit for the NLR.A model.

| Block | SSE | R^{2} | RMSE ^{1} |
|---|---|---|---|
| HVC | 0.116 | 0.9856 | 0.34 |
| LVC | 1.936 | 0.5637 | 1.391 |
| rLVC | 0.2609 | 0.939 | – |

^{1} RMSE averaged over the difference between the number of samples and the number of parameters in the model.

**Table 8.** Coefficients of the OLR model.

| Category/Logit | Coefficient | Value |
|---|---|---|
| $\frac{\mathrm{bad}}{\mathrm{poor}\ \mathrm{or}\ \mathrm{better}}$ | θ_{1} | 6.839 |
| $\frac{\mathrm{poor}\ \mathrm{or}\ \mathrm{worse}}{\mathrm{fair}\ \mathrm{or}\ \mathrm{better}}$ | θ_{2} | 8.891 |
| $\frac{\mathrm{fair}\ \mathrm{or}\ \mathrm{worse}}{\mathrm{good}\ \mathrm{or}\ \mathrm{better}}$ | θ_{3} | 11.066 |
| $\frac{\mathrm{good}\ \mathrm{or}\ \mathrm{worse}}{\mathrm{excellent}}$ | θ_{4} | 13.097 |
| **Effect/Interaction** | | |
| Framerate | β_{1} | 0.333 |
| SI | β_{2} | −0.871 |
| TI | β_{3} | 0.607 |
| Bitrate × Framerate | β_{4} | −0.083 |
| Bitrate × SI | β_{5} | 0.024 |
| Framerate × SI | β_{6} | 0.090 |
| Framerate × TI | β_{7} | −0.318 |
| SI × TI | β_{8} | 0.037 |
| Bitrate × SI × TI | β_{9} | −0.002 |

**Table 9.** χ^{2} tests for the OLR model.

| Test | Model | −2 Log Likelihood | χ^{2} | df * | p |
|---|---|---|---|---|---|
| Model fitting | Intercept only | 485.514 | – | – | – |
| Model fitting | Final ** | 235.726 | 249.788 | 9 | <0.005 |
| Parallel lines | Null hypothesis ** | 235.726 | – | – | – |
| Parallel lines | General | 232.135 | 3.591 | 27 | 1.000 |

**Table 10.** R^{2} values for the OLR model.

| p-R^{2}–C&S * | p-R^{2}–N ** | p-R^{2}–M *** | R^{2}–MOS_{OLR} |
|---|---|---|---|
| 0.484 | 0.509 | 0.220 | 0.90 |

\* Cox and Snell; ** Nagelkerke; *** McFadden.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Moreno-Roldán, J.-M.; Luque-Nieto, M.-Á.; Poncela, J.; Otero, P.
Objective Video Quality Assessment Based on Machine Learning for Underwater Scientific Applications. *Sensors* **2017**, *17*, 664.
https://doi.org/10.3390/s17040664
