For (Trained) MLR with the training set used by Haus et al., the average MAPEs for the low, high, and whole sets are 30.88%, 27.59%, and 29.96%, respectively; the maxima are 88.63%, 52.73%, and 88.63%. The Trained MLR model therefore shows similar average predictive ability across the three sets, but a noticeably higher maximum error on the low set (88% vs.
52%). LOOCV MLR fared better than Trained MLR, with average errors of 21.51%, 6.10%, and 13.80%, and maxima of 70.12%, 21.26%, and 70.12% for the three sets, respectively. LOOCV MLR thus has much better predictive ability on the high end than Trained MLR (6% vs.
27%). LOOCV K-Nearest Neighbors MLR performed similarly to LOOCV MLR, with averages of 21.51%, 6.60%, and 13.80%, and maxima of 79.2%, 25.73%, and 79.2%, respectively. LOOCV ANN had slightly higher accuracy than Trained MLR for the high end, but did not improve on the previous methods: its averages were 22.98% for the low end, 7.11% for the high end, and 16.05% overall, while its maxima were 87.27%, 27.04%, and 87.27%. Continuous LOOCV MBR fared better than all of the previous methods on the low end, with an average of 10.63% and a maximum of 47.62%; for the high and overall sets, MBR's average errors were 37.57% and 23.89%, with maxima of 146.57% and 66.67%, respectively. Although Basu et al.
used ANN with a different set of oils and different attributes, we were curious to compare their results to ours. Their prediction accuracy was inferior to that of all our tested methods, with averages of 26.29%, 9.2%, and 17%, and maxima of 133.64%, 24.76%, and 133.64%, respectively. Compared to Trained MLR [1
], the errors are higher, especially the maxima. Finally, CART found the high end somewhat easier to predict, but it was still inferior to the other methods that favored the high end. CART's averages were 24.82%, 11.15%, and 17.88%, with maxima of 66.67%, 30.8%, and 66.67% for the low, high, and overall data sets, respectively.
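To make the evaluation procedure concrete, the sketch below shows how a leave-one-out MAPE can be computed for MLR, together with one plausible reading of the k-NN variant (fitting the local regression only on each held-out sample's k nearest neighbors). This is an illustrative sketch only: the function names, the choice of k, and the synthetic data are ours, not from the paper or from the oil dataset it uses.

```python
import numpy as np

def loocv_mape(X, y):
    """Leave-one-out CV for MLR: fit ordinary least squares on all
    samples but one, predict the held-out sample, and record its
    absolute percentage error."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])  # prepend intercept column
    errors = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
        errors[i] = abs(A[i] @ coef - y[i]) / abs(y[i]) * 100.0
    return errors

def loocv_knn_mlr_mape(X, y, k=8):
    """One plausible k-NN variant (an assumption, not the paper's
    definition): fit the MLR only on the k nearest neighbors
    (Euclidean distance in feature space) of each held-out sample."""
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        others = np.delete(np.arange(n), i)
        d = np.linalg.norm(X[others] - X[i], axis=1)
        nb = others[np.argsort(d)[:k]]          # k nearest neighbors
        A = np.column_stack([np.ones(k), X[nb]])
        coef, *_ = np.linalg.lstsq(A, y[nb], rcond=None)
        pred = np.concatenate([[1.0], X[i]]) @ coef
        errors[i] = abs(pred - y[i]) / abs(y[i]) * 100.0
    return errors

# Synthetic stand-in data (NOT the oil dataset discussed above)
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(30, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 5 + rng.normal(0, 0.5, 30)
full = loocv_mape(X, y)
local = loocv_knn_mlr_mape(X, y)
print(f"LOOCV MLR     avg {full.mean():.2f}%  max {full.max():.2f}%")
print(f"LOOCV kNN-MLR avg {local.mean():.2f}%  max {local.max():.2f}%")
```

The per-sample errors returned by either function can then be averaged (or maximized) separately over the low, high, and whole subsets to reproduce the kind of summary statistics reported above.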