Article

Automated Fitting Process Using Robust Reliable Weighted Average on Near Infrared Spectral Data Analysis

by
Divo Dharma Silalahi
1,
Habshah Midi
2,3,*,
Jayanthi Arasan
2,3,
Mohd Shafie Mustafa
2,3 and
Jean-Pierre Caliman
1
1
SMART Research Institute (SMARTRI), PT. SMART TBK, Pekanbaru 28289, Indonesia
2
Institute for Mathematical Research, Universiti Putra Malaysia (UPM), Serdang 43400, Malaysia
3
Department of Mathematics, Faculty of Science, Universiti Putra Malaysia (UPM), Serdang 43400, Malaysia
*
Author to whom correspondence should be addressed.
Symmetry 2020, 12(12), 2099; https://doi.org/10.3390/sym12122099
Submission received: 12 November 2020 / Revised: 6 December 2020 / Accepted: 9 December 2020 / Published: 17 December 2020

Abstract:
With the complexity of Near Infrared (NIR) spectral data, the selection of the optimal number of Partial Least Squares (PLS) components in the fitted Partial Least Squares Regression (PLSR) model is very important. Selecting too few PLS components leads to underfitting, whereas selecting too many results in overfitting. Several selection procedures exist, and each yields a different result; however, no method has so far been shown to be superior to the others. In addition, the current methods are susceptible to the presence of outliers and High Leverage Points (HLP) in a dataset. In this study, a new automated fitting process for the PLSR model is introduced. The method is called the Robust Reliable Weighted Average-PLS (RRWA-PLS), and it is less sensitive to the choice of the number of PLS components. The RRWA-PLS uses a weighted average strategy over multiple PLSR models generated by different complexities of the PLS components. The method assigns robust procedures in the weighting schemes as an improvement to the existing Weighted Average-PLS (WA-PLS) method. The weighting schemes in the proposed method are resistant to outliers and HLP and thus preserve the contribution of the most relevant variables in the fitted model. The evaluation was carried out using artificial data with a Monte Carlo simulation and NIR spectral data of oil palm (Elaeis guineensis Jacq.) fruit mesocarp. Based on the results, the method improves the weight and variable selection procedures of the WA-PLS and is resistant to the influence of outliers and HLP in the dataset. The RRWA-PLS method provides a promising robust solution for the automated fitting process in the PLSR model since, unlike the classical PLS, it does not require the selection of an optimal number of PLS components.

1. Introduction

Near Infrared Spectroscopy (NIRS) has recently attracted a lot of attention as a secondary analytical tool for quality control of agricultural products. In several applications (see [1,2,3,4,5]), it has been proven that NIRS offers a non-destructive, reliable, accurate, and rapid tool, particularly for quantitative and qualitative assessments. Theoretically, NIRS is a type of vibrational spectroscopy that produces rich information in a spectral dataset as a result of the interaction between optical light and the physical matter of the sample. These spectra are commonly presented in terms of spectral absorbance over a wide wavelength range from 350 nm to 2500 nm, primarily attributed to the overtone or combination bands of C-H (fats, oil, hydrocarbons), O-H (water), and N-H (protein) [6]. NIR spectral data are classified as high dimensional due to the large sample size and the wide wavelength range collected as a dataset. In spectral processing, chemometric methods have been utilized as the standard processing approach (see [7,8,9]). These methods combine mathematical and multivariate statistical methods in order to pre-process, examine, and extract as much relevant information as possible from the spectral data. Among the existing chemometric methods, Partial Least Squares Regression (PLSR) appears to be the most preferred one [10,11,12].
PLSR decomposes both the spectral and reference information (from wet chemistry analysis) simultaneously. It has the ability to screen unwanted samples in a dataset arising from experimental error and instrumentation problems [13], it is distribution-free [14,15], and it handles multicollinearity in the dataset [16]. However, despite these benefits, several studies have reported its weakness in terms of robustness. The fitted model performs poorly when outliers and leverage points are present in a dataset [17,18], and it fails to fit nonlinear behavior in the input space [19,20]. In addition, the contamination by irrelevant variables involved during the fitting process [21,22,23] is a popular topic in most discussions. However, so far, less attention has been paid to a basic principle of PLSR, the selection of the optimal number of Partial Least Squares (PLS) components, which is crucial [24]. Applying too few components produces underfitting, while applying too many components results in overfitting. Some methods available for the selection procedure are cross-validation with the one-sigma heuristic [25], the permutation approach [26], the bootstrap [27], smoothed PLS-PoLiSh [28], the weight randomization test [29], and Monte Carlo resampling [30]. These different methods suggest different optimal numbers of PLS components, and to date, no claim has been made as to which method is superior to the others. These methods also suffer from the presence of outliers and High Leverage Points (HLP) in the dataset. Consequently, recalculation of the number of PLS components used in the model is required each time the dataset is updated. This results in different accuracy achievements and, sometimes, misleading interpretations. Only a few studies have highlighted a robust process. As such, a robust PLSR with less sensitivity to the selection of the optimal number of PLS components is needed.
This study provides another perspective, applying a robust procedure in the PLSR model with regard to the selection of the number of factors used in the fitted model.
An automated fitting process for the PLSR model using a weighted average strategy has been introduced in several papers (see [31,32,33]). The method is known as the Locally Weighted Average PLS [31,32], or simply the Local-WA-PLS. The Local-WA-PLS is an extension of Locally Weighted Regression [33], which fits a local linear regression based on the similarity between the calibration and testing (or unknown) samples. This similarity is classified using the well-known Euclidean distance and Mahalanobis distance methods. Although the Local-WA-PLS has been widely used, it has been reported that the method works adequately only with a large spectral dataset (see [34,35]). As an improvement, the modified method by Zhang et al. [35], the Weighted Average PLS (WA-PLS), is suggested, as it uses a different weighting scheme that is computationally simpler and comparable to the Local-WA-PLS. However, neither method is able to prevent outliers and HLP that may exist in the dataset from affecting its performance. In addition, the Local-WA-PLS and WA-PLS do not take into consideration the influence of irrelevant variables in the model, which may decrease their estimation accuracy. This motivated the current study to propose another improvement that robustifies the existing WA-PLS procedure. Our strategy was to employ weighting schemes that are resistant to outliers and HLP and preserve the contribution of the most relevant variables in the fitted model. The robust PLSR [36] is incorporated in the establishment of the proposed procedures.
The main objectives of this study are: (1) to establish an improved procedure for the automated fitting process in the PLSR model, known as the Robust Reliable Weighted Average PLS (RRWA-PLS), which is expected to be less sensitive to the selection of the optimal number of PLS components; (2) to evaluate the performance of the proposed RRWA-PLS method against the classical PLSR using the optimal number of PLS components, the WA-PLS, and a slight modification of the WA-PLS using a robust weight procedure, called MWA-PLS; (3) to apply the proposed method to artificial data and to NIR spectra of oil palm (Elaeis guineensis Jacq.) fruit mesocarp (fresh and dried ground). This study provides a significant contribution to the development of process control, particularly for research methodology in the vibrational spectroscopy area.

2. Materials and Methods

2.1. Partial Least Squares Regression

The PLSR model [14] is an iterative procedure in multivariate statistics. The method reduces the $m$ original predictor variables $\mathbf{X}$, which may suffer from multicollinearity, to a smaller set of $l$ uncorrelated new variables called components. The PLSR then constructs a regression model of the new components against the response variable $\mathbf{y}$ through the covariance structure of these two spaces. In chemometric analysis, the PLSR has been widely used for dimension reduction of the high-dimensionality problem in NIR spectral datasets (see [37,38]). In this study, we limit the study to the case $n \gg m$, where $n$ refers to the number of observations and $m$ represents the number of predictor variables.
Let us define a multiple regression model which consists of a set of multiple predictors $\mathbf{X}$ and a single response $\mathbf{y}$,

$$\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{e} \quad (1)$$

where $\mathbf{y}$ and $\mathbf{e}$ are $n \times 1$ vectors, $\mathbf{X}$ is an $n \times m$ matrix, and $\mathbf{b}$ is an $m \times 1$ vector. Since the dataset contains a high dimension of $m$ predictors, there is an infinite number of solutions for the estimator $\mathbf{b}$. As $\mathbf{X}^{T}\mathbf{X}$ is singular, it does not meet the usual rank condition of classical regression. To overcome this, new latent variables need to be produced by summarizing the covariance between the predictor $\mathbf{X}$ and the response variable, associated with the centered values of these two sets [39].
Initializing a starting score vector $\mathbf{u}$ from the single $\mathbf{y}$, there exists an outer relation for the predictor $\mathbf{X}$ in Equation (1):

$$\mathbf{X} = \mathbf{V}\mathbf{P}^{T} + \mathbf{E} \quad (2)$$

where $\mathbf{V}$ is an $n \times l$ (for $l \le m$) matrix whose columns are the $n \times 1$ score vectors $\{\mathbf{v}_g = \mathbf{X}\mathbf{w}_g/(\mathbf{w}_g^{T}\mathbf{w}_g)\}_{g=1}^{l}$ of the scores of $\mathbf{X}$; $\mathbf{P}$ is an $m \times l$ matrix whose columns are the loading vectors $\{\mathbf{p}_g = \mathbf{X}^{T}\mathbf{v}_g/(\mathbf{v}_g^{T}\mathbf{v}_g)\}_{g=1}^{l}$; $\mathbf{w} = \mathbf{X}^{T}\mathbf{u}/(\mathbf{u}^{T}\mathbf{u})$ is an $m \times 1$ weight vector for $\mathbf{X}$; and $\mathbf{E}$ is the $n \times m$ matrix of residuals of the outer relation for $\mathbf{X}$. In addition, there is a linear inner relation between the $\mathbf{X}$ and $\mathbf{y}$ block scores, calculated as $\{\mathbf{u} = b_g \mathbf{v}_g,\ b_g = \mathbf{u}^{T}\mathbf{v}_g/(\mathbf{v}_g^{T}\mathbf{v}_g)\}_{g=1}^{l}$, or written as

$$\mathbf{u} = \mathbf{V}\,\mathbf{b}_{inner} + \mathbf{g} \quad (3)$$

where $\mathbf{b}_{inner}$ is an $l \times 1$ vector of regression coefficients obtained by Ordinary Least Squares (OLS) on the decomposition of the vector $\mathbf{u}$, and $\mathbf{g}$ is the $n \times 1$ vector of residuals of the inner relation. Following the Nonlinear Iterative Partial Least Squares (NIPALS) algorithm (see [14]), the mixed relation in the PLSR model can be defined as

$$\mathbf{y} = \mathbf{X}\,\mathbf{b}_{PLSR} + \mathbf{f} \quad (4)$$

where $\mathbf{b}_{PLSR} = \mathbf{W}(\mathbf{P}^{T}\mathbf{W})^{-1}\mathbf{a}$ is an $m \times 1$ coefficient vector; $\mathbf{a} = \mathbf{V}^{T}\mathbf{y}$ is an $l \times 1$ coefficient vector; and $\mathbf{f}$ denotes the $n \times 1$ vector of residuals of the mixed relation, which has to be minimized. The estimator of the parameter $\mathbf{b}_{PLSR}$ is given as

$$\hat{\mathbf{b}}_{PLSR} = \mathbf{X}^{T}\mathbf{u}\,(\mathbf{V}^{T}\mathbf{X}\mathbf{X}^{T}\mathbf{u})^{-1}\mathbf{V}^{T}\mathbf{y}, \qquad \hat{\mathbf{b}}_{PLSR} \in \mathbb{R}^{m \times 1} \quad (5)$$

where $\hat{\mathbf{b}}_{PLSR}$ denotes the $m$-dimensional vector of regression coefficients in the PLSR.
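The outer, inner, and mixed relations above can be sketched in a few lines. The following is a minimal single-response NIPALS implementation for illustration only, assuming $\mathbf{X}$ and $\mathbf{y}$ are already centered; it is not the exact code used in this study.

```python
import numpy as np

def nipals_pls(X, y, n_components):
    """Minimal single-response NIPALS PLSR sketch.

    Assumes X (n x m) and y (length n) are already centered. Returns the
    m-vector of regression coefficients b_PLSR = W (P^T W)^{-1} a.
    """
    Xr, yr = X.copy(), y.copy()
    W, P, A = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr                      # weight vector for X (u = y for one response)
        w /= np.linalg.norm(w)
        v = Xr @ w                         # score vector v_g
        p = Xr.T @ v / (v @ v)             # loading vector p_g
        a = yr @ v / (v @ v)               # inner-relation coefficient b_g
        Xr = Xr - np.outer(v, p)           # deflate X (outer relation)
        yr = yr - a * v                    # deflate y (inner relation)
        W.append(w); P.append(p); A.append(a)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(A))
```

For centered data, predictions are then simply `X @ nipals_pls(X, y, l)`.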

2.2. Partial Robust M-Regression

An alternative robust version of PLSR, introduced by Serneels et al. [36], is the partial robust M-regression, or simply PRM-regression. The method assigns a generalized weight function $w_i$ using a modified robust M-estimate [40]. This weight is obtained from the iterative reweighting scheme (see [41]) to identify the outliers and HLP, both in each observation and in each score vector $\mathbf{v}_i$. Let us consider the regression in Equation (1); for $1 \le i \le n$, the least squares estimator of $\mathbf{b}$ is defined as

$$\hat{\mathbf{b}}_{LS} = \arg\min_{\mathbf{b}} \left( \sum_{i=1}^{n} (y_i - \mathbf{x}_i\mathbf{b})^2 \right) \quad (6)$$

Least squares is optimal if $E(e) = 0$ and $\mathrm{Var}(e) = 1$, i.e., $e \sim N(0, 1)$; otherwise, the normality assumption is not satisfied, least squares loses its optimality, and a robust estimator such as an M-estimate yields a better solution.
In Serneels et al. [36], the robust M-estimate generalizes the squared term to a loss function $\theta(u)$, giving

$$\hat{\mathbf{b}}_{M} = \arg\min_{\mathbf{b}} \left( \sum_{i=1}^{n} \theta(y_i - \mathbf{x}_i\mathbf{b}) \right) \quad (7)$$

where, for least squares, $\theta(u) = u^2$ so that $\theta(y_i - \mathbf{x}_i\mathbf{b}) = (y_i - \mathbf{x}_i\mathbf{b})^2$; in general, $\theta(u)$ is a loss function that is symmetric and nondecreasing. Recalling the residual $n \times 1$ column vector $\{e_i = y_i - \mathbf{x}_i\mathbf{b}\}_{i=1}^{n}$ related to Equation (7), we have $\hat{\mathbf{b}}_M = \arg\min_{\mathbf{b}} \left( \sum_{i=1}^{n} \theta(e_i) \right)$. Using partial derivatives and following the iterative reweighting scheme, there exists a weight for each observation, $w_i^r = \theta(e_i)/e_i^2$; taking $\theta(e_i) = w_i^r e_i^2$, Equation (7) can be rewritten as

$$\hat{\mathbf{b}}_{M} = \arg\min_{\mathbf{b}} \left( \sum_{i=1}^{n} w_i^r\, e_i^2 \right) \quad (8)$$
The weight in Equation (8) is sensitive only to vertical outliers; as an improvement, another weight $w_i^x$ is added to identify the leverage points, with $w_i^x \approx 0$ indicating a leverage point. The modified final estimator of Equation (8) is given as

$$\hat{\mathbf{b}}_{RM} = \arg\min_{\mathbf{b}} \left( \sum_{i=1}^{n} w_i^r\, w_i^x\, e_i^2 \right) \quad (9)$$

where $w_i = w_i^r w_i^x$ is the generalized weight. Replacing the residual in Equation (9) with the $n \times 1$ vector of residuals in Equation (4) gives the solution of the partial robust M-regression:

$$\hat{\mathbf{b}}_{PRM} = \arg\min_{\mathbf{b}} \left( \sum_{i=1}^{n} w_i^r\, w_i^x\, f_i^2 \right) \quad (10)$$
with the weights $w_i^r$ and $w_i^x$ given as

$$w_i^r = f\!\left( \frac{f_i}{\hat{\sigma}},\, c \right) \quad (11)$$

where $\hat{\sigma}$ uses the robust $\mathrm{MAD}(f_1, \ldots, f_n) = \mathrm{median}_i\,|f_i - \mathrm{median}_j\, f_j|$, and $f(z, c)$ is the weight function of the iterative reweighting, and

$$w_i^x = f\!\left( \frac{\| \mathbf{v}_i - \mathrm{med}_{L1}(\mathbf{V}) \|}{\mathrm{median}_i\, \| \mathbf{v}_i - \mathrm{med}_{L1}(\mathbf{V}) \|},\, c \right) \quad (12)$$

where $\|\cdot\|$ is the Euclidean norm; $\mathrm{med}_{L1}(\mathbf{V})$ is a robust ($L_1$-median) estimator of the center of the $l$-dimensional score vectors; and $\mathbf{v}_i = (v_{i,1}, \ldots, v_{i,l})^{T}$ is the $i$-th row of the component score matrix $\mathbf{V}$ that needs to be estimated. The fair weight function is preferred for $f(z, c)$ over other weight functions.
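As an illustration, the sketch below computes the generalized weights $w_i = w_i^r\, w_i^x$ with the fair weight function $f(z, c) = 1/(1 + |z/c|)^2$. The tuning constant $c = 4$ and the coordinatewise median (as a simple stand-in for the $L_1$-median) are assumptions made here for brevity, not details prescribed above.

```python
import numpy as np

def fair_weight(z, c=4.0):
    """Fair weight function f(z, c) = 1 / (1 + |z / c|)^2."""
    return 1.0 / (1.0 + np.abs(z / c)) ** 2

def prm_weights(residuals, scores, c=4.0):
    """Sketch of the generalized PRM weights w_i = w_i^r * w_i^x.

    residuals: n-vector of residuals f_i from the current fit.
    scores:    n x l score matrix V.
    """
    residuals = np.asarray(residuals, dtype=float)
    # residual weights, scaled by the robust MAD of the residuals
    sigma = np.median(np.abs(residuals - np.median(residuals)))
    wr = fair_weight(residuals / sigma, c)
    # leverage weights: distance of each score vector from a robust center;
    # the coordinatewise median stands in for the L1-median here
    center = np.median(scores, axis=0)
    d = np.linalg.norm(scores - center, axis=1)
    wx = fair_weight(d / np.median(d), c)
    return wr * wx
```

A gross vertical outlier or a score vector far from the robust center receives a weight close to zero and is thus downweighted in the next reweighted fit.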

2.3. Weighted Average PLS

The WA-PLS method was introduced by Zhang et al. [35] to counter the sensitivity of PLSR to the specific number of PLS components used. The method applies an averaging strategy to accommodate the whole range of possible model complexities; that is, models are initiated for each number of PLS components from the $r$-th to the $s$-th. Instead of applying the same weight to each PLSR model, the WA-PLS assigns a different weight $w_r$, using variance weighting, to each coefficient vector $\mathbf{b}_{PLSR} = [b_1, b_2, \ldots, b_m]$ of the $d$ PLSR models ($d = s - r$) with complexity $r$.
$$w_r = \frac{1}{RMSECV_r} \quad (13)$$

where the Root Mean Square Error of Cross-Validation (RMSECV) for each number $r$ of PLS components is calculated as

$$RMSECV_r = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_{\backslash i,r} \right)^2} \quad (14)$$

where $\hat{y}_{\backslash i,r}$ is the predicted value of the actual value $y_i$ using the fitted model which is built without sample $i$ under the complexity $r$. The WA-PLS formula, using the weights and averaging from the $r$-th to the $s$-th number of PLS components, can then be written as

$$\hat{\bar{y}}_{WA\text{-}PLS(r,s)} = \frac{w_r b_{0,r} + \cdots + w_s b_{0,s}}{w_r + \cdots + w_s} + \frac{w_r b_{1,r} + \cdots + w_s b_{1,s}}{w_r + \cdots + w_s}\, x_1 + \frac{w_r b_{2,r} + \cdots + w_s b_{2,s}}{w_r + \cdots + w_s}\, x_2 + \cdots + \frac{w_r b_{m,r} + \cdots + w_s b_{m,s}}{w_r + \cdots + w_s}\, x_m \quad (15)$$
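The WA-PLS weighting and averaging above can be sketched as follows. This illustration assumes centered data (so the intercept term $b_0$ drops out) and substitutes $k$-fold cross-validation for the leave-one-out scheme; it is a simplified sketch, not the authors' implementation.

```python
import numpy as np

def pls1_coef(X, y, l):
    """Minimal PLS1 coefficient vector with l components (X, y centered)."""
    Xr, yr = X.copy(), y.copy()
    W, P, A = [], [], []
    for _ in range(l):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        v = Xr @ w
        p = Xr.T @ v / (v @ v)
        a = yr @ v / (v @ v)
        Xr, yr = Xr - np.outer(v, p), yr - a * v
        W.append(w); P.append(p); A.append(a)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(A))

def wa_pls(X, y, r, s, k=5, seed=0):
    """WA-PLS sketch: average the coefficient vectors of the models with
    r..s components, each weighted by w = 1 / RMSECV.
    k-fold CV stands in here for the leave-one-out scheme."""
    n = len(y)
    folds = np.array_split(np.random.default_rng(seed).permutation(n), k)
    coefs, weights = [], []
    for l in range(r, s + 1):
        press = 0.0
        for fold in folds:
            train = np.setdiff1d(np.arange(n), fold)
            b = pls1_coef(X[train], y[train], l)
            press += np.sum((y[fold] - X[fold] @ b) ** 2)
        weights.append(1.0 / np.sqrt(press / n))   # w_r = 1 / RMSECV_r
        coefs.append(pls1_coef(X, y, l))
    weights = np.array(weights)
    return (weights[:, None] * np.array(coefs)).sum(axis=0) / weights.sum()
```

Models with poor cross-validated error receive small weights, so no single choice of the number of components has to be made.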

3. Robust Reliable Weighted Average

Following Zhang et al.'s [35] weighted average calculation on each coefficient of the $d$ different numbers of PLS components, a robust version of the modified weighted average is developed. The method is called the Robust Reliable Weighted Average (RRWA), and it accommodates two weights ($w_r$, $c_j$) in the calculation of the PLSR model. It is expected that, by assigning the weighted average method in the PLSR model, the model becomes less sensitive to the number of PLS components used.
For the first weight $w_r$, the calculation uses the Standard Error of Prediction (SEP), computed iteratively through the resampling procedure of $k$-fold cross-validation, which splits a dataset into $k$ subsets [42]. This procedure is the most widely used approach for retrieving a good estimate of the error rate in model selection. Nonetheless, the 20% highest absolute residuals may still be included in the calculation of $w_r$. To remove those residuals, the 20% trimmed version of the SEP from cross-validation ($trimmed\ SEPCV_r$) is applied. The weight $w_r$ assigned to each coefficient of the $d$ different numbers of PLS components is calculated as
$$w_r = \frac{1}{trimmed\ SEPCV_r} \quad (16)$$
where the $trimmed\ SEPCV_r$ values are calculated from the collection of the $SEP_r$ over the $k$ subsets, from the $r$-th to the $s$-th number of PLS components. The calculation of $SEP_r$ is given as

$$SEP_r = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( e_{ir} - \bar{e}_r \right)^2} \quad (17)$$

where $e_{ir}$ is the residual between the predicted value $\hat{y}_{\backslash i,r}$ and the actual value $y_i$ under the complexity $r$, and $\bar{e}_r$ is the arithmetic mean of the residuals. This corresponds to $MSEP_r = SEP_r^2 + \bar{e}_r^2$; when the bias is identically equal to 0, $MSEP_r$ equals $SEP_r^2$. While the bias is (almost) zero, the square root of $MSEP_r$, which is

$$RMSEP_r = \sqrt{\frac{1}{n} \sum_{i=1}^{n} e_{ir}^2} \quad (18)$$

is (almost) equal to $SEP_r$. This alternative weight can be regarded as a modified weight in the WA-PLS, simply denoted the MWA-PLS method, which is also included as an alternative proposed method in this study.
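A minimal sketch of the 20% trimmed SEP described above is shown below; exposing the trimming fraction as a parameter is a convenience assumed here for illustration.

```python
import numpy as np

def trimmed_sep(residuals, trim=0.20):
    """Trimmed SEP: drop the `trim` fraction of largest absolute
    residuals, then compute the standard error of prediction on the rest."""
    e = np.asarray(residuals, dtype=float)
    keep = int(np.ceil((1.0 - trim) * e.size))
    kept = e[np.argsort(np.abs(e))[:keep]]        # discard the largest |e_i|
    return np.sqrt(np.sum((kept - kept.mean()) ** 2) / (kept.size - 1))
```

The robust weight is then simply `1.0 / trimmed_sep(residuals)`, so a handful of gross residuals cannot inflate the SEP and deflate the weight of an otherwise good model.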
In the classical WA-PLS, a number of possibly irrelevant variables are still involved in the model, while eliminating these variables outright could result in under- or overfitting. Here, a downgrading procedure is proposed that assigns a second weight $c_j$ to each variable in terms of reliability values [21]. The procedure is based on the PLSR coefficients and serves to increase the contribution of the most relevant variables in the model while downgrading the irrelevant ones. The reliability of each variable, $c_j$, is obtained by

$$c_j = \frac{\mathrm{median}\,(b_{j,r}, \ldots, b_{j,s})}{\mathrm{MAD}\,(b_{j,r}, \ldots, b_{j,s})} \quad (19)$$

where the calculation is based on a robust measure of central tendency and a robust measure of variability of the $j$-th WA-PLS coefficient across the $r$-th to $s$-th numbers of PLS components. The robust weight $w_r$ in Equation (16) is preferred over the weight in Equation (13). In relation to the PLSR model, the reliability values $c_j$ are arranged in an $m \times m$ diagonal matrix $\Omega = \mathrm{diag}(c_1, c_2, \ldots, c_m)$, which is then used to transform the original input variables $\mathbf{X}$ into the scaled input variables $\tilde{\mathbf{X}} = \mathbf{X}\,\Omega$ for the RRWA-PLS model.
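The reliability values $c_j$ and the construction of $\tilde{\mathbf{X}}$ above can be sketched as follows. The absolute value of the median, the guard against a zero MAD, and the rescaling of $c_j$ to a maximum of 1 are assumptions made here for numerical convenience, not details prescribed by the method.

```python
import numpy as np

def reliability_weights(B):
    """Sketch of the reliability values c_j. B is a (s - r + 1) x m array
    whose rows are the coefficient vectors for r..s PLS components; a
    variable whose coefficient is large and stable across component
    counts gets a high weight, a jittery one is downgraded."""
    med = np.median(B, axis=0)                 # robust central tendency
    mad = np.median(np.abs(B - med), axis=0)   # robust variability (MAD)
    c = np.abs(med) / np.maximum(mad, 1e-12)   # guard against a zero MAD
    return c / c.max()                         # rescale so max(c_j) = 1

# Scaling the inputs columnwise, X_tilde = X * reliability_weights(B),
# is equivalent to multiplying X by the diagonal matrix Omega = diag(c).
```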
To prevent the influence of outliers and HLP that may exist in the NIR spectral dataset, the calculation of t r i m m e d   S E P C V r and reliability values are based on the PRM regression coefficient through a cross-validation procedure. The proposed modification of the WA-PLS known as the RRWA-PLS can be rewritten as
$$\hat{\bar{y}}_{RRWA\text{-}PLS(r,s)} = \frac{w_r b_{0,r} + \cdots + w_s b_{0,s}}{w_r + \cdots + w_s} + \frac{w_r b_{1,r} + \cdots + w_s b_{1,s}}{w_r + \cdots + w_s}\, \tilde{x}_1 + \frac{w_r b_{2,r} + \cdots + w_s b_{2,s}}{w_r + \cdots + w_s}\, \tilde{x}_2 + \cdots + \frac{w_r b_{m,r} + \cdots + w_s b_{m,s}}{w_r + \cdots + w_s}\, \tilde{x}_m \quad (20)$$
where b j is the RRWA-PLS coefficient using the scaled input variables X ˜ .

4. Monte Carlo Simulation Study

To examine the performance of the proposed RRWA-PLS and to compare it with the classical WA-PLS and MWA-PLS, a Monte Carlo simulation study was carried out. Following the simulation study by Kim [43], an artificial dataset was randomly generated from Uniform distributions, with added noise following the Normal distribution. This dataset was then applied in a linear combination equation under different scenarios. Three sample sizes ($n$ = 60, 200, 400), three numbers of predictor variables ($m$ = 41, 101, 201), three levels of relevant variables ($IV$ = 0.1, 0.3, 0.5), and three levels of outliers and high leverage points ($\alpha$ = 0.00, 0.05, 0.20) were considered. A fraction $100(IV)\%$ of the predictor variables were randomly selected as relevant variables, and the remaining ones were considered less relevant. The formulation of this simulation can be defined as follows:
$$m = m_o + m_e$$
$$c_{j_o} \sim U(1, 10), \quad j_o = 1, 2, \ldots, m_o$$
$$c_{e_{j_e}} \sim U(5, 20), \quad j_e = 1, 2, \ldots, m_e$$
$$e_j \sim N(0, 1), \quad j = 0, 1, 2, \ldots, m$$
$$\mathbf{b} \sim U(0, 7)$$
$$iv = \{ iv_1, iv_2, \ldots, iv_{100(IV)\%m} \}$$
$$\mathbf{X} = \{ c_{j_o}, c_{e_{j_e}} \} + e_j, \quad j = 1, \ldots, m;\; j_o = 1, \ldots, m_o;\; j_e = 1, \ldots, m_e$$
$$\mathbf{y} = \mathbf{X}\mathbf{b} + e_0, \quad i = 1, \ldots, n;\; j = iv_1, iv_2, \ldots, iv_{100(IV)\%m}$$
where $m$ is the total number of predictors used; $m_o$ is the number of observable variables; and $m_e$ $\{m_e = (m - 100(IV)\%m)/2\}$ is the number of artificial noise variables. These artificial variables are used to evaluate the stability of the methods. The $c_{j_o}$ follow the Uniform distribution $U(1, 10)$ with size $n$. The artificial noise variables $c_{e_{j_e}}$ are added to the predictors and follow the Uniform distribution $U(5, 20)$ with size $n$; the $c_{e_{j_e}}$ are classified as irrelevant variables. The $e_j$ follow the standard Normal distribution with size $n$, and $\mathbf{b}$ is a vector of coefficients for the selected relevant variables, which follows the Uniform distribution $U(0, 7)$ with size $m$. The $c_{j_o}$, $c_{e_{j_e}}$, and $e_j$ are independent of each other. The $iv$ is the set of selected relevant variables in $m_o$, and $e_0$ is the error added in the linear combination for $\mathbf{y}$. $\mathbf{X}$ and $\mathbf{y}$ are the observable variables. The high leverage points in the $\mathbf{X}$ dimensions are created by generating $c_{j_o}$ following the Uniform distribution $U(1, 10)$ with size $n$. Corresponding to the vertical outliers, if an observation is considered an outlier, $\mathbf{b}$ follows the Uniform distribution $U(0, 2)$ with size $100(IV)\%m$; otherwise, it is considered a high leverage point and $\mathbf{b}$ follows the Uniform distribution $U(1, 7)$ with size $100(IV)\%m$. The different ranges of the Uniform distribution are used to fit the different scenarios of added artificial noise, vertical outliers, and high leverage points in the dataset. By default, the predictor and response variables should be centered and scaled before the analysis. In the PLSR model, the selection of the optimal number of PLS components used in the model fitting is very important to prevent over- or under-prediction.
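A simplified generator in the spirit of the scheme above is sketched below. The split of the non-relevant variables into observable and artificial-noise halves, and the outlier/HLP variants, are omitted here for brevity; all non-relevant variables are drawn from $U(5, 20)$.

```python
import numpy as np

def simulate_dataset(n=60, m=41, iv=0.1, seed=0):
    """Sketch of the artificial-data scheme: relevant predictors from
    U(1, 10), irrelevant noise predictors from U(5, 20), N(0, 1) noise
    added to every column, and y a linear combination of the randomly
    selected relevant variables with U(0, 7) coefficients."""
    rng = np.random.default_rng(seed)
    n_rel = int(round(iv * m))                      # 100(IV)% relevant variables
    rel = rng.choice(m, size=n_rel, replace=False)  # indices of relevant variables
    mask = np.zeros(m, dtype=bool)
    mask[rel] = True
    X = np.empty((n, m))
    X[:, mask] = rng.uniform(1, 10, size=(n, n_rel))
    X[:, ~mask] = rng.uniform(5, 20, size=(n, m - n_rel))
    X += rng.normal(size=(n, m))                    # added noise e_j ~ N(0, 1)
    b = rng.uniform(0, 7, size=n_rel)               # coefficients of relevant variables
    y = X[:, mask] @ b + rng.normal(size=n)         # added error e_0 ~ N(0, 1)
    return X, y, rel
```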
To assess the performance of the methods, several statistical measures are used as desirability indices: the Root Mean Square Error (RMSE), the Coefficient of Determination (R2), and the Standard Error (SE). The RMSE measures the absolute error of the predicted model; R2 is the proportion of variation in the data summarized by the model and indicates the reliability of the goodness of fit; and SE measures the uncertainty in the prediction. The RPD statistic is no longer used here because it is no different from R2 for classifying whether a model is poor or not [44]. Using the classical PLSR, the RMSECV, which is the RMSE obtained through cross-validation, is calculated along with the increasing number of PLS components. The RMSEP value is the RMSE obtained using the fitted model. In the simulation study, the maximum number of PLS components was limited to 20. Different scenarios were applied to examine the stability of the classical PLSR model with respect to sample size, number of predictors, number of important variables, and contamination by outliers and high leverage points in the dataset. In Figure 1, with no contamination in the data, it can be seen that with a small sample size ($n$ = 60), a small number of predictors ($m$ = 41), and 10% relevant variables ($IV$ = 10%), the discrepancy between RMSECV and RMSEP is about two to five times; with a larger number of predictors ($m$ = 101), the discrepancy becomes larger still. In another scenario, with a bigger sample size ($n$ = 200), a small number of predictors ($m$ = 41), and 30% relevant variables ($IV$ = 30%), the discrepancy between RMSECV and RMSEP is relatively smaller, while with a larger number of predictors ($m$ = 101) the discrepancy roughly doubles. This shows that the classical PLS becomes unstable and loses accuracy when the sample size is small and the number of predictors exceeds the sample size.
In addition, a smaller number of relevant variables among the predictors also decreases the model accuracy. With a bigger sample size (for example, $n$ = 200), as the number of PLS components increases, the discrepancy between RMSECV and RMSEP becomes smaller, which improves the model accuracy and reliability. The rule is that the gap between the RMSECV and RMSEP values should be very small and close to 0. This condition guarantees the reliability of the calibrated model and prevents over- or underfitting.
The stability of the classical PLSR model is then evaluated by introducing outliers and leverage points into the dataset (see Figure 2). Under the given scenarios, the classical PLSR model failed to converge even with a higher number of PLS components. This can be seen from the RMSECV values, which become large and fail to reach a minimum. In addition, the discrepancy between the RMSECV and RMSEP values is also large. This gives evidence that the presence of outliers and HLP in the dataset destroys the convergence and results in poor model fitting.
In the proposed RRWA-PLS, the 20% trimmed SEP was used to calculate the weight by removing the 20% highest absolute residuals. This procedure is suggested to produce a robust weight instead of using all the residuals. In the calculation of the trimmed SEP for each PLS component through the cross-validation procedure, the median is preferred. In general, across the different dataset scenarios with contamination by outliers and HLP (see Figure 3), the proposed robust trimmed SEP median succeeds in removing the influence of the contamination by discarding the 20% highest absolute residuals. The SEP mean suffers under both small ($n$ = 60) and bigger ($n$ = 200) sample sizes due to the contamination, with SEP values two to four times greater than those of the trimmed SEP median. The SEP median loses its advantage when the bigger sample size ($n$ = 200) is used, with SEP values four times greater than those of the trimmed SEP median. The SEP values using the trimmed SEP median are lower than those of the trimmed SEP mean, thus improving model accuracy. This demonstrates the robustness of the trimmed SEP median in the weight calculation, irrespective of sample size, number of important variables, and percentage of contamination by outliers and HLP in the dataset.
It is very important to compare the weighting schemes of the WA-PLS and RRWA-PLS. The weight quantifies the contribution of the predictors based on the aggregation of the PLS components used in the model. Figure 4 shows the mean weights of the methods under two conditions: no contamination, and contamination by outliers and high leverage points. With no contamination, the weights in both the WA-PLS and RRWA-PLS methods increase as the number of PLS components increases, and the weight of the RRWA-PLS is relatively smaller than that of the WA-PLS. When the number of PLS components is greater than 10, the weights in both methods are not much affected by the increasing number of PLS components used in the model. On the other hand, under contamination by outliers and HLP, the weights in both the WA-PLS and RRWA-PLS methods decrease as the number of PLS components increases. Across these scenarios, the WA-PLS still produces a higher RMSE than the RRWA-PLS. In general, given the smaller weight values used in the model, the RRWA-PLS method remains superior to, and more efficient than, the WA-PLS.
In Figure 5, the prediction accuracy of the methods is evaluated through their RMSEP values. For a better illustration, the maximum number of PLS components was limited to 15. With no contamination by outliers and HLP in the dataset, over the first 6 PLS components the RRWA-PLS is less efficient than the classical PLS and WA-PLS; however, as the number of PLS components increases up to 15, the RRWA-PLS becomes comparable to the classical PLS and WA-PLS. The proposed RRWA-PLS shows its robustness when contamination by outliers and HLP exists in the dataset: it succeeds in preventing the influence of the outliers and HLP during model fitting. On the other hand, the classical PLS and WA-PLS suffer from the influence of outliers and HLP at both low and high percentages of contamination, resulting in poor accuracy.
To further evaluate the methods, the Monte Carlo simulation was run 10,000 times on the different dataset scenarios. The results, based on the averages of the statistical measures, are shown in Table 1. As mentioned earlier, in the fitting process, the number of PLS components used in the proposed methods was limited to 15. We use the term "PLS with opt." to refer to the classical PLS with the optimal number of PLS components selected through the "one-sigma" approach and cross-validation. We also include a weight-improvement procedure in the WA-PLS, known as MWA-PLS, which replaces the non-robust weight in the WA-PLS with the robust weight version of the RRWA-PLS. Based on the results, with no outliers and HLP in the dataset, the non-robust PLSR with optimal components and the WA-PLS are comparable to the MWA-PLS and RRWA-PLS. On the other hand, in the presence of outliers and HLP, the proposed RRWA-PLS method is superior to the classical PLS, WA-PLS, and MWA-PLS. Replacing the weight in the WA-PLS with its robust version improves the model accuracy, with lower SE and better R2 values. The classical PLS fails to find the optimal number of PLS components due to the influence of 5–10% contamination by outliers and HLP during the fitting process. The WA-PLS also fails to fit the predicted model due to the impact of the contamination. The proposed RRWA-PLS consistently has the lowest RMSE and SE and a better R2 than the other methods, irrespective of the sample size, the number of important variables, and the percentage of contamination by outliers and HLP in the dataset.
The prediction ability of the methods on the contaminated data was evaluated by plotting the predicted values against the actual values (see Figure 6). The classical PLS and WA-PLS suffered from the contamination by outliers and HLP in the dataset, which resulted in poor predictions; the PLSR estimator is not resistant to the contamination, which biases the estimated model. The MWA-PLS and the proposed RRWA-PLS are completely free from the impact of outliers and HLP in the dataset: the influential observations fall far from the fitted line, while good observations lie close to the fitted regression line. The prediction ability of the RRWA-PLS is better than that of the MWA-PLS; the method ensures the best prediction capability, with better accuracy than the other methods. The RRWA-PLS shows robustness to the number of PLS components included in the model and is resistant to the influence of outliers and HLP.

5. NIR Spectral Dataset

NIR spectral data from oil palm fruit mesocarp were collected to evaluate the methods. The spectra record light absorbance at each wavelength band j, following the Beer-Lambert law [6], and each band is stored as an m × 1 column vector x_j of log base 10 absorbance values. The spectral measurement was performed by scanning (in contact) the fruit mesocarp using a portable handheld NIR spectrometer, the QualitySpec Trek from Analytical Spectral Devices (ASD Inc., Boulder, CO, USA). A total of 80 fruit bunches were harvested from a breeding trial site in Palapa Estate, PT. Ivomas Tunggal, Riau Province, Indonesia. Twelve fruit mesocarp samples per bunch were collected from different sampling positions along the vertical and horizontal lines of the bunch (see [23]): bottom-front, bottom-left, bottom-back, bottom-right, equator-front, equator-left, equator-back, equator-right, top-front, top-left, top-back, and top-right. Right after collection, the fruit mesocarp samples were sent immediately to the laboratory for spectral measurement and wet chemistry analysis. Sources of variability such as planting material (Dami Mas, Clone, Benin, Cameroon, Angola, Colombia), planting year (2010, 2011, 2012), and ripeness level (unripe, under ripe, ripe, over ripe) were also considered to cover as much of the variation in the palm population as possible.
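As a minimal illustration of the Beer-Lambert preprocessing mentioned above, a reflectance spectrum R can be converted to apparent absorbance with A = log10(1/R). The helper below is a generic sketch and assumes the instrument reports reflectance values in (0, 1]; it is not part of the paper's pipeline.

```python
import numpy as np

def reflectance_to_absorbance(R):
    """Convert reflectance spectra to apparent absorbance, A = log10(1/R),
    as commonly done for NIR data following the Beer-Lambert law.
    Values are clipped away from zero to avoid infinities."""
    R = np.asarray(R, dtype=float)
    return np.log10(1.0 / np.clip(R, 1e-8, None))
```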
Two sets of NIR spectral data with different sample properties, the fresh fruit mesocarp and the dried ground mesocarp, were used in the study. The average of three spectral measurements of each fruit mesocarp sample was used in the computation. The fresh fruit mesocarp was used to estimate the percentage of Oil to Dry Mesocarp (%ODM) and the percentage of Oil to Wet Mesocarp (%OWM), while the dried ground mesocarp was used to estimate the percentage of Free Fatty Acids (%FFA). These parameters were analyzed through conventional analytical chemistry following the standard test methods of the Palm Oil Research Institute of Malaysia (PORIM) [45,46]. The %ODM was calculated on a dry matter basis, which removes the weight of the water content, while the %OWM used a wet matter basis. Statistically, the range of the %ODM in the dataset is 56.38–86.9%, that of the %OWM is 19.75–64.81%, and that of the %FFA is 0.17–6.3%. The NIR spectra of the oil palm fruit mesocarp (both fresh and dried ground) and the frequency distributions of the response variables %ODM, %OWM, and %FFA can be seen in the previous study (see [23]). It is important to note that there is no prior knowledge of whether outliers and high leverage points are present in this dataset. The discussion therefore evaluates the methods based on their accuracy improvement through the desirability index.

5.1. Oil to Dry Mesocarp

A total of 960 observations comprising 488 wavelengths (range 550–2500 nm, 4 nm interval) of NIR spectra of fresh fruit mesocarp were used in this study. Following the prior procedure, the cross-validation scheme was employed to obtain the RMSECV value as the number of PLS components increases. To evaluate the RMSE values for both the fitting and the prediction ability performance, the scree plot is presented in Figure 7. This plot is essential for observing where the slope starts leveling off and for illustrating the gap between the RMSECV and RMSEP values. The maximum number of PLS components was limited to 30 for computational efficiency purposes.
As seen in Figure 7, the slope starts leveling off at 7, 16, and 26 PLS components. The gap between the RMSECV and RMSEP values widens after 26 PLS components, even though both errors gradually become smaller. A larger discrepancy between the two values indicates an over fitted model, which decreases the model accuracy. Thus, despite using more components, an improvement in accuracy is not guaranteed.
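The "onesigma" heuristic referred to in this study can be sketched as follows: choose the smallest number of components whose RMSECV lies within one standard error of the global minimum. This is a generic sketch that assumes per-component RMSECV values and their standard errors have already been computed by cross-validation.

```python
import numpy as np

def one_sigma_components(rmsecv, se):
    """Smallest number of PLS components whose RMSECV is within one
    standard error of the global minimum (the 'one-sigma' heuristic).
    rmsecv[i] and se[i] correspond to i+1 components."""
    rmsecv, se = np.asarray(rmsecv, dtype=float), np.asarray(se, dtype=float)
    k_min = int(np.argmin(rmsecv))
    threshold = rmsecv[k_min] + se[k_min]
    return int(np.argmax(rmsecv <= threshold)) + 1  # components are 1-indexed
```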
The mean weights of the fitted PLS using both WA-PLS and RRWA-PLS are plotted in Figure 8. It can be seen that the weight of the WA-PLS rapidly increases as the number of PLS components increases, suggesting that using a higher number of PLS components improves accuracy. In the RRWA-PLS, the weights remain relatively comparable as the number of PLS components increases. It is interesting to observe that, with the weighting strategy in the RRWA-PLS, some components show lower mean weights than others even though they use either fewer or more PLS components. For instance, applying 2 and 5 PLS components signals under fitting, while applying 35 and 45 PLS components signals over fitting. The weighting scheme in both the WA-PLS and the RRWA-PLS depends on the number of PLS components used in the PLSR model. In fact, using a higher number of PLS components risks including more noise, yielding a larger variation in the predicted model. The WA-PLS is known to be suitable only for preventing large regression coefficients, which indicate over fitting. Through its corrected weights using reliability values, the RRWA-PLS prevents not only over fitting but also under fitting.
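A common way to build such robust weights is an M-estimation weight function in the style of Huber [40]: observations with large standardized residuals are downweighted so that outliers and HLP contribute less to the fit. The sketch below uses a MAD-based scale estimate and is illustrative only; it is not the exact weighting formula of the RRWA-PLS.

```python
import numpy as np

def huber_weights(residuals, c=1.345):
    """Huber-type observation weights: 1 for small standardized residuals,
    c/|r| beyond the tuning constant c. Residuals are standardized with a
    robust MAD-based scale so that outliers are strongly downweighted."""
    r = np.asarray(residuals, dtype=float)
    s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
    u = np.abs(r / s)
    return np.where(u <= c, 1.0, c / u)
```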
As seen in Figure 9, the prediction ability errors of the classical PLS, WA-PLS, and RRWA-PLS are comparable. The first minimal RMSEP was obtained with 7 PLS components; beyond 7 PLS components, the WA-PLS produced a higher RMSEP than the classical PLS and RRWA-PLS. The second minimal RMSEP was obtained with 16 PLS components, and the third with 26 PLS components. The classical PLS and RRWA-PLS have similar prediction ability errors with 4, 9, 27, 28, 29, and 30 PLS components in the PLSR model. In general, the RMSEP values of the RRWA-PLS method are always within range and fall around those of the classical PLS. The RMSEP curve decreases slightly up to 30 PLS components. In industrial applications, using more than 30 PLS components is not recommended; although this could yield better prediction ability, it is computationally intensive.
The performance of the WA-PLS is not better than that of its modified-weight version, the MWA-PLS (see Table 2); however, the accuracy of the MWA-PLS is still lower than that of the RRWA-PLS. This is because the weight in the WA-PLS is not able to capture the reliability of the predictor variables. Compared with the prediction ability of the classical PLS with the optimum number of PLS components (27), the RRWA-PLS is still superior. To prevent the influence of noise on the final model, we eliminated the first PLS component from the RRWA-PLS model, since the first PLS component is usually less accurate if it is retained in the procedure.

5.2. Oil to Wet Mesocarp

In this section, the %OWM is considered as the response variable of the NIR spectral fresh fruit mesocarp dataset. The evaluation of the RMSE values, for both the fitting and the prediction ability, is presented in the scree plot. The maximum number of PLS components was limited to 30 for computational efficiency purposes. As seen in Figure 10, the slope of the scree plot starts leveling off at 7, 16, and 22 PLS components. The gap between the RMSECV and RMSEP values is wider after 22 PLS components. Even though both errors become slightly smaller, a large difference between the RMSECV and RMSEP leads to over fitting and makes the predicted model unstable.
With the increasing number of PLS components, the mean weights of both WA-PLS and RRWA-PLS also increase (see Figure 11). The mean weights of the WA-PLS method are comparably higher than those of the RRWA-PLS, where accuracy is improved by employing a higher number of PLS components. Some components in the RRWA-PLS have lower mean weights than other PLS components although they use a higher number of components. Applying 2 and 5 PLS components results in under fitting, while applying 26 and 29 PLS components results in over fitting. The RRWA-PLS shows its robustness, which does not depend on the increasing number of PLS components used. Its weighting scheme is based on the selection of the relevant aggregate number of PLS components used as factors in the PLSR model: the most relevant PLS components receive a higher weight, while the less relevant ones obtain a lower weight.
Figure 12 indicates that the prediction abilities of the three methods using the first 5 components are fairly close to each other, but their performances diverge in terms of accuracy afterwards. The first minimal RMSEP is obtained with 8 PLS components; beyond 8 PLS components, the WA-PLS produces a higher RMSEP than the classical PLS and RRWA-PLS. The second minimal RMSEP is obtained with 14 PLS components, and the third with 23 PLS components. The classical PLS with 15 to 22 PLS components produces lower RMSEP values; however, after 24 PLS components, the accuracies of the RRWA-PLS and classical PLS become closer. In general, the RMSEP values of the RRWA-PLS method are always within range and reasonably close to those of the classical PLS. The RMSEP curves decrease slightly from 17 to 30 PLS components. The WA-PLS has relatively low accuracy compared to the RRWA-PLS and classical PLS; it suffers from over- and under fitting because several irrelevant variables may still be included in the fitting process.
With its optimum at 22 PLS components, the classical PLS with the optimal number of PLS components is indeed inconsistent and sensitive to the number of PLS components used. Comparing the RMSE, R2, and SE values in Table 3, it can be concluded that the proposed RRWA-PLS produces better accuracy than the other methods. The modified weight in the MWA-PLS improves the accuracy of the predicted model; however, it cannot outperform the RRWA-PLS. The robust weighted-average strategy prevents the PLSR model from depending on a specific number of PLS components in the fitting process.

5.3. Free Fatty Acids

The NIR spectra of dried ground mesocarp, with a total of 839 observations and 500 wavelengths (range 500–2500 nm, 4 nm interval), were utilized as predictor variables, with the %FFA as the response variable. In the scree plot (Figure 13), the RMSECV and RMSEP curves gradually decrease as the number of PLS components increases. Within the first 10 PLS components, the gap between RMSECV and RMSEP is small, but after 10 PLS components it increases continuously. The slope of the scree plot starts leveling off at 6, 16, 22, and 27 PLS components, and the gap between the RMSECV and RMSEP values becomes wider after 16 PLS components. The use of a specific number of PLS components therefore affects the accuracy of the fitted model.
The mean weights of both WA-PLS and RRWA-PLS increase as the number of PLS components increases (see Figure 14). Using the %FFA dataset, the weight of the RRWA-PLS is higher than that of the WA-PLS. The mean weights of the WA-PLS method increase more steeply with the number of PLS components, indicating that the predicted model tends to be over fitted. The weight of the RRWA-PLS is robust since it does not depend on the aggregate number of PLS components used, irrespective of the sample size and the number of important variables. Moreover, the weight is resistant to the influence of outliers and HLP that may exist in the dataset.
For the first 6 components, the prediction abilities of the three methods are comparable (see Figure 15). After 10 components, the WA-PLS is less accurate than the classical PLS and the RRWA-PLS. The first minimal RMSEP is obtained at 8 PLS components; beyond 8 PLS components, the WA-PLS produces a larger RMSEP than the classical PLS and the RRWA-PLS. The WA-PLS shows the worst performance on this %FFA dataset. The second minimal RMSEP is obtained at 17 PLS components, and the third at 27 PLS components. The RMSEP values of the RRWA-PLS method are always within range and close to those of the classical PLS. The RMSEP of the classical PLS is not robust with respect to the number of PLS components used, as any selection method applied to find the optimal number of PLS components for the PLSR model yields unstable results, and an improper selection method produces a less accurate result. The robust weighted average is therefore suggested, as it makes finding the optimal number of components unnecessary; this is the automated fitting process in the PLSR model.
As shown in Table 4, the classical PLS suffers considerably from the model complexity used in the fitting process. Using the one-sigma heuristic for component selection, the accuracy at the selected optimal number of PLS components is not better than that of the PLS with a higher number of components, which shows the weakness of using a specific number of PLS components in the PLSR model. The robust RRWA-PLS is free from the complexity of the aggregate number of PLS components used. As seen in Table 4, the WA-PLS has the worst performance compared to the MWA-PLS, classical PLS, and RRWA-PLS. The RRWA-PLS method is preferred over the classical PLS because it does not require the selection of an optimal number of PLS components for the final PLSR model; in addition, it offers better reliability of the goodness-of-fit for the model.

6. Reliability Values

A number of irrelevant variables most probably still exist in the dataset; if the PLSR method fails to screen them out and downgrade their contribution, the accuracy of the final fitted model might decrease. Applied to the artificial dataset (see Figure 16a), the RRWA-PLS successfully screens the most relevant variables and downgrades the irrelevant ones. The NIR spectral data with different response variables (%ODM, %OWM, and %FFA) allow the method to show its potential in the wavelength selection process: it highlights the most relevant wavelengths and downgrades the influence of irrelevant wavelengths based on the spectral absorption (see Figure 16b–d). The reliability values are important for increasing the computational speed of the fitting process, improving the accuracy, and providing a better interpretation of the NIR spectral dataset.
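A simplified sketch of reliability-based screening: each wavelength receives a reliability value in [0, 1] (here, illustratively, its rescaled absolute regression coefficient), and bands below a cutoff are dropped. The index definition and the cutoff are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def reliability_index(coefs):
    """Illustrative reliability value per wavelength: absolute regression
    coefficients rescaled to [0, 1]; small values flag likely irrelevant bands."""
    a = np.abs(np.asarray(coefs, dtype=float))
    return a / (a.max() + 1e-12)

def screen_wavelengths(X, coefs, cutoff=0.1):
    """Keep only the wavelength columns of X whose reliability exceeds the cutoff.
    Returns the reduced matrix and the indices of the retained bands."""
    keep = reliability_index(coefs) > cutoff
    return X[:, keep], np.flatnonzero(keep)
```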

7. Conclusions

This study has demonstrated robustness in the chemometric analysis of NIR spectral data with respect to the aggregate number of PLS components, together with resistance against outliers and HLP. The rich and abundant information in NIR spectra requires advanced chemometric analysis to classify the most and least relevant wavelengths used in the computation. Based on the results, the proposed RRWA-PLS method is preferred over the other methods due to its robustness. The weight improvement in the MWA-PLS offers a better solution for improving the accuracy and reliability of the WA-PLS. In selecting the optimal number of PLS components, the classical PLS still requires recomputation to determine a specific complexity each time the model is updated. The proposed RRWA-PLS shows its superiority in the improvement of the weight and variable selection processes, and it is resistant to the contamination of outliers and HLP in the dataset. In addition, the RRWA-PLS method offers a solution for an automated fitting process in the PLSR model because, unlike the classical PLS, it does not require the selection of an optimal number of PLS components.

Author Contributions

Conceptualization and methodology: D.D.S., H.M., J.A., M.S.M., J.-P.C.; Data Collection: D.D.S., H.M., J.-P.C.; Computational and Validation: H.M., J.A., M.S.M.; First draft preparation: D.D.S., H.M.; Writing up to review and editing: D.D.S., H.M., J.A., M.S.M., J.-P.C. All authors have read and agreed to the published version of the manuscript.

Funding

The present research was partially supported by the Universiti Putra Malaysia Grant under Putra Grant (GPB) with project number GPB/2018/9629700.

Acknowledgments

This work was supported by a research grant and scholarship from the Southeast Asian Regional Center for Graduate Study and Research in Agriculture (SEARCA). We are also grateful to SMARTRI, PT. SMART TBK for providing the portable handheld NIRS instrument, research site, and analytical laboratory services. We would like to thank Universiti Putra Malaysia for the journal publication fund support. Special thanks are also extended to all research staff and operator of SMARTRI for their cooperation and outstanding help with data collection.

Conflicts of Interest

The authors declare no conflict of interest.

Declaration

The results of this study were presented at NIR 2019, the 19th biennial meeting of the International Council for NIR Spectroscopy (ICNIRS), held in Gold Coast, Queensland, Australia, 15–20 September 2019. Some inputs and comments from the audience and reviewers were incorporated in this paper.

References

1. Rodriguez-Saona, L.E.; Fry, F.S.; McLaughlin, M.A.; Calvey, E.M. Rapid analysis of sugars in fruit juices by FT-NIR spectroscopy. Carbohydr. Res. 2001, 336, 63–74.
2. Blanco, M.; Villarroya, I. NIR spectroscopy: A rapid-response analytical tool. Trends Anal. Chem. 2002, 21, 240–250.
3. Alander, J.T.; Bochko, V.; Martinkauppi, B.; Saranwong, S.; Mantere, T. A review of optical nondestructive visual and near-infrared methods for food quality and safety. Int. J. Spectrosc. 2013, 2013, 341402.
4. Lee, C.; Polari, J.J.; Kramer, K.E.; Wang, S.C. Near-Infrared (NIR) Spectrometry as a Fast and Reliable Tool for Fat and Moisture Analyses in Olives. ACS Omega 2018, 3, 16081–16088.
5. Levasseur-Garcia, C. Updated overview of infrared spectroscopy methods for detecting mycotoxins on cereals (corn, wheat, and barley). Toxins 2018, 10, 38.
6. Stuart, B. Infrared Spectroscopy: Fundamentals and Applications; Wiley: Toronto, ON, Canada, 2004; pp. 167–185.
7. Mark, H. Chemometrics in near-infrared spectroscopy. Anal. Chim. Acta 1989, 223, 75–93.
8. Cozzolino, D.; Morón, A. Potential of near-infrared reflectance spectroscopy and chemometrics to predict soil organic carbon fractions. Soil Tillage Res. 2006, 85, 78–85.
9. Roggo, Y.; Chalus, P.; Maurer, L.; Lema-Martinez, C.; Edmond, A.; Jent, N. A review of near infrared spectroscopy and chemometrics in pharmaceutical technologies. J. Pharm. Biomed. Anal. 2007, 44, 683–700.
10. Garthwaite, P.H. An interpretation of partial least squares. J. Am. Stat. Assoc. 1994, 89, 122–127.
11. Cozzolino, D.; Kwiatkowski, M.J.; Dambergs, R.G.; Cynkar, W.U.; Janik, L.J.; Skouroumounis, G.; Gishen, M. Analysis of elements in wine using near infrared spectroscopy and partial least squares regression. Talanta 2008, 74, 711–716.
12. McLeod, G.; Clelland, K.; Tapp, H.; Kemsley, E.K.; Wilson, R.H.; Poulter, G.; Coombs, D.; Hewitt, C.J. A comparison of variate pre-selection methods for use in partial least squares regression: A case study on NIR spectroscopy applied to monitoring beer fermentation. J. Food Eng. 2009, 90, 300–307.
13. Xu, L.; Cai, C.B.; Deng, D.H. Multivariate quality control solved by one-class partial least squares regression: Identification of adulterated peanut oils by mid-infrared spectroscopy. J. Chemom. 2011, 25, 568–574.
14. Wold, H. Model construction and evaluation when theoretical knowledge is scarce: Theory and application of partial least squares. In Evaluation of Econometric Models; Elsevier: Amsterdam, The Netherlands, 1980; pp. 47–74.
15. Manne, R. Analysis of two partial-least-squares algorithms for multivariate calibration. Chemom. Intell. Lab. Syst. 1987, 2, 187–197.
16. Haenlein, M.; Kaplan, A.M. A beginner’s guide to partial least squares analysis. Underst. Stat. 2004, 3, 283–297.
17. Hubert, M.; Branden, K.V. Robust methods for partial least squares regression. J. Chemom. 2003, 17, 537–549.
18. Silalahi, D.D.; Midi, H.; Arasan, J.; Mustafa, M.S.; Caliman, J.P. Robust generalized multiplicative scatter correction algorithm on pretreatment of near infrared spectral data. Vib. Spectrosc. 2018, 97, 55–65.
19. Rosipal, R.; Trejo, L.J. Kernel partial least squares regression in reproducing kernel Hilbert space. J. Mach. Learn. Res. 2001, 2, 97–123.
20. Silalahi, D.D.; Midi, H.; Arasan, J.; Mustafa, M.S.; Caliman, J.P. Kernel partial diagnostic robust potential to handle high-dimensional and irregular data space on near infrared spectral data. Heliyon 2020, 6, e03176.
21. Centner, V.; Massart, D.L.; De Noord, O.E.; De Jong, S.; Vandeginste, B.M.; Sterna, C. Elimination of uninformative variables for multivariate calibration. Anal. Chem. 1996, 68, 3851–3858.
22. Mehmood, T.; Liland, K.H.; Snipen, L.; Sæbø, S. A review of variable selection methods in partial least squares regression. Chemom. Intell. Lab. Syst. 2012, 118, 62–69.
23. Silalahi, D.D.; Midi, H.; Arasan, J.; Mustafa, M.S.; Caliman, J.P. Robust Wavelength Selection Using Filter-Wrapper Method and Input Scaling on Near Infrared Spectral Data. Sensors 2020, 20, 5001.
24. Wiklund, S.; Nilsson, D.; Eriksson, L.; Sjöström, M.; Wold, S.; Faber, K. A randomization test for PLS component selection. J. Chemom. 2007, 21, 427–439.
25. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2009.
26. Van der Voet, H. Comparing the predictive accuracy of models using a simple randomization test. Chemom. Intell. Lab. Syst. 1994, 25, 313–323.
27. Efron, B. Bootstrap methods: Another look at the jackknife. Ann. Stat. 1979, 7, 1–26.
28. Gómez-Carracedo, M.P.; Andrade, J.M.; Rutledge, D.N.; Faber, N.M. Selecting the optimum number of partial least squares components for the calibration of attenuated total reflectance-mid-infrared spectra of undesigned kerosene samples. Anal. Chim. Acta 2007, 585, 253–265.
29. Tran, T.; Szymańska, E.; Gerretzen, J.; Buydens, L.; Afanador, N.L.; Blanchet, L. Weight randomization test for the selection of the number of components in PLS models. J. Chemom. 2017, 31, e2887.
30. Kvalheim, O.M.; Arneberg, R.; Grung, B.; Rajalahti, T. Determination of optimum number of components in partial least squares regression from distributions of the root-mean-squared error obtained by Monte Carlo resampling. J. Chemom. 2018, 32, e2993.
31. Shenk, J.S.; Westerhaus, M.O.; Berzaghi, P. Investigation of a LOCAL calibration procedure for near infrared instruments. J. Near Infrared Spectrosc. 1997, 5, 223–232.
32. Barton, F.E.; Shenk, J.S.; Westerhaus, M.O.; Funk, D.B. The development of near infrared wheat quality models by locally weighted regressions. J. Near Infrared Spectrosc. 2000, 8, 201–208.
33. Naes, T.; Isaksson, T.; Kowalski, B. Locally weighted regression and scatter correction for near-infrared reflectance data. Anal. Chem. 1990, 62, 664–673.
34. Dardenne, P.; Sinnaeve, G.; Baeten, V. Multivariate calibration and chemometrics for near infrared spectroscopy: Which method? J. Near Infrared Spectrosc. 2000, 8, 229–237.
35. Zhang, M.H.; Xu, Q.S.; Massart, D.L. Averaged and weighted average partial least squares. Anal. Chim. Acta 2004, 504, 279–289.
36. Serneels, S.; Croux, C.; Filzmoser, P.; Van Espen, P.J. Partial robust M-regression. Chemom. Intell. Lab. Syst. 2005, 79, 55–64.
37. Cui, C.; Fearn, T. Comparison of partial least squares regression, least squares support vector machines, and Gaussian process regression for a near infrared calibration. J. Near Infrared Spectrosc. 2017, 25, 5–14.
38. Song, W.; Wang, H.; Maguire, P.; Nibouche, O. Local Partial Least Square classifier in high dimensionality classification. Neurocomputing 2017, 234, 126–136.
39. Martens, H.; Naes, T. Multivariate Calibration; John Wiley & Sons: Hoboken, NJ, USA, 1992.
40. Huber, P.J. Robust regression: Asymptotics, conjectures and Monte Carlo. Ann. Stat. 1973, 1, 799–821.
41. Cummins, D.J.; Andrews, C.W. Iteratively reweighted partial least squares: A performance analysis by Monte Carlo simulation. J. Chemom. 1995, 9, 489–507.
42. Fushiki, T. Estimation of prediction error by using K-fold cross-validation. Stat. Comput. 2011, 21, 137–146.
43. Kim, S.; Okajima, R.; Kano, M.; Hasebe, S. Development of soft-sensor using locally weighted PLS with adaptive similarity measure. Chemom. Intell. Lab. Syst. 2013, 124, 43–49.
44. Minasny, B.; McBratney, A. Why you don’t need to use RPD. Pedometron 2013, 33, 14–15.
45. Siew, W.L.; Tan, Y.A.; Tang, T.S. Methods of Test for Palm Oil and Palm Oil Products: Compiled; Lin, S.W., Sue, T.T., Ai, T.Y., Eds.; Palm Oil Research Institute of Malaysia: Selangor, Malaysia, 1995.
46. Rao, V.; Soh, A.C.; Corley, R.H.V.; Lee, C.H.; Rajanaidu, N. Critical Reexamination of the Method of Bunch Quality Analysis in Oil Palm Breeding; PORIM Occasional Paper; FAO: Rome, Italy, 1983; Available online: https://agris.fao.org/agris-search/search.do?recordID=US201302543052 (accessed on 13 October 2020).
Figure 1. The RMSECV and RMSEP of the classical PLSR on the simulated data with no contamination of outlier and high leverage points.
Figure 2. The RMSECV and RMSEP of the classical PLSR on the simulated data with contamination of outlier and high leverage points.
Figure 3. SEP values in the RRWA-PLS using different approaches on the simulated data with contamination of outlier and HLP.
Figure 4. The mean weights of the WA-PLS and RRWA-PLS on the simulated data with and without contamination of outlier and HLP.
Figure 5. The RMSEP values of the classical PLS, WA-PLS, and RRWA-PLS on the simulated data with and without contamination of outlier and HLP.
Figure 6. Predicted against actual values on the simulated data using PLS with opt., WA-PLS, MWA-PLS, and RRWA-PLS.
Figure 7. The RMSE of the fitted PLSR through cross validation and the prediction ability using %ODM dataset.
Figure 8. The mean weights of the fitted PLSR in WA-PLS and RRWA-PLS methods using %ODM dataset.
Figure 9. The RMSEP values of the classical PLS, WA-PLS, and RRWA-PLS methods using the %ODM dataset.
Figure 10. The RMSE of the fitted PLSR through cross validation and the prediction ability using %OWM dataset.
Figure 11. The mean weights of the fitted PLSR in WA-PLS and RRWA-PLS methods using %OWM dataset.
Figure 12. The RMSEP values of the classical PLS, WA-PLS, and RRWA-PLS methods using the %OWM dataset.
Figure 13. The RMSE of the fitted PLSR through cross validation and the prediction ability using %FFA dataset.
Figure 14. The mean weights of the fitted PLSR in WA-PLS and RRWA-PLS methods using %FFA dataset.
Figure 15. The RMSEP values of the classical PLS, WA-PLS, and RRWA-PLS methods using the %FFA dataset.
Figure 16. Reliability values using RRWA-PLS method on different datasets: (a) artificial data; NIR spectral dataset: (b) %ODM; (c) %OWM; (d) %FFA.
Table 1. The RMSE, R², and SE of the weighted methods using the Monte Carlo simulation with different dataset scenarios.

| Outlier and HLP | n | m | IV | Method | nPLS | RMSE | R² | SE |
|---|---|---|---|---|---|---|---|---|
| No outlier and HLP | 60 | 41 | 10% | PLS with opt. | 9 | 2.752 | 0.980 | 2.776 |
| | | | | WA-PLS | 15 | 2.496 | 0.984 | 2.517 |
| | | | | MWA-PLS | 15 | 3.318 | 0.972 | 3.305 |
| | | | | RRWA-PLS | 15 | 2.495 | 0.983 | 2.497 |
| | 60 | 101 | 10% | PLS with opt. | 3 | 9.348 | 0.903 | 9.427 |
| | | | | WA-PLS | 15 | 2.759 | 0.993 | 2.782 |
| | | | | MWA-PLS | 15 | 8.181 | 0.931 | 8.250 |
| | | | | RRWA-PLS | 15 | 2.702 | 0.960 | 2.708 |
| | 60 | 201 | 10% | PLS with opt. | 1 | 18.717 | 0.859 | 18.875 |
| | | | | WA-PLS | 15 | 2.333 | 0.998 | 2.352 |
| | | | | MWA-PLS | 15 | 5.542 | 0.908 | 5.543 |
| | | | | RRWA-PLS | 15 | 2.460 | 0.984 | 2.480 |
| | 200 | 41 | 30% | PLS with opt. | 6 | 6.707 | 0.969 | 6.723 |
| | | | | WA-PLS | 15 | 6.532 | 0.970 | 6.548 |
| | | | | MWA-PLS | 15 | 6.799 | 0.968 | 6.816 |
| | | | | RRWA-PLS | 15 | 6.594 | 0.970 | 6.610 |
| | 200 | 101 | 30% | PLS with opt. | 10 | 7.926 | 0.980 | 7.946 |
| | | | | WA-PLS | 15 | 7.915 | 0.981 | 7.935 |
| | | | | MWA-PLS | 15 | 12.621 | 0.951 | 12.653 |
| | | | | RRWA-PLS | 15 | 7.860 | 0.988 | 7.862 |
| | 200 | 201 | 30% | PLS with opt. | 9 | 12.995 | 0.973 | 13.028 |
| | | | | WA-PLS | 15 | 9.237 | 0.988 | 9.260 |
| | | | | MWA-PLS | 15 | 15.163 | 0.965 | 15.201 |
| | | | | RRWA-PLS | 15 | 9.582 | 0.985 | 9.601 |
| | 400 | 41 | 50% | PLS with opt. | 4 | 9.213 | 0.967 | 9.224 |
| | | | | WA-PLS | 15 | 9.062 | 0.968 | 9.073 |
| | | | | MWA-PLS | 15 | 9.522 | 0.965 | 9.534 |
| | | | | RRWA-PLS | 15 | 9.108 | 0.968 | 9.109 |
| | 400 | 101 | 50% | PLS with opt. | 7 | 12.727 | 0.972 | 12.733 |
| | | | | WA-PLS | 15 | 12.611 | 0.973 | 12.627 |
| | | | | MWA-PLS | 15 | 18.812 | 0.939 | 18.836 |
| | | | | RRWA-PLS | 15 | 12.787 | 0.972 | 12.803 |
| | 400 | 201 | 50% | PLS with opt. | 10 | 14.244 | 0.981 | 14.262 |
| | | | | WA-PLS | 15 | 14.343 | 0.981 | 14.361 |
| | | | | MWA-PLS | 15 | 31.060 | 0.910 | 31.099 |
| | | | | RRWA-PLS | 15 | 14.153 | 0.983 | 14.172 |
| With outlier and HLP (5%) | 60 | 41 | 10% | PLS with opt. | 0 | N/A | N/A | N/A |
| | | | | WA-PLS | 15 | 24.139 | 0.869 | 24.343 |
| | | | | MWA-PLS | 15 | 3.160 | 0.975 | 3.188 |
| | | | | RRWA-PLS | 15 | 3.042 | 0.976 | 3.069 |
| | 60 | 101 | 10% | PLS with opt. | 0 | N/A | N/A | N/A |
| | | | | WA-PLS | 15 | 16.559 | 0.892 | 16.699 |
| | | | | MWA-PLS | 15 | 9.156 | 0.931 | 9.241 |
| | | | | RRWA-PLS | 15 | 5.068 | 0.984 | 5.116 |
| | 60 | 201 | 10% | PLS with opt. | 0 | N/A | N/A | N/A |
| | | | | WA-PLS | 15 | 15.156 | 0.998 | 15.284 |
| | | | | MWA-PLS | 15 | 9.500 | 0.936 | 9.591 |
| | | | | RRWA-PLS | 15 | 8.580 | 0.973 | 8.662 |
| | 200 | 41 | 30% | PLS with opt. | 1 | 151.317 | 0.494 | 151.697 |
| | | | | WA-PLS | 15 | 175.959 | 0.603 | 176.400 |
| | | | | MWA-PLS | 15 | 6.441 | 0.970 | 6.458 |
| | | | | RRWA-PLS | 15 | 6.267 | 0.971 | 6.284 |
| | 200 | 101 | 30% | PLS with opt. | 2 | 331.650 | 0.734 | 332.482 |
| | | | | WA-PLS | 15 | 258.614 | 0.835 | 259.263 |
| | | | | MWA-PLS | 15 | 10.679 | 0.960 | 10.707 |
| | | | | RRWA-PLS | 15 | 8.195 | 0.976 | 8.217 |
| | 200 | 201 | 30% | PLS with opt. | 1 | 462.150 | 0.855 | 462.307 |
| | | | | WA-PLS | 15 | 226.599 | 0.969 | 227.167 |
| | | | | MWA-PLS | 15 | 17.791 | 0.952 | 17.839 |
| | | | | RRWA-PLS | 15 | 11.602 | 0.979 | 11.634 |
| | 400 | 41 | 50% | PLS with opt. | 2 | 304.843 | 0.516 | 305.225 |
| | | | | WA-PLS | 15 | 336.519 | 0.533 | 336.941 |
| | | | | MWA-PLS | 15 | 8.841 | 0.964 | 8.853 |
| | | | | RRWA-PLS | 15 | 8.383 | 0.968 | 8.394 |
| | 400 | 101 | 50% | PLS with opt. | 2 | 569.727 | 0.718 | 570.441 |
| | | | | WA-PLS | 15 | 537.184 | 0.776 | 537.857 |
| | | | | MWA-PLS | 15 | 17.678 | 0.941 | 17.702 |
| | | | | RRWA-PLS | 15 | 12.664 | 0.970 | 12.681 |
| | 400 | 201 | 50% | PLS with opt. | 2 | 808.964 | 0.836 | 809.977 |
| | | | | WA-PLS | 15 | 620.385 | 0.899 | 621.161 |
| | | | | MWA-PLS | 15 | 29.338 | 0.920 | 29.377 |
| | | | | RRWA-PLS | 15 | 17.163 | 0.973 | 17.186 |
| With outlier and HLP (20%) | 60 | 41 | 10% | PLS with opt. | 2 | 94.896 | 0.718 | 95.697 |
| | | | | WA-PLS | 15 | 72.625 | 0.903 | 73.238 |
| | | | | MWA-PLS | 15 | 9.731 | 0.878 | 9.825 |
| | | | | RRWA-PLS | 15 | 8.689 | 0.897 | 8.774 |
| | 60 | 101 | 10% | PLS with opt. | 2 | 121.598 | 0.872 | 122.624 |
| | | | | WA-PLS | 15 | 29.488 | 0.905 | 29.737 |
| | | | | MWA-PLS | 15 | 12.795 | 0.932 | 12.924 |
| | | | | RRWA-PLS | 15 | 10.488 | 0.934 | 10.596 |
| | 60 | 201 | 10% | PLS with opt. | 2 | 209.076 | 0.721 | 210.841 |
| | | | | WA-PLS | 15 | 26.243 | 0.899 | 26.464 |
| | | | | MWA-PLS | 15 | 27.206 | 0.791 | 27.496 |
| | | | | RRWA-PLS | 15 | 25.145 | 0.878 | 25.204 |
| | 200 | 41 | 30% | PLS with opt. | 1 | 254.290 | 0.719 | 254.928 |
| | | | | WA-PLS | 15 | 237.919 | 0.783 | 238.516 |
| | | | | MWA-PLS | 15 | 7.848 | 0.956 | 7.872 |
| | | | | RRWA-PLS | 15 | 7.383 | 0.961 | 7.406 |
| | 200 | 101 | 30% | PLS with opt. | 1 | 438.504 | 0.855 | 439.604 |
| | | | | WA-PLS | 15 | 353.163 | 0.928 | 354.049 |
| | | | | MWA-PLS | 15 | 16.810 | 0.911 | 16.863 |
| | | | | RRWA-PLS | 15 | 16.105 | 0.924 | 16.155 |
| | 200 | 201 | 30% | PLS with opt. | 2 | 692.302 | 0.792 | 693.037 |
| | | | | WA-PLS | 15 | 294.979 | 0.799 | 295.719 |
| | | | | MWA-PLS | 15 | 121.881 | 0.740 | 122.262 |
| | | | | RRWA-PLS | 15 | 34.982 | 0.891 | 35.091 |
| | 400 | 41 | 50% | PLS with opt. | 1 | 443.979 | 0.740 | 444.535 |
| | | | | WA-PLS | 15 | 396.425 | 0.767 | 396.921 |
| | | | | MWA-PLS | 15 | 10.339 | 0.957 | 10.356 |
| | | | | RRWA-PLS | 15 | 10.059 | 0.958 | 10.074 |
| | 400 | 101 | 50% | PLS with opt. | 1 | 773.558 | 0.865 | 774.527 |
| | | | | WA-PLS | 15 | 655.858 | 0.903 | 656.679 |
| | | | | MWA-PLS | 15 | 23.244 | 0.912 | 23.281 |
| | | | | RRWA-PLS | 15 | 23.066 | 0.913 | 23.102 |
| | 400 | 201 | 50% | PLS with opt. | 1 | 944.986 | 0.792 | 945.425 |
| | | | | WA-PLS | 15 | 803.520 | 0.796 | 804.526 |
| | | | | MWA-PLS | 15 | 40.656 | 0.859 | 40.720 |
| | | | | RRWA-PLS | 15 | 35.121 | 0.894 | 35.176 |

Note: nPLS is the number of PLS components used in the PLSR model; PLS with opt. is the classical PLS with the optimal number of PLS components.
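The three criteria reported in the tables can be computed directly from a vector of reference values and a vector of predictions. A minimal NumPy sketch, assuming SE denotes the bias-corrected standard deviation of the residuals (the usual standard error of prediction; the paper's exact formula may differ):

```python
import numpy as np

def fit_metrics(y_true, y_pred):
    """Return (RMSE, R^2, SE) for a set of predictions.

    RMSE: root mean squared error of the residuals.
    R^2:  1 - SS_res / SS_tot.
    SE:   bias-corrected standard deviation of the residuals,
          i.e. spread of the errors about their mean.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    se = np.sqrt(np.sum((resid - resid.mean()) ** 2) / (len(resid) - 1))
    return rmse, r2, se
```

When the model is unbiased, SE and RMSE nearly coincide, which is the pattern visible throughout Table 1.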
Table 2. The RMSEP, R², and SE of the weighted methods using the %ODM data.

| Dataset | Method | nPLS | RMSEP | R² | SE |
|---|---|---|---|---|---|
| %ODM | PLS with opt. | 27 | 3.139 | 0.648 | 3.141 |
| | WA-PLS | 30 | 3.316 | 0.603 | 3.317 |
| | MWA-PLS | 30 | 3.315 | 0.644 | 3.317 |
| | RRWA-PLS | 30 | 3.071 | 0.661 | 3.072 |

Note: nPLS is the number of PLS components used in the PLSR model; PLS with opt. is the classical PLS with the optimal number of PLS components.
Table 3. The RMSEP, R², and SE of the weighted methods using the %OWM data.

| Dataset | Method | nPLS | RMSEP | R² | SE |
|---|---|---|---|---|---|
| %OWM | PLS with opt. | 22 | 4.442 | 0.668 | 4.444 |
| | WA-PLS | 30 | 4.520 | 0.672 | 4.522 |
| | MWA-PLS | 30 | 4.239 | 0.708 | 4.241 |
| | RRWA-PLS | 30 | 4.185 | 0.718 | 4.187 |

Note: nPLS is the number of PLS components used in the PLSR model; PLS with opt. is the classical PLS with the optimal number of PLS components.
Table 4. The RMSEP, R², and SE of the weighted methods using the %FFA data.

| Dataset | Method | nPLS | RMSEP | R² | SE |
|---|---|---|---|---|---|
| %FFA | PLS with opt. | 27 | 0.287 | 0.729 | 0.288 |
| | WA-PLS | 30 | 0.324 | 0.658 | 0.324 |
| | MWA-PLS | 30 | 0.311 | 0.683 | 0.312 |
| | RRWA-PLS | 30 | 0.275 | 0.747 | 0.276 |

Note: nPLS is the number of PLS components used in the PLSR model; PLS with opt. is the classical PLS with the optimal number of PLS components.
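The weighted-average methods compared in the tables fit a PLSR sub-model at every component count (1 up to nPLS) and blend their predictions with normalized weights, rather than selecting a single optimal complexity. The sketch below illustrates only that averaging mechanic: the NIPALS PLS1 helper is standard, but the inverse in-sample-MSE weights are a placeholder assumption; the RRWA-PLS method instead derives robust, reliability-based weights that resist outliers and HLP.

```python
import numpy as np

def pls1_coef(X, y, ncomp):
    """NIPALS PLS1 for a single response.

    Returns (b0, b) so that predictions are b0 + X_new @ b.
    """
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xk.T @ yk                 # weight vector for this component
        w = w / np.linalg.norm(w)
        t = Xk @ w                    # scores
        tt = t @ t
        p = Xk.T @ t / tt             # X loadings
        qk = (yk @ t) / tt            # y loading
        Xk = Xk - np.outer(t, p)      # deflate X and y
        yk = yk - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.column_stack(W), np.column_stack(P), np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)
    return y_mean - x_mean @ b, b

def wa_pls_predict(X_tr, y_tr, X_new, max_comp):
    """Blend predictions of PLS1 models with 1..max_comp components.

    Weights here are inverse in-sample MSEs (illustrative only).
    """
    preds, wts = [], []
    for k in range(1, max_comp + 1):
        b0, b = pls1_coef(X_tr, y_tr, k)
        mse = np.mean((y_tr - (b0 + X_tr @ b)) ** 2)
        preds.append(b0 + X_new @ b)
        wts.append(1.0 / (mse + 1e-12))
    w = np.array(wts)
    w /= w.sum()                      # normalize weights to sum to 1
    return np.vstack(preds).T @ w
```

Because every sub-model contributes, the averaged prediction is far less sensitive to the exact choice of component count than the classical single-model PLS, which is the behavior the fixed nPLS = 15 and nPLS = 30 rows in the tables reflect.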
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Silalahi, D.D.; Midi, H.; Arasan, J.; Mustafa, M.S.; Caliman, J.-P. Automated Fitting Process Using Robust Reliable Weighted Average on Near Infrared Spectral Data Analysis. Symmetry 2020, 12, 2099. https://doi.org/10.3390/sym12122099