4.1. Experimental Environment
The experiments in this paper were run on a computer with an Intel i5-7300 2.5 GHz quad-core, four-thread CPU, 16 GB of RAM, and the Windows 10 operating system. The programming language is Python, the development environment is PyCharm, and libraries such as numpy, sklearn, pandas, xgboost, and lightgbm are used.
4.4. Analysis and Comparison of Optimization Results
Figure 6 shows the curve of the fitness value against the number of PSO iterations when the particle swarm optimization algorithm optimizes the parameters of the XGBoost model, and Figure 7 shows the corresponding curve for the LightGBM model. As can be seen from Figures 6 and 7, the optimal parameter combination for the XGBoost model is found after about 10 iterations, and the optimal parameter combination for the LightGBM model is found after about 20 iterations.
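The search behavior described above can be illustrated with a minimal PSO loop. This is a sketch, not the paper's implementation: the swarm size, inertia and acceleration coefficients, and parameter bounds below are illustrative placeholders, and a toy quadratic objective stands in for the cross-validation error of the boosting model.

```python
import numpy as np

def fitness(params):
    # Placeholder objective. In the paper this would be the cross-validation
    # error of XGBoost/LightGBM trained with the candidate hyperparameters.
    target = np.array([0.1, 6.0])  # pretend optimum: (learning_rate, max_depth)
    return float(np.sum((params - target) ** 2))

rng = np.random.default_rng(42)
n_particles, n_iters, dim = 20, 30, 2
lo, hi = np.array([0.01, 2.0]), np.array([0.3, 10.0])  # assumed search bounds

pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                                  # per-particle best position
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()            # global best position

history = []  # best fitness per iteration: the curve shown in Figures 6 and 7
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Standard velocity update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
    history.append(pbest_val.min())

print(gbest, history[-1])
```

Because `history` tracks the best fitness found so far, the curve is non-increasing and flattens once the optimum region is reached, which is the behavior read off Figures 6 and 7.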
The optimization results obtained by PSO under the preset iteration-termination condition are shown in Table 5.
The models were then trained on the dataset using the optimized parameters, and the prediction results are shown in Table 6.
As can be seen in Table 6, the prediction error for blood glucose is significantly reduced after the model hyperparameters are optimized with the particle swarm optimization algorithm. In terms of MAPE, the prediction error of PSO-XGBoost is 0.92% lower than that of XGBoost, and the prediction error of PSO-LightGBM is 1.01% lower than that of LightGBM. In terms of RMSE, the prediction error of PSO-XGBoost is 1.53% lower than that of XGBoost, and the prediction error of PSO-LightGBM is 4.54% lower than that of LightGBM. This shows that hyperparameter optimization with the particle swarm optimization algorithm significantly improves blood glucose prediction accuracy.
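The two evaluation metrics used throughout this section can be computed with their standard definitions; the sketch below uses toy values, not the paper's data.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def rmse(y_true, y_pred):
    """Root mean square error, in the units of y."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy example, not the paper's data:
y_true = [5.0, 8.0, 10.0]
y_pred = [5.5, 7.6, 10.0]
print(mape(y_true, y_pred), rmse(y_true, y_pred))  # -> 5.0 (%), ~0.37
```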
4.5. Analysis and Comparison of Model Fusion Results
In this paper, the outputs of the first-layer XGBoost and LightGBM models are fed into the second-layer Bayesian regression model to predict blood glucose values; the prediction results are shown in Table 7. To better verify the effectiveness of the models, this paper compares XGBoost, LightGBM, CatBoost, Bayesian regression, linear regression, and the Kellett algorithm, evaluating each model by its mean absolute percentage error (MAPE) and root mean square error (RMSE).
As can be seen from the table, the blood glucose prediction method based on particle swarm optimization and model fusion proposed in this paper has a clear advantage in prediction performance over the base learners, CatBoost, linear regression, Bayesian regression, and the Kellett algorithm. In terms of MAPE, the prediction improves by 1.76% and 2.07% over the two base learners and by 5.04% over the Kellett algorithm. In terms of RMSE, it improves by 1.29% and 4.98% over the two base learners and by 4.77% over the Kellett algorithm. On both metrics, the proposed method clearly surpasses CatBoost, linear regression, Bayesian regression, and the Kellett algorithm in prediction effectiveness and generalization ability.
Figure 8 shows the prediction results of the base learners and the fusion method on the patients' blood glucose values. As can be seen from Figure 8, the fusion method proposed in this paper gives the best predictions, the LightGBM model the worst, and the XGBoost model falls in between. Combined with the evaluation metrics in Table 7, the MAPE of the fusion method is 13.01% and its RMSE is 23.15; the MAPE of the XGBoost model is 14.77% and its RMSE is 24.44; and the MAPE of the LightGBM model is 15.08% and its RMSE is 26.08. The fusion method thus has the smallest MAPE and RMSE and hence the highest prediction accuracy, which is consistent with the prediction results in Figure 8.
Figure 9 shows the prediction results of CatBoost, linear regression, Bayesian regression, the Kellett algorithm, and the fusion method on the patients' blood glucose values. As can be seen from Figure 9, the fusion method proposed in this paper gives the best predictions, the CatBoost model the second best, the Kellett algorithm the worst, and linear regression and Bayesian regression fall between them. Combined with the evaluation metrics in Table 7, the MAPE of the fusion method is 13.01% and its RMSE is 23.15; the MAPE of the CatBoost model is 14.44% and its RMSE is 24.35; the MAPE of the Kellett algorithm is 18.05% and its RMSE is 27.92; and both linear regression and Bayesian regression have a MAPE of 15.09% and an RMSE of 24.08. These values are consistent with the prediction results in Figure 9.
Overall, Figures 8 and 9 compare each model's predictions for the first 50 blood glucose records of the patients in the test set against the true blood glucose values. As can be seen from Figures 8 and 9, compared with XGBoost, LightGBM, CatBoost, Bayesian regression, linear regression, and the Kellett algorithm, the predictions of the stacking fusion model are very close to the true blood glucose values in most cases, giving a better fit and better prediction results. This shows that the blood glucose prediction method based on particle swarm optimization and model fusion proposed in this paper has a certain superiority.
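A comparison plot of this kind (first 50 predictions of each model against the true values) can be produced as follows. The model outputs below are random placeholders standing in for the paper's predictions, so only the plotting pattern, not the data, is meaningful.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
y_true = rng.uniform(4, 12, 50)  # placeholder glucose values for 50 records
preds = {  # placeholder model outputs; noise levels chosen arbitrarily
    "Fusion": y_true + rng.normal(0, 0.5, 50),
    "XGBoost": y_true + rng.normal(0, 0.8, 50),
    "LightGBM": y_true + rng.normal(0, 1.0, 50),
}

plt.plot(y_true, label="True", linewidth=2, color="black")
for name, p in preds.items():
    plt.plot(p, label=name, alpha=0.7)
plt.xlabel("Sample index")
plt.ylabel("Blood glucose")
plt.legend()
plt.savefig("figure8_sketch.png")
```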
From the results in Table 7, the prediction error of the XGBoost model is the closest to that of the method proposed in this paper. To show the comparison more clearly, the prediction results are broken down by patient below. Table 8 and Figure 10 show, for each of the 10 patients, the mean absolute percentage error of blood glucose prediction of the proposed particle swarm optimization and model fusion-based method compared with the XGBoost model.
As can be seen from Table 8 and Figure 10, the proposed method outperforms the XGBoost model in blood glucose prediction for all 10 patients, with errors 1% to 2% lower than those of the XGBoost model. For the blood glucose prediction of patient No. 7 in particular, the gap between the two is especially large. This shows that the proposed method predicts more accurately than the XGBoost model, has better generalization ability, and has a certain superiority in predicting the blood glucose levels of previously unseen patients.
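A per-patient error breakdown like Table 8 amounts to grouping the test predictions by patient ID and computing MAPE within each group. The sketch below uses randomly generated placeholder data (with the fusion model simulated as less noisy than XGBoost, mirroring the paper's finding); the column and model names are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "patient": rng.integers(1, 11, n),    # patient IDs 1..10
    "y_true": rng.uniform(4, 12, n),      # placeholder glucose values
})
# Placeholder predictions: fusion simulated with smaller noise than XGBoost.
df["pred_fusion"] = df["y_true"] + rng.normal(0, 0.4, n)
df["pred_xgb"] = df["y_true"] + rng.normal(0, 0.7, n)

def mape(g, col):
    """MAPE (in percent) of column `col` against `y_true` within group `g`."""
    return float((np.abs(g[col] - g["y_true"]) / g["y_true"]).mean() * 100)

# Per-patient MAPE for each model, analogous to Table 8.
table8 = df.groupby("patient")[["y_true", "pred_fusion", "pred_xgb"]].apply(
    lambda g: pd.Series({"fusion_MAPE": mape(g, "pred_fusion"),
                         "xgb_MAPE": mape(g, "pred_xgb")}))
print(table8.round(2))
```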