Utilizing Ensemble Learning Techniques to Enhance Corn Price Prediction: A Case Study on South Dakota
Abstract
1. Introduction
2. Literature Review
2.1. Traditional Machine Learning Approaches
2.2. Deep Learning Models: LSTM and Variations
2.3. Ensemble Learning in Price Prediction
2.4. Gaps and Limitations in Existing Research
3. Methodology
3.1. Data Collection and Pre-Processing
3.2. Base Models
- Long Short-Term Memory (LSTM): LSTM is a variant of the Recurrent Neural Network (RNN) developed specifically to address the vanishing-gradient problem encountered in traditional RNNs. LSTM networks incorporate special gating mechanisms (input, forget, and output gates) that allow the network to retain relevant historical information and discard unnecessary data. This enables effective modeling of the long-term dependencies present in sequential data, making LSTM particularly suitable for corn price prediction, where historical patterns significantly influence future values [23].
- Transformer Neural Networks: Introduced in [24], Transformer models are built on self-attention mechanisms, which allow the model to dynamically weigh the importance of each data point in the input sequence. Unlike recurrent architectures, Transformers process sequences in parallel, capturing intricate temporal relationships without the sequential processing bottleneck. This characteristic makes Transformers highly effective at modeling complex dependencies and improving prediction accuracy for time-series data such as corn prices. A minimal sketch of both base models follows this list.
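This sketch assumes a Keras implementation; the look-back window and the input embedding are hypothetical choices, while the layer settings follow the tuned values reported in Section 3.4 (LSTM: units = 128, tanh activation; Transformer: num_heads = 4, dense_dim = 32, dropout = 0.1):

```python
# Minimal sketch of the two base models (Keras assumed; WINDOW is hypothetical).
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW = 30  # hypothetical look-back window of daily prices

def build_lstm(window: int = WINDOW) -> Model:
    """LSTM base model; gating lets the network retain relevant history."""
    inputs = layers.Input(shape=(window, 1))
    x = layers.LSTM(128, activation="tanh")(inputs)  # tuned: units=128, tanh
    outputs = layers.Dense(1)(x)                     # next-day price
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

def build_transformer(window: int = WINDOW) -> Model:
    """Single Transformer encoder block; positional encoding omitted for brevity."""
    inputs = layers.Input(shape=(window, 1))
    x = layers.Dense(32)(inputs)  # embed the scalar price (hypothetical dim)
    # Self-attention weighs every time step against every other, in parallel.
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=32, dropout=0.1)(x, x)
    x = layers.LayerNormalization()(x + attn)        # residual connection
    ff = layers.Dense(32, activation="relu")(x)      # tuned: dense_dim=32
    x = layers.LayerNormalization()(x + ff)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(1)(x)
    model = Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")  # tuned lr
    return model
```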
3.3. Stacking and Meta-Learner
- Stacking Mechanism: Stacking works by training the base models independently on historical corn price data. Each base model then produces predictions on a held-out validation set; these predictions are aggregated and fed to the meta-learner as input features. The meta-learner learns to combine them optimally, effectively harnessing the strengths and mitigating the individual weaknesses of the base models [22].
- Meta-Learner (Ridge Regression): Ridge Regression, a linear regression model with L2 regularization, acts as the meta-learner. The method penalizes large coefficients, which helps reduce overfitting and improves the generalizability of the ensemble model [25]. When evaluating candidate meta-learners, weight penalization was a main criterion, so that no single coefficient could dominate; L2 regularization achieves this by adding a penalty term to the loss function equal to the sum of the squared coefficients (the standard objective is shown after this list). Ridge Regression was chosen both for this balanced weighting of each baseline model's output and for its success in related studies such as [26]. Wang and Lu [26] used ridge regression as the meta-learner in a stacked ensemble model and found that it performed best when combining a small number of base models. By combining the predictions of the LSTM and Transformer models, Ridge Regression optimally weights each base model's output, yielding a more robust and accurate predictive model for corn prices.
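In symbols, the Ridge objective minimized by the meta-learner is the following (a textbook formulation, not an equation reproduced from the paper); here $z_i$ collects the base-model predictions for day $i$, $w$ and $b$ are the stacking weights and intercept, and $\alpha$ is the regularization strength (tuned to 0.001 in Section 3.4):

```latex
% Squared error plus an L2 penalty on the stacking weights w
\min_{w,\,b} \; \sum_{i=1}^{n} \left( y_i - w^{\top} z_i - b \right)^{2}
  + \alpha \sum_{j=1}^{p} w_j^{2},
\qquad z_i = \left( \hat{y}_i^{\mathrm{LSTM}},\; \hat{y}_i^{\mathrm{Transformer}} \right)
```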
3.4. Hyperparameter Tuning
3.5. Combined Model Architecture
- Independently train the LSTM and Transformer base models on preprocessed historical corn price data.
- Generate intermediate predictions from each base model.
- Combine these intermediate predictions using Ridge Regression as the meta-learner to produce the final forecast (a sketch of the full pipeline follows this list).
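The sketch assumes scikit-learn's Ridge, the hypothetical model builders from Section 3.2, and pre-windowed arrays (X_train, y_train), (X_val, y_val), (X_test, y_test) produced by the preprocessing step:

```python
# Sketch of the stacking pipeline; Ridge settings follow the tuned values in
# Section 3.4 (alpha = 0.001, solver = "auto", fit_intercept = True).
import numpy as np
from sklearn.linear_model import Ridge

# Step 1: train each base model independently on the training data.
lstm = build_lstm()
transformer = build_transformer()
lstm.fit(X_train, y_train, epochs=40, batch_size=8, verbose=0)
transformer.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)

# Step 2: generate intermediate predictions on a held-out validation set.
Z_val = np.column_stack([
    lstm.predict(X_val, verbose=0).ravel(),
    transformer.predict(X_val, verbose=0).ravel(),
])

# Step 3: the meta-learner learns how to weight the two base forecasts.
meta = Ridge(alpha=0.001, solver="auto", fit_intercept=True)
meta.fit(Z_val, y_val)

# Final forecast: stack the base predictions on test data, then combine.
Z_test = np.column_stack([
    lstm.predict(X_test, verbose=0).ravel(),
    transformer.predict(X_test, verbose=0).ravel(),
])
y_pred = meta.predict(Z_test)
```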
4. Experimental Study
4.1. Evaluation Metrics
- Mean Squared Error (MSE): Measures the mean of the squared differences between predicted and actual values, penalizing larger errors more heavily.
- Root Mean Squared Error (RMSE): The square root of the MSE, providing error values in the same units as the original data and making interpretation more intuitive.
- Mean Absolute Percentage Error (MAPE): Represents the average absolute error in percentage terms, useful for assessing relative prediction accuracy.
- Coefficient of Determination (R² Score): Indicates how well the predicted values align with the actual values, with values closer to 1 indicating better accuracy. A computation sketch for all four metrics follows this list.
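The sketch below uses scikit-learn's standard metric implementations (an assumption; not the authors' code):

```python
# Computing the four evaluation metrics from predictions.
import numpy as np
from sklearn.metrics import (mean_squared_error,
                             mean_absolute_percentage_error, r2_score)

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    mse = mean_squared_error(y_true, y_pred)
    return {
        "MSE": mse,
        "RMSE": np.sqrt(mse),  # same units as the price series
        "MAPE (%)": 100 * mean_absolute_percentage_error(y_true, y_pred),
        "R2": r2_score(y_true, y_pred),  # closer to 1 is better
    }
```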
4.2. Traditional Machine Learning Models
- Linear Regression: Linear Regression is one of the most fundamental statistical learning methods for forecasting continuous outcomes. It models the dependent variable $y$ as a linear combination of the independent variables $X$, represented as $y = X\beta + \varepsilon$, where $\beta$ denotes the model coefficients and $\varepsilon$ is the error term. The Ordinary Least Squares (OLS) method optimizes the coefficients by minimizing the sum of squared residuals. Linear regression is a standard baseline for forecasting comparisons [27].
- Support Vector Regression: Support Vector Regression (SVR) extends the principles of Support Vector Machines, originally introduced in [28], by employing a kernel-based approach to model linear and nonlinear relations. SVR finds a central hyperplane that deviates from the actual target values by at most a specified margin, denoted by $\varepsilon$. A kernel function, such as the Radial Basis Function (RBF), polynomial, or sigmoid kernel, projects the data into a higher-dimensional space in which a linear fit becomes possible. SVR is another traditional machine learning model commonly used in time-series forecasting, making it a strong baseline to compare against.
- Extreme Gradient Boosting (XGBoost): XGBoost, standing for Extreme Gradient Boosting, is an optimized ensemble method based on gradient-boosted decision trees, introduced in [29]. It sequentially builds an ensemble of weak learners, typically shallow decision trees, each of which corrects the residual errors of its predecessors by minimizing a differentiable loss function through gradient descent. XGBoost has become a standard ensemble baseline for benchmarking improvements. A sketch of all three tuned baselines follows this list.
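This sketch assumes scikit-learn and the xgboost package; X_train_flat and X_test_flat are hypothetical 2-D matrices of lagged prices, the hyperparameters follow the tuned values in the experimental tables, and evaluate() refers to the metrics sketch in Section 4.1:

```python
# Sketch of the three traditional baselines with the tuned values from Section 4.
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from xgboost import XGBRegressor

baselines = {
    "Linear Regression": LinearRegression(),          # OLS fit, default settings
    "SVR": SVR(kernel="linear", C=1, epsilon=0.01),   # epsilon-insensitive margin
    "XGBoost": XGBRegressor(learning_rate=0.01, max_depth=3,
                            n_estimators=200, subsample=1.0),
}

for name, model in baselines.items():
    model.fit(X_train_flat, y_train)      # assumed shape: (samples, n_lags)
    preds = model.predict(X_test_flat)
    print(name, evaluate(y_test, preds))
```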
4.3. Testing Results
4.4. Implemented Results
4.5. Discussion on Model Limitation and Potential Improvements
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Pandey, C.; Sethy, P.K.; Behera, S.K.; Vishwakarma, J.; Tande, V. Smart agriculture: Technological advancements on agriculture—A systematical review. In Deep Learning for Sustainable Agriculture; Academic Press: Cambridge, MA, USA, 2022; pp. 1–56.
2. Westcott, P.C.; Hoffman, L.A. Price Determination for Corn and Wheat: The Role of Market Factors and Government Programs; Technical Bulletin No. 1878; AgEcon Search: St. Paul, MN, USA, 1999.
3. Guo, H.; Woodruff, A.; Yadav, A. Improving lives of indebted farmers using deep learning: Predicting agricultural produce prices using convolutional neural networks. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13294–13299.
4. Scheyett, A.; Marburger, I.L.; Scarrow, A.; Hollifield, S.M.; Dunn, J.W. What Do Farmers Need for Suicide Prevention: Considerations for a Hard-to-Reach Population. Neuropsychiatr. Dis. Treat. 2024, 20, 341–352.
5. Yuan, C.Z.; Ling, S.K. Long short-term memory model based agriculture commodity price prediction application. In Proceedings of the 2020 2nd International Conference on Information Technology and Computer Communications, Kuala Lumpur, Malaysia, 12–14 August 2020; pp. 43–49.
6. Chen, Z.; Goh, H.S.; Sin, K.L.; Lim, K.; Chung, N.K.H.; Liew, X.Y. Automated agriculture commodity price prediction system with machine learning techniques. arXiv 2021, arXiv:2106.12747.
7. Gaur, S.; Mahajan, J.; Sharma, M.; Hussain, D.; Kakani, R.; Saxena, A. Precision Corn Price Prediction with Advanced ML Techniques. In Proceedings of the 2024 International Conference on Trends in Quantum Computing and Emerging Business Technologies, Pune, India, 22–23 March 2024; pp. 1–5.
8. Wibowo, A.; Yasmina, I.; Wibowo, A. Food price prediction using time series linear ridge regression with the best damping factor. Adv. Sci. Technol. Eng. Syst. J. 2021, 6, 694–698.
9. Quan, P.; Shi, W. Application of CEEMDAN and LSTM for Futures Price Forecasting. In Proceedings of the 2024 International Conference on Machine Intelligence and Digital Applications, Ningbo, China, 30–31 May 2024; pp. 249–255.
10. Sabu, K.M.; Kumar, T.M. Predictive analytics in Agriculture: Forecasting prices of Arecanuts in Kerala. Procedia Comput. Sci. 2020, 171, 699–708.
11. Ouyang, H.; Wei, X.; Wu, Q. Agricultural commodity futures prices prediction via long- and short-term time series network. J. Appl. Econ. 2019, 22, 468–483.
12. Cai, Y.; Zhang, N.; Zhang, S. GRU and LSTM Based Adaptive Prediction Model of Crude Oil Prices: Post-COVID-19 and Russian Ukraine War. In Proceedings of the 2023 6th International Conference on Computers in Management and Business, Macau, China, 13–15 January 2023; pp. 9–15.
13. Zhou, J.; Ye, J.; Ouyang, Y.; Tong, M.; Pan, X.; Gao, J. On Building Real Time Intelligent Agricultural Commodity Trading Models. In Proceedings of the 2022 IEEE Eighth International Conference on Big Data Computing Service and Applications (BigDataService), Newark, CA, USA, 15–18 August 2022; pp. 89–95.
14. Adhikari, B.; Sobin, C. Building Market Intelligence Systems for Agricultural Commodities: A Case Study based on Cardamom. In Proceedings of the 2021 2nd International Conference on Secure Cyber Computing and Communications (ICSCCC), Jalandhar, India, 21–23 May 2021; pp. 12–16.
15. Sankareswari, K.; Sujatha, G. Evaluation of an Ensemble Technique for Prediction of Crop Yield. In Proceedings of the ICIMMI 2023: 5th International Conference on Information Management & Machine Intelligence, Jaipur, India, 23–25 November 2023.
16. Shahhosseini, M.; Hu, G.; Archontoulis, S.V. Forecasting corn yield with machine learning ensembles. Front. Plant Sci. 2020, 11, 1120.
17. Ribeiro, M.H.D.M.; Ribeiro, V.H.A.; Reynoso-Meza, G.; dos Santos Coelho, L. Multi-objective ensemble model for short-term price forecasting in corn price time series. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8.
18. Silva, R.F.; Barreira, B.L.; Cugnasca, C.E. Prediction of corn and sugar prices using machine learning, econometrics, and ensemble models. Eng. Proc. 2021, 9, 31.
19. Xiao, J.; Deng, T.; Bi, S. Comparative analysis of LSTM, GRU, and transformer models for stock price prediction. In Proceedings of the International Conference on Digital Economy, Blockchain and Artificial Intelligence, Guangzhou, China, 23–25 August 2024; pp. 103–108.
20. Gade, S.; Singh, A.; Patil, S. Enriching Crop Yield and Price Prediction Using Transformers. Int. J. Res. Eng. Sci. Manag. 2024, 7, 87–93.
21. Sagi, O.; Rokach, L. Ensemble learning: A survey. WIREs Data Min. Knowl. Discov. 2018, 8, e1249.
22. De Alwis, T.P.; Samadi, S.Y. Stacking-based neural network for nonlinear time series analysis. Stat. Methods Appl. 2024, 33, 901–924.
23. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306.
24. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008.
25. Ahmed, A.M.; Sharma, E.; Jui, S.J.J.; Deo, R.C.; Nguyen-Huy, T.; Ali, M. Kernel ridge regression hybrid method for wheat yield prediction with satellite-derived predictors. Remote Sens. 2022, 14, 1136.
26. Wang, Q.; Lu, H. A novel stacking ensemble learner for predicting residual strength of corroded pipelines. Npj Mater. Degrad. 2024, 8, 87.
27. Montgomery, D.C.; Peck, E.A.; Vining, G.G. Introduction to Linear Regression Analysis, 5th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012.
28. Drucker, H.; Burges, C.J.C.; Kaufman, L.; Smola, A.; Vapnik, V. Support Vector Regression Machines. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 1997; Volume 9, pp. 155–161.
29. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794.





| Model | Strengths | Weaknesses | Role in This Study |
|---|---|---|---|
| ARIMA | Ideal for short-term forecasting of stationary and linear time series | Lacks the flexibility to handle non-linear dynamics, external factors, or long-range patterns | Used for baseline comparison in the literature; not implemented in our model |
| LSTM | Capable of modeling non-linear relationships and short-term temporal dependencies | Limited in capturing long-range dependencies; may overfit to recent trends | Combined with Transformer to improve long-term sequence modeling |
| XGBoost | Powerful for non-linear regression and classification tasks | Lacks built-in mechanisms for modeling temporal dependencies in time series data | Included in the literature review for contrast; not part of the final architecture |
| Transformer | Highly effective at modeling long-range dependencies through self-attention | Requires large datasets and may underperform on short, noisy sequences | Used as a base learner to model extended temporal patterns |
| Ridge Regression (Meta-Learner) | Effectively manages multi-collinearity, limits overfitting, and enables efficient model ensemble integration | Lacks the flexibility to capture non-linear or sequential patterns | Serves as the meta-learner in the stacking ensemble to combine base model outputs |
| Traditional Ensembles | Improve robustness through model combination; simple to implement | Use static weights or averaging; lack a learning mechanism | Addressed by stacking ensemble with trainable meta-learner |
| Proposed Hybrid (Ours) | Integrates LSTM and Transformer outputs using Ridge Regression; adaptive and robust | Limited to univariate input; future work needed for multivariate extension | Core contribution of this study; improves accuracy over individual models |
| Model | Parameter | Search Range | Optimal Value |
|---|---|---|---|
| LSTM Model | units | 32, 64, 128 | units = 128 |
| | batch_size | 8, 16, 32 | batch_size = 8 |
| | epochs | 20, 40, 60 | epochs = 40 |
| | dropout | 0.0, 0.2, 0.4 | dropout = 0.0 |
| | activation | relu, tanh | activation = tanh |
| Transformer Model | num_heads | 2, 4, 8 | num_heads = 4 |
| | dense_dim | 32, 64, 128 | dense_dim = 32 |
| | dropout | 0.1, 0.2, 0.3 | dropout = 0.1 |
| | learning_rate | 0.0005, 0.001 | learning_rate = 0.001 |
| | batch_size | 16, 32 | batch_size = 32 |
| | epochs | 20, 40 | epochs = 20 |
| Stacking Meta-Learner (Ridge) | alpha | 0.001, 0.01, 0.1, 1, 10, 100 | alpha = 0.001 |
| | solver | auto, svd, cholesky, lsqr, sparse_cg, saga | solver = auto |
| | fit_intercept | True, False | fit_intercept = True |
| Model | Parameter | Search Range | Optimal Value |
|---|---|---|---|
| Linear Regression | All parameters | Default (no tuning required) | Default configuration |
| Support Vector Regression (SVR) | C | 0.1, 1, 10 | C = 1 |
| | epsilon | 0.001, 0.01, 0.1 | epsilon = 0.01 |
| | kernel | linear, rbf, poly | kernel = linear |
| XGBoost Regressor | learning_rate | 0.001, 0.01, 0.1 | learning_rate = 0.01 |
| | max_depth | 3, 5, 7 | max_depth = 3 |
| | n_estimators | 100, 200, 300 | n_estimators = 200 |
| | subsample | 0.8, 0.9, 1.0 | subsample = 1.0 |
| Model | MSE | RMSE | MAPE (%) | R² Score |
|---|---|---|---|---|
| Linear Regression | 0.007 | 0.081 | 1.364 | 0.987 |
| XGBoost Regressor | 0.020 | 0.140 | 2.107 | 0.962 |
| SVR | 0.007 | 0.082 | 1.383 | 0.987 |
| LSTM | 0.012 | 0.111 | 1.590 | 0.991 |
| Transformer | 0.048 | 0.219 | 3.555 | 0.968 |
| Stacking Ensemble | 0.003 | 0.055 | 0.940 | 0.998 |
| Date | Actual | LSTM | Transformer | Ensemble | LSTM Error (%) | Transformer Error (%) | Ensemble Error (%) |
|---|---|---|---|---|---|---|---|
| 15 January 2025 | 4.1307 | 4.08519 | 3.99710 | 4.02816 | 1.10 | 3.23 | 2.48 |
| 16 January 2025 | 4.0866 | 4.11898 | 4.00153 | 4.06386 | 0.79 | 2.08 | 0.56 |
| 17 January 2025 | 4.1834 | 4.15653 | 3.99990 | 4.10412 | 0.64 | 4.39 | 1.90 |
| 20 January 2025 | 4.2058 | 4.18362 | 3.99650 | 4.13337 | 0.53 | 4.98 | 1.72 |
| 21 January 2025 | 4.2282 | 4.21259 | 3.99138 | 4.16480 | 0.37 | 5.60 | 1.50 |
| 22 January 2025 | 4.1707 | 4.24393 | 3.99500 | 4.19794 | 1.76 | 4.21 | 0.65 |
| 23 January 2025 | 4.2223 | 4.27415 | 3.99488 | 4.23024 | 1.23 | 5.39 | 0.19 |
| 24 January 2025 | 4.1898 | 4.30499 | 3.99443 | 4.26323 | 2.75 | 4.66 | 1.75 |
| 27 January 2025 | 4.1448 | 4.33725 | 3.99810 | 4.29735 | 4.64 | 3.54 | 3.68 |
| 28 January 2025 | 4.1773 | 4.37031 | 4.00008 | 4.33249 | 4.62 | 4.24 | 3.72 |
| 29 January 2025 | 4.2935 | 4.40336 | 3.99007 | 4.36872 | 2.56 | 7.07 | 1.75 |
| 30 January 2025 | 4.2258 | 4.43742 | 3.98933 | 4.40517 | 5.01 | 5.60 | 4.24 |
| 31 January 2025 | 4.1422 | 4.47212 | 3.98805 | 4.44236 | 7.96 | 3.72 | 7.25 |
| 3 February 2025 | 4.2378 | 4.50736 | 3.98681 | 4.48012 | 6.36 | 5.92 | 5.72 |
| 4 February 2025 | 4.2945 | 4.54332 | 3.98579 | 4.51863 | 5.79 | 7.19 | 5.22 |
| 5 February 2025 | 4.2820 | 4.57999 | 3.98520 | 4.55787 | 6.96 | 6.93 | 6.44 |
| 6 February 2025 | 4.3263 | 4.61737 | 3.98418 | 4.59789 | 6.73 | 7.91 | 6.28 |
| 7 February 2025 | 4.2488 | 4.65549 | 3.98306 | 4.63872 | 9.57 | 6.25 | 9.18 |
| 10 February 2025 | 4.2880 | 4.69439 | 3.98186 | 4.68038 | 9.48 | 7.14 | 9.15 |
| 11 February 2025 | 4.2137 | 4.73406 | 3.98016 | 4.72293 | 12.35 | 5.54 | 12.09 |
| 12 February 2025 | 4.2762 | 4.77454 | 3.97807 | 4.76636 | 11.65 | 6.97 | 11.46 |
| 13 February 2025 | 4.3184 | 4.81596 | 3.97681 | 4.81072 | 11.52 | 7.91 | 11.40 |
| 14 February 2025 | 4.3459 | 4.85826 | 3.97549 | 4.85604 | 11.79 | 8.52 | 11.74 |
| Date | Actual | LSTM | Transformer | Ensemble | LSTM Error (%) | Transformer Error (%) | Ensemble Error (%) |
|---|---|---|---|---|---|---|---|
| 17 February 2025 | 4.3748 | 4.90149 | 3.97418 | 4.90234 | 12.04 | 9.16 | 12.06 |
| 18 February 2025 | 4.4037 | 4.94564 | 3.97285 | 4.94963 | 12.31 | 9.78 | 12.40 |
| 19 February 2025 | 4.3619 | 4.99072 | 3.97149 | 4.99792 | 14.42 | 8.95 | 14.58 |
| 20 February 2025 | 4.3711 | 5.03675 | 3.97005 | 5.04722 | 15.23 | 9.17 | 15.47 |
| 21 February 2025 | 4.3036 | 5.08372 | 3.96857 | 5.09754 | 18.13 | 7.78 | 18.45 |
| 24 February 2025 | 4.1969 | 5.13163 | 3.96705 | 5.14887 | 22.27 | 5.48 | 22.68 |
| 25 February 2025 | 4.1682 | 5.18058 | 3.96550 | 5.20129 | 24.29 | 4.86 | 24.79 |
| 26 February 2025 | 4.1537 | 5.23051 | 3.96396 | 5.25478 | 25.92 | 4.57 | 26.51 |
| 27 February 2025 | 4.0264 | 5.28143 | 3.96248 | 5.30931 | 31.17 | 1.59 | 31.86 |
| 28 February 2025 | 3.9598 | 5.33332 | 3.96098 | 5.36489 | 34.69 | 0.03 | 35.48 |
| 3 March 2025 | 3.8917 | 5.38633 | 3.95945 | 5.42166 | 38.41 | 1.74 | 39.31 |
| 4 March 2025 | 3.8206 | 5.44066 | 3.95791 | 5.47985 | 42.40 | 3.59 | 43.43 |
| 5 March 2025 | 3.8745 | 5.49618 | 3.95634 | 5.53930 | 41.86 | 2.11 | 42.97 |
| 6 March 2025 | 3.9574 | 5.55288 | 3.95475 | 5.60003 | 40.32 | 0.07 | 41.51 |
| 7 March 2025 | 4.0099 | 5.61076 | 3.95314 | 5.66200 | 39.92 | 1.42 | 41.20 |
| 10 March 2025 | 4.0475 | 5.66964 | 3.95153 | 5.72506 | 40.08 | 2.37 | 41.45 |
| 11 March 2025 | 4.0300 | 5.72938 | 3.94990 | 5.78903 | 42.17 | 1.99 | 43.65 |
| 12 March 2025 | 3.9395 | 5.79002 | 3.94826 | 5.85396 | 46.97 | 0.22 | 48.60 |
| 13 March 2025 | 3.9868 | 5.85149 | 3.94661 | 5.91978 | 46.77 | 1.01 | 48.48 |
| 14 March 2025 | 3.9482 | 5.91376 | 3.94495 | 5.98646 | 49.78 | 0.08 | 51.63 |
| 17 March 2025 | 4.0004 | 5.97686 | 3.94327 | 6.05403 | 49.41 | 1.43 | 51.34 |
| 18 March 2025 | 3.9779 | 6.04072 | 3.94157 | 6.12240 | 51.86 | 0.91 | 53.91 |
| 19 March 2025 | 4.0260 | 6.10526 | 3.93985 | 6.19151 | 51.65 | 2.14 | 53.79 |
| 20 March 2025 | 4.0988 | 6.17043 | 3.93812 | 6.26130 | 50.54 | 3.92 | 52.76 |
| 21 March 2025 | 4.0519 | 6.23616 | 3.93638 | 6.33167 | 53.91 | 2.85 | 56.26 |
| 24 March 2025 | 4.0544 | 6.30236 | 3.93462 | 6.40256 | 55.44 | 2.95 | 57.92 |
| 25 March 2025 | 3.9938 | 6.36895 | 3.93284 | 6.47387 | 59.47 | 1.53 | 62.10 |
| 26 March 2025 | 3.9292 | 6.43586 | 3.93106 | 6.54552 | 63.80 | 0.05 | 66.59 |
| 27 March 2025 | 3.9181 | 6.50301 | 3.92925 | 6.61742 | 65.97 | 0.28 | 68.89 |
| 28 March 2025 | 3.9540 | 6.57031 | 3.92743 | 6.68948 | 66.17 | 0.67 | 69.18 |
| 31 March 2025 | 4.0220 | 6.63767 | 3.92559 | 6.76162 | 65.03 | 2.40 | 68.12 |
| 1 April 2025 | 4.1127 | 6.70503 | 3.92373 | 6.83374 | 63.03 | 4.59 | 66.16 |