Optimizing Convolutional Neural Networks, XGBoost, and Hybrid CNN-XGBoost for Precise Red Tilapia (Oreochromis niloticus Linn.) Weight Estimation in River Cage Culture with Aerial Imagery
Abstract
1. Introduction
2. Materials and Methods
2.1. Ethical Statement and UAV Flight Permission
2.2. Study Site and Fish Sampling
2.3. Unmanned Aerial Vehicle (UAV)
2.4. Measurement of Water Quality and Wind Sampling
2.5. Image Acquisition and Pre-Image Analysis
2.6. Image Processing
2.7. Model Development Pipeline
2.8. Performance Evaluation
3. Results
3.1. Fish Weight, Water Quality, and Wind Speed
3.2. Model Performance
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Food and Agriculture Organization of the United Nations. The State of World Fisheries and Aquaculture 2020; FAO: Rome, Italy, 2020; Available online: https://www.fao.org/documents/card/en/c/ca9229en (accessed on 28 September 2023).
- Dey, M.M.; Gupta, M.V. Socioeconomics of disseminating genetically improved Nile tilapia in Asia: An introduction. Aquac. Econ. Manag. 2000, 4, 5–11. [Google Scholar] [CrossRef]
- Ansari, F.A.; Nasr, M.; Guldhe, A.; Gupta, S.K.; Rawat, I.; Bux, F. Techno-economic feasibility of algal aquaculture via fish and biodiesel production pathways: A commercial-scale application. Sci. Total Environ. 2020, 704, 135259. [Google Scholar] [CrossRef] [PubMed]
- Sgnaulin, T.; Durigon, E.G.; Pinho, S.M.; Jerônimo, T.; de Alcantara Lopes, D.L.; Emerenciano, M.G.C. Nutrition of genetically improved farmed tilapia (GIFT) in biofloc technology system: Optimization of digestible protein and digestible energy levels during nursery phase. Aquaculture 2020, 521, 734998. [Google Scholar] [CrossRef]
- Pongthana, N.; Nguyen, N.H.; Ponzoni, R.W. Comparative performance of four red tilapia strains and their crosses in fresh- and saline water environments. Aquaculture 2010, 308, S109–S114. [Google Scholar] [CrossRef]
- Tengtrairat, N.; Woo, W.L.; Parathai, P.; Rinchumphu, D.; Chaichana, C. Non-intrusive fish weight estimation in turbid water using deep learning and regression models. Sensors 2022, 22, 5161. [Google Scholar] [CrossRef] [PubMed]
- Zion, B. The use of computer vision technologies in aquaculture—A review. Comput. Electron. Agric. 2012, 88, 125–132. [Google Scholar] [CrossRef]
- Li, D.; Hao, Y.; Duan, Y. Nonintrusive methods for biomass estimation in aquaculture with emphasis on fish: A review. Rev. Aquac. 2019, 12, 1390–1411. [Google Scholar] [CrossRef]
- Rodríguez Sánchez, V.; Rodríguez-Ruiz, A.; Pérez-Arjona, I.; Encina-Encina, L. Horizontal target strength-size conversion equations for sea bass and gilt-head bream. Aquaculture 2018, 490, 178–184. [Google Scholar] [CrossRef]
- Petrell, R.J.; Shi, X.; Ward, R.K.; Naiberg, A.; Savage, C.R. Determining fish size and swimming speed in cages and tanks using simple video techniques. Aquac. Eng. 1997, 16, 63–84. [Google Scholar] [CrossRef]
- Silva, T.S.D.C.; Santos, L.D.D.; Silva, L.C.R.D.; Michelato, M.; Furuya, V.R.B.F.; Furuya, W.M. Length-weight relationship and prediction equations of body composition for growing-finishing cage-farmed Nile tilapia. Rev. Bras. Zootec. 2015, 44, 133–137. [Google Scholar] [CrossRef]
- Ashley, P.J. Fish welfare: Current issues in aquaculture. Appl. Anim. Behav. Sci. 2007, 104, 199–235. [Google Scholar] [CrossRef]
- Viazzi, S.; Van Hoestenberghe, S.; Goddeeris, B.M.; Berckmans, D. Automatic mass estimation of Jade perch Scortum barcoo by computer vision. Aquac. Eng. 2015, 64, 42–48. [Google Scholar] [CrossRef]
- Torisawa, S.; Kadota, M.; Komeyama, K.; Suzuki, K.; Takagi, T. A digital stereo-video camera system for three-dimensional monitoring of free-swimming Pacific bluefin tuna, Thunnus orientalis, cultured in a net cage. Aquat. Living. Resour. 2011, 24, 107–112. [Google Scholar] [CrossRef]
- Konovalov, D.A.; Saleh, A.; Domingos, J.A.; White, R.D.; Jerry, D.R. Estimating mass of harvested Asian seabass Lates calcarifer from images. World J. Eng. Technol. 2018, 6, 15–23. [Google Scholar] [CrossRef]
- Gümüş, E.; Yılayaz, A.; Kanyılmaz, M.; Gümüş, B.; Balaban, M.O. Evaluation of body weight and color of cultured European catfish (Silurus glanis) and African catfish (Clarias gariepinus) using image analysis. Aquac. Eng. 2021, 93, 102147. [Google Scholar] [CrossRef]
- Taparhudee, W.; Jongjaraunsuk, R. Weight estimation of Nile tilapia (Oreochromis niloticus Linn.) using image analysis with and without fins and tail. J. Fish. Environ. 2023, 47, 19–32. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Jiang, X.; Wang, Y.; Liu, W.; Li, S.; Liu, J. CapsNet, CNN, FCN: Comparative performance evaluation for image classification. Int. J. Mach. Learn. 2019, 9, 840–848. [Google Scholar] [CrossRef]
- Goodwin, A.; Padmanabhan, S.; Hira, S.; Glancey, M.; Slinowsky, M.; Immidisetti, R.; Scavo, L.; Brey, J.; Sudhakar, B.M.M.S.; Ford, T.; et al. Mosquito species identification using convolutional neural networks with a multitiered ensemble model for novel species detection. Sci. Rep. 2021, 11, 13656. [Google Scholar] [CrossRef]
- Meckbach, C.; Tiesmeyer, V.; Traulsen, I. A promising approach towards precise animal weight monitoring using convolutional neural networks. Comput. Electron. Agric. 2021, 183, 106056. [Google Scholar] [CrossRef]
- Rančić, K.; Blagojević, B.; Bezdan, A.; Ivošević, B.; Tubić, B.; Vranešević, M.; Pejak, B.; Crnojević, V.; Marko, O. Animal detection and counting from UAV images using convolutional neural networks. Drones 2023, 7, 179. [Google Scholar] [CrossRef]
- Chen, T.Q.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
- Jiao, W.; Hao, X.; Qin, C. The image classification method with CNN-XGBoost model based on adaptive particle swarm optimization. Information 2021, 12, 156. [Google Scholar] [CrossRef]
- Tseng, C.J.; Tang, C. An optimized XGBoost technique for accurate brain tumor detection using feature selection and image segmentation. Healthc. Anal. 2023, 4, 100217. [Google Scholar] [CrossRef]
- Kwenda, C.; Gwetu, M.V.; Fonou-Dombeu, J.V. Forest image classification based on deep learning and XGBoost algorithm. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023. [Google Scholar]
- Hamzaoui, M.; Aoueileyine, M.O.E.; Romdhani, L.; Bouallegue, R. Optimizing XGBoost performance for fish weight prediction through parameter pre-selection. Fishes 2023, 8, 505. [Google Scholar] [CrossRef]
- Nurdin, Z.; Hidayat, T.; Irvanizam, I. Performance comparison of hybrid CNN-XGBoost and CNN-LightGBM methods in pneumonia detection. In Proceedings of the International Conference on Electrical Engineering and Informatics (ICELTICs), Banda Aceh, Indonesia, 27–28 September 2022; pp. 31–36. [Google Scholar]
- Zivkovic, M.; Bacanin, N.; Antonijevic, M.; Nikolic, B.; Kvascev, G.; Marjanovic, M.; Savanovic, N. Hybrid CNN and XGBoost model tuned by modified arithmetic optimization algorithm for COVID-19 early diagnostics from X-ray images. Electronics 2022, 11, 3798. [Google Scholar] [CrossRef]
- Murugan, D.; Garg, A.; Singh, D. Development of an adaptive approach for precision agriculture monitoring with drone and satellite data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 5322–5328. [Google Scholar] [CrossRef]
- Shahi, T.B.; Xu, C.Y.; Neupane, A.; Guo, W. Recent advances in crop disease detection using UAV and deep learning techniques. Remote Sens. 2023, 15, 2450. [Google Scholar] [CrossRef]
- Fong, V.; Hoffmann, S.L.; Pate, J.H. Using drones to assess volitional swimming kinematics of manta ray behaviors in the wild. Drones 2022, 6, 111. [Google Scholar] [CrossRef]
- Taparhudee, W.; Jongjaraunsuk, R.; Nimitkul, S.; Mathurossuwan, W. Application of unmanned aerial vehicle (UAV) with area image analysis of red tilapia weight estimation in river-based cage culture. J. Fish. Environ. 2023, 47, 119–130. [Google Scholar]
- APHA. Standard Methods for the Examination of Water and Wastewater, 20th ed.; American Public Health Association, American Water Works Association, Water Environment Federation: Washington, DC, USA, 2005. [Google Scholar]
- Tran-Duy, A.; Van Dam, A.A.; Schrama, J.W. Feed intake, growth and metabolism of Nile tilapia (Oreochromis niloticus) in relation to dissolved oxygen concentration. Aquac. Res. 2012, 43, 730–744. [Google Scholar] [CrossRef]
- Lawson, T.B. Fundamentals of Aquacultural Engineering; Chapman & Hall: New York, NY, USA, 1995. [Google Scholar]
- Sriyasak, P.; Chitmanat, C.; Whangchai, N.; Promya, J.; Lebel, L. Effect of water de-stratification on dissolved oxygen and ammonia in tilapia ponds in Northern Thailand. Int. Aquat. Res. 2012, 7, 287–299. [Google Scholar] [CrossRef]
- Wallace, L.; Lucieer, A.; Watson, C.; Turner, D. Development of a UAV-LiDAR system with application to forest inventory. Remote Sens. 2012, 4, 1519–1543. [Google Scholar] [CrossRef]
- Seifert, E.; Seifert, S.; Vogt, H.; Drew, D.; Aardt, J.V.; Kunneke, A.; Seifert, T. Influence of drone altitude, image overlap, and optical sensor resolution on multi-view reconstruction of forest images. Remote Sens. 2019, 11, 1252. [Google Scholar] [CrossRef]
- Ren, X.; Guo, H.; Li, S.; Wang, S.; Li, J. A novel image classification method with CNN-XGBoost model. In Digital Forensics and Watermarking, Proceedings of the 16th International Workshop, IWDW 2017, Magdeburg, Germany, 23–25 August 2017; Lecture Notes in Computer Science; Kraetzer, C., Shi, Y.Q., Dittmann, J., Kim, H., Eds.; Springer: Cham, Switzerland, 2017; Volume 10431. [Google Scholar]
- Zhang, J.; Zhuang, Y.; Ji, H.; Teng, G. Pig weight and body size estimation using a multiple output regression convolutional neural network: A fast and fully automatic method. Sensors 2021, 21, 3218. [Google Scholar] [CrossRef]
- Suwannasing, P.; Jongjaraunsuk, R.; Yoonpundh, R.; Taparhudee, W. A comparison of image segmentation and non-segmentation to classify average weight of red tilapia using machine learning techniques (in Thai). Burapha Sci. J. 2023, 28, 208–222. [Google Scholar]
Specification | Value and Description |
---|---|
Flight time | 34 min |
Max service ceiling above sea level | 5000 m |
Transmission system | OcuSync 2.0 |
Weight | 595 g |
Folded size | 180 × 97 × 77 mm (length × width × height) |
Max speed | 6 m/s (standard mode) |
Maximum wind speed resistance | 10.7 m/s |
Obstacle avoidance | 3-Direction cameras and IR |
Special features | 4K/60 fps video, HDR, 48 MP photos |
Phone charging | Available |
Takeoff and landing light | Available |
Internal storage | 8 GB |
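For flight planning, the specifications above can be treated as a simple configuration and checked against on-site conditions. The sketch below is a minimal illustration in Python; the dataclass, its field names, and the 50% safety margin are our own assumptions rather than anything specified in the paper.

```python
# Hypothetical flight-planning check built from the UAV specifications above.
from dataclasses import dataclass

@dataclass
class UAVSpec:
    flight_time_min: float = 34.0          # rated flight time
    max_wind_resistance_ms: float = 10.7   # rated wind-speed resistance
    max_speed_ms: float = 6.0              # standard mode
    photo_resolution_mp: int = 48

def safe_to_fly(spec: UAVSpec, wind_speed_ms: float, margin: float = 0.5) -> bool:
    """Conservative go/no-go check: measured wind must stay well below the rated limit."""
    return wind_speed_ms <= margin * spec.max_wind_resistance_ms

# The highest mean wind speed recorded during sampling was about 2.27 m/s.
print(safe_to_fly(UAVSpec(), wind_speed_ms=2.27))  # True
```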
Flight | Date | Number of Cages | Number of Images/Cage | Total Number of Images |
---|---|---|---|---|
1 | 22 January 2023 | 8 | 50 | 400 |
2 | 3 February 2023 | 8 | 50 | 400 |
3 | 18 February 2023 | 8 | 50 | 400 |
4 | 4 March 2023 | 8 | 50 | 400 |
5 | 17 March 2023 | 8 | 50 | 400 |
6 | 1 April 2023 | 8 | 50 | 400 |
7 | 22 April 2023 | 8 | 50 | 400 |
8 | 6 May 2023 | 8 | 50 | 400 |
9 | 20 May 2023 | 8 | 50 | 400 |
Total | | | | 3600 |
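As a quick arithmetic check of the acquisition schedule, the per-flight and overall totals follow directly from 8 cages × 50 images per cage over 9 flights:

```python
# Image acquisition totals implied by the flight schedule above.
n_flights, n_cages, images_per_cage = 9, 8, 50

images_per_flight = n_cages * images_per_cage   # 8 * 50 = 400 images per flight
total_images = n_flights * images_per_flight    # 9 * 400 = 3600 images in total

print(images_per_flight, total_images)  # 400 3600
```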
Model | Structure |
---|---|
CNN | # Define the CNN model: model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, (3, 3), input_shape=(64, 64, 1), activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=(2, 2)), tf.keras.layers.Conv2D(64, (3, 3), activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=(2, 2)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(len(classes), activation='softmax') ]) # Compile the model: model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # Train the model: history = model.fit(x_train, y_train, epochs=<fine-tuned>, validation_data=(x_val, y_val)) |
XGBoost | # Define the XGBoost model: model = xgb.XGBClassifier(objective='multi:softmax', num_class=len(classes), eval_metric='mlogloss', n_estimators=<fine-tuned>) # Train the model: model.fit(x_train, y_train) |
CNN-XGBoost | # Define and train the CNN model (epochs=<fine-tuned>), then extract features with it: cnn_features_train = cnn_model.predict(x_train); cnn_features_val = cnn_model.predict(x_val); cnn_features_test = cnn_model.predict(x_test) # Combine CNN features with the flattened original features: x_train_combined = np.concatenate((x_train.reshape((x_train.shape[0], -1)), cnn_features_train), axis=1); x_val_combined = np.concatenate((x_val.reshape((x_val.shape[0], -1)), cnn_features_val), axis=1); x_test_combined = np.concatenate((x_test.reshape((x_test.shape[0], -1)), cnn_features_test), axis=1) # Define the XGBoost model (n_estimators=<fine-tuned>) and train it on the combined features: model.fit(x_train_combined, y_train) |
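The structures above are condensed from the original listing; for clarity, a fuller, self-contained sketch of the hybrid CNN-XGBoost pipeline is given below. The image arrays are random placeholders standing in for the 64 × 64 grayscale inputs, the class count of nine matches the weight classes in the Results, and the epoch/estimator values are illustrative rather than the fine-tuned settings reported in Section 3.2.

```python
# Runnable sketch of the hybrid CNN-XGBoost pipeline. Data are random placeholders;
# epoch and n_estimators values are illustrative, not the fine-tuned settings.
import numpy as np
import tensorflow as tf
import xgboost as xgb

num_classes = 9  # nine weight classes
rng = np.random.default_rng(0)
x_train = rng.random((200, 64, 64, 1), dtype=np.float32)
y_train = rng.integers(0, num_classes, 200)
x_val = rng.random((50, 64, 64, 1), dtype=np.float32)
y_val = rng.integers(0, num_classes, 50)
x_test = rng.random((50, 64, 64, 1), dtype=np.float32)
y_test = rng.integers(0, num_classes, 50)

# CNN: trained as a classifier, then reused as a feature extractor
cnn_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
cnn_model.fit(x_train, y_train, epochs=5, validation_data=(x_val, y_val), verbose=0)

# Extract CNN outputs as learned features
cnn_feat_train = cnn_model.predict(x_train, verbose=0)
cnn_feat_test = cnn_model.predict(x_test, verbose=0)

# Concatenate flattened pixels with CNN features, then train XGBoost on the combined vectors
x_train_comb = np.concatenate([x_train.reshape(len(x_train), -1), cnn_feat_train], axis=1)
x_test_comb = np.concatenate([x_test.reshape(len(x_test), -1), cnn_feat_test], axis=1)

xgb_model = xgb.XGBClassifier(objective='multi:softmax', num_class=num_classes,
                              eval_metric='mlogloss', n_estimators=10)
xgb_model.fit(x_train_comb, y_train)
print(xgb_model.predict(x_test_comb)[:10])
```

In this arrangement the CNN's softmax outputs act as learned features that are appended to the raw pixel vector before the XGBoost stage, which is one common way of combining the two models and matches the concatenation step shown in the table.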
Weight Class | Fish Weight (g/fish) | DO (mg/L) | Temp (°C) | pH | Alkalinity (mg/L) | TAN (mg/L) | Transparency (cm) | Wind Speed (m/s) |
---|---|---|---|---|---|---|---|---|
1 | 119.38–170.28 | 4.26 ± 0.04 | 25.90 ± 0.00 | 7.47 ± 0.01 | 122.34 ± 6.13 | 0.09 ± 0.01 | 70 ± 0 | 1.32 ± 0.04 |
2 | 180.65–237.19 | 3.73 ± 0.07 | 25.20 ± 0.00 | 7.44 ± 0.00 | 126.00 ± 5.66 | 0.10 ± 0.02 | 80 ± 0 | 0.89 ± 0.07 |
3 | 234.33–310.31 | 3.80 ± 0.09 | 26.20 ± 0.00 | 7.46 ± 0.00 | 112.00 ± 2.83 | 0.12 ± 0.03 | 70 ± 0 | 1.26 ± 0.08 |
4 | 308.76–391.24 | 4.43 ± 0.10 | 26.05 ± 0.21 | 7.52 ± 0.01 | 110.00 ± 1.41 | 0.11 ± 0.04 | 70 ± 0 | 1.06 ± 0.47 |
5 | 404.07–496.41 | 3.95 ± 0.11 | 27.13 ± 0.04 | 7.53 ± 0.01 | 114.34 ± 3.30 | 0.10 ± 0.03 | 100 ± 0 | 0.66 ± 0.11 |
6 | 474.59–568.17 | 3.48 ± 0.08 | 28.35 ± 0.21 | 7.53 ± 0.01 | 122.00 ± 2.83 | 0.15 ± 0.01 | 90 ± 0 | 0.78 ± 0.01 |
7 | 564.53–662.61 | 3.66 ± 0.08 | 29.27 ± 0.04 | 7.52 ± 0.01 | 109.43 ± 4.85 | 0.17 ± 0.02 | 98 ± 5 | 1.98 ± 0.17 |
8 | 625.92–741.06 | 3.40 ± 0.19 | 29.69 ± 0.05 | 7.59 ± 0.10 | 105.67 ± 10.84 | 0.20 ± 0.03 | 90 ± 0 | 0.81 ± 0.34 |
9 | 695.91–830.29 | 3.82 ± 0.20 | 29.60 ± 0.42 | 7.60 ± 0.02 | 107.00 ± 4.24 | 0.20 ± 0.04 | 105 ± 0 | 2.27 ± 0.77 |
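Assuming the values in the table are means ± standard deviations of replicate readings taken on each sampling date, a minimal sketch of how such a summary could be produced from raw measurements is shown below; the column names and the illustrative readings are ours.

```python
# Hypothetical summary of raw water-quality readings into mean ± SD per weight class.
import pandas as pd

# Illustrative raw data: two replicate DO readings per weight class (values invented).
raw = pd.DataFrame({
    'weight_class': [1, 1, 2, 2],
    'do_mg_l': [4.23, 4.29, 3.68, 3.78],
})

summary = raw.groupby('weight_class')['do_mg_l'].agg(['mean', 'std'])
for wc, row in summary.iterrows():
    print(f"Class {wc}: DO = {row['mean']:.2f} ± {row['std']:.2f} mg/L")
```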
Model | Parameter Setting | Accuracy | Precision | Recall | F1 Score | Processing Time/Image |
---|---|---|---|---|---|---|
CNN | 10 epochs | 0.520 ± 0.037 | 0.542 ± 0.038 | 0.506 ± 0.034 | 0.490 ± 0.051 | 0.840 s |
CNN | 15 epochs | 0.544 ± 0.055 | 0.558 ± 0.063 | 0.538 ± 0.054 | 0.516 ± 0.063 | 0.920 s |
CNN | 20 epochs | 0.620 ± 0.042 | 0.628 ± 0.051 | 0.614 ± 0.045 | 0.608 ± 0.046 | 0.980 s |
CNN | 25 epochs | 0.666 ± 0.043 | 0.692 ± 0.034 | 0.662 ± 0.048 | 0.656 ± 0.043 | 1.120 s |
CNN | 30 epochs | 0.702 ± 0.036 | 0.716 ± 0.038 | 0.698 ± 0.033 | 0.694 ± 0.036 | 1.280 s |
CNN | 35 epochs | 0.692 ± 0.022 | 0.706 ± 0.021 | 0.684 ± 0.026 | 0.680 ± 0.025 | 1.560 s |
CNN | 40 epochs | 0.710 ± 0.027 | 0.720 ± 0.025 | 0.708 ± 0.029 | 0.708 ± 0.029 | 1.660 s |
CNN | 45 epochs | 0.720 ± 0.029 | 0.728 ± 0.033 | 0.716 ± 0.034 | 0.716 ± 0.032 | 1.900 s |
CNN | 50 epochs | 0.678 ± 0.045 | 0.688 ± 0.043 | 0.674 ± 0.044 | 0.670 ± 0.043 | 2.000 s |
CNN | 55 epochs | 0.730 ± 0.007 | 0.730 ± 0.016 | 0.722 ± 0.011 | 0.720 ± 0.007 | 2.260 s |
CNN | 60 epochs | 0.748 ± 0.019 | 0.750 ± 0.019 | 0.740 ± 0.014 | 0.740 ± 0.019 | 2.540 s |
CNN | 65 epochs | 0.742 ± 0.016 | 0.748 ± 0.013 | 0.738 ± 0.018 | 0.738 ± 0.018 | 2.880 s |
CNN | 70 epochs | 0.730 ± 0.040 | 0.736 ± 0.029 | 0.728 ± 0.034 | 0.724 ± 0.037 | 2.980 s |
XGBoost | 10 n_estimators | 0.480 ± 0.000 | 0.470 ± 0.000 | 0.470 ± 0.000 | 0.470 ± 0.000 | 0.420 s |
XGBoost | 15 n_estimators | 0.500 ± 0.000 | 0.490 ± 0.000 | 0.490 ± 0.000 | 0.490 ± 0.000 | 0.440 s |
XGBoost | 20 n_estimators | 0.510 ± 0.000 | 0.500 ± 0.000 | 0.500 ± 0.000 | 0.500 ± 0.000 | 0.500 s |
XGBoost | 25 n_estimators | 0.530 ± 0.000 | 0.520 ± 0.000 | 0.520 ± 0.000 | 0.520 ± 0.000 | 0.560 s |
XGBoost | 30 n_estimators | 0.530 ± 0.000 | 0.520 ± 0.000 | 0.510 ± 0.000 | 0.510 ± 0.000 | 0.600 s |
XGBoost | 35 n_estimators | 0.540 ± 0.000 | 0.530 ± 0.000 | 0.520 ± 0.000 | 0.520 ± 0.000 | 0.660 s |
XGBoost | 40 n_estimators | 0.550 ± 0.000 | 0.540 ± 0.000 | 0.540 ± 0.000 | 0.540 ± 0.000 | 0.700 s |
XGBoost | 45 n_estimators | 0.560 ± 0.000 | 0.550 ± 0.000 | 0.550 ± 0.000 | 0.550 ± 0.000 | 0.720 s |
XGBoost | 50 n_estimators | 0.560 ± 0.000 | 0.550 ± 0.000 | 0.550 ± 0.000 | 0.540 ± 0.000 | 0.760 s |
XGBoost | 55 n_estimators | 0.560 ± 0.000 | 0.540 ± 0.000 | 0.540 ± 0.000 | 0.540 ± 0.000 | 0.780 s |
XGBoost | 60 n_estimators | 0.560 ± 0.000 | 0.550 ± 0.000 | 0.550 ± 0.000 | 0.540 ± 0.000 | 0.820 s |
Hybrid CNN-XGBoost | 10 epochs/10 n_estimators | 0.722 ± 0.030 | 0.716 ± 0.031 | 0.716 ± 0.031 | 0.714 ± 0.029 | 0.920 s |
Hybrid CNN-XGBoost | 15 epochs/15 n_estimators | 0.720 ± 0.024 | 0.714 ± 0.021 | 0.712 ± 0.022 | 0.712 ± 0.022 | 1.040 s |
Hybrid CNN-XGBoost | 20 epochs/20 n_estimators | 0.750 ± 0.019 | 0.750 ± 0.019 | 0.744 ± 0.017 | 0.740 ± 0.019 | 1.220 s |
Hybrid CNN-XGBoost | 25 epochs/25 n_estimators | 0.734 ± 0.021 | 0.736 ± 0.023 | 0.728 ± 0.020 | 0.726 ± 0.018 | 1.540 s |
Hybrid CNN-XGBoost | 30 epochs/30 n_estimators | 0.746 ± 0.019 | 0.744 ± 0.017 | 0.738 ± 0.022 | 0.740 ± 0.021 | 1.740 s |
Hybrid CNN-XGBoost | 35 epochs/35 n_estimators | 0.758 ± 0.023 | 0.754 ± 0.022 | 0.750 ± 0.023 | 0.750 ± 0.023 | 1.800 s |
Hybrid CNN-XGBoost | 40 epochs/40 n_estimators | 0.748 ± 0.011 | 0.752 ± 0.015 | 0.742 ± 0.015 | 0.742 ± 0.015 | 1.980 s |
Hybrid CNN-XGBoost | 45 epochs/45 n_estimators | 0.760 ± 0.019 | 0.762 ± 0.019 | 0.754 ± 0.019 | 0.752 ± 0.019 | 2.240 s |
Hybrid CNN-XGBoost | 50 epochs/50 n_estimators | 0.746 ± 0.027 | 0.746 ± 0.027 | 0.740 ± 0.024 | 0.742 ± 0.026 | 2.440 s |
Hybrid CNN-XGBoost | 55 epochs/55 n_estimators | 0.734 ± 0.021 | 0.736 ± 0.022 | 0.728 ± 0.022 | 0.726 ± 0.022 | 2.480 s |
Hybrid CNN-XGBoost | 60 epochs/60 n_estimators | 0.746 ± 0.017 | 0.750 ± 0.021 | 0.744 ± 0.017 | 0.744 ± 0.017 | 2.740 s |
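Metrics of this kind can be reproduced from model predictions with scikit-learn; the sketch below assumes macro-averaged precision, recall, and F1 across the nine classes and aggregation as mean ± SD over repeated train/test runs, which are our assumptions about how the tabulated values were obtained, and the helper names are ours.

```python
# Sketch of the evaluation metrics reported above (accuracy, precision, recall, F1,
# and processing time per image). Macro averaging and aggregation over repeated runs
# are assumptions, not details taken from the paper.
import time
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_run(model, x_test, y_test):
    """Evaluate one trained classifier whose .predict() returns class labels."""
    start = time.perf_counter()
    y_pred = model.predict(x_test)
    time_per_image = (time.perf_counter() - start) / len(x_test)
    acc = accuracy_score(y_test, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_test, y_pred, average='macro', zero_division=0)
    return acc, prec, rec, f1, time_per_image

def summarize(runs):
    """runs: list of tuples from evaluate_run(), one per repeated train/test run."""
    arr = np.asarray(runs)
    means, sds = arr.mean(axis=0), arr.std(axis=0, ddof=1)
    names = ['Accuracy', 'Precision', 'Recall', 'F1 score', 'Time/image (s)']
    for name, m, s in zip(names, means, sds):
        print(f'{name}: {m:.3f} ± {s:.3f}')

# Example usage with any fitted classifier `clf` and held-out (x_test, y_test):
# summarize([evaluate_run(clf, x_test, y_test) for _ in range(5)])
```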
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).