Data-Efficient Sowing Position Estimation for Agricultural Robots Combining Image Analysis and Expert Knowledge
Abstract
1. Introduction
- We asked experienced practitioners of synecological farming to label sowing instructions as training data and analyzed trends in the quantity and location of their sowing decisions for constructing an ecosystem.
- Based on the analysis results, we created a framework for sowing evaluation and designed features for understanding vegetation conditions, including ISOM.
- By computing the identified key features with image processing technology, we trained a model that predicts the sowing quantity and positions.
- We evaluated the model using data labeled by the experienced practitioners.
2. Materials and Methods
2.1. Operating Robot
2.2. Data Acquisition and Processing Operation Verification Environment
- OS: Windows 10
- CPU: 12th Gen Intel® Core™ i7-1265U 1.80 GHz
- RAM: 32.0 GB
- Geekbench 6 CPU benchmark scores: 2154 points (single core) and 4637 points (all cores) [50].
2.3. Dataset
- Amateur: A person who has a basic understanding of the synecological farming method but has never practiced it.
- Beginner: A person engaged in synecological farming for about one year. He/she has no academic background in agronomy.
- Intermediate: A person engaged in synecological farming for more than two years. His/her academic background is in agronomy or biology, and his/her research topics relate to agronomy and synecological farming.
- Advanced: A person engaged in synecological farming for more than four years and who is in a synecological farming organization, frequently manages and operates densely mixed polyculture fields, and is familiar with the synecological farming method. As for research, he/she previously majored in biology and is currently researching synecological farming.
- Supervised-Advanced: Among the data for which the advanced proficiency level person performed sowing position labeling, some data should have been labeled as sowing position but was not due to a recognition mistake. Specifically, in the upper-right-center area of Image 1, it was mistakenly thought that the area’s topsoil was covered with plantation even though it was not. When questioned, this person responded that if the area was not covered, then it should have been labeled as a sowing area. We therefore supplemented this area by referring to and integrating the labeling data of Intermediate, which had been properly designated for sowing.
3. Results
3.1. Analysis of Topsoil Plantation Coverage Area Detection Process
- TP: True positive is the number correctly classified as positive by the prediction model. It is the number of cases that were predicted to be true where they were actually true.
- TN: True negative is the number of cases correctly classified as negative by the predictive model. It is the number of cases that were predicted to be false when they were actually false.
- FP: False positive is the number of cases incorrectly classified as positive by the prediction model, i.e., predicted to be true when they were actually false.
- FN: False negative is the number of cases incorrectly classified as negative (missed detections), i.e., predicted to be false when they were actually true.
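For clarity, the sketch below shows how these counts translate into the standard summary metrics (precision, recall, F1, accuracy) for a binary coverage mask. It is an illustrative helper under assumed array inputs, not the authors' evaluation script.

```python
# Minimal sketch (not the authors' evaluation code): pixel-wise metrics from a
# predicted coverage mask and a ground-truth mask (True/nonzero = covered).
import numpy as np

def coverage_metrics(pred_mask: np.ndarray, true_mask: np.ndarray) -> dict:
    """Compute TP/TN/FP/FN and derived scores for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)

    tp = np.sum(pred & true)        # predicted covered, actually covered
    tn = np.sum(~pred & ~true)      # predicted bare, actually bare
    fp = np.sum(pred & ~true)       # false alarm: predicted covered, actually bare
    fn = np.sum(~pred & true)       # miss: predicted bare, actually covered

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"TP": int(tp), "TN": int(tn), "FP": int(fp), "FN": int(fn),
            "precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}
```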
3.2. Analysis of Human-Labeled Sowing Data by Proficiency Level
- The amateur tended to designate sowing only in cohesive areas with no plantation coverage, so many areas that should have been sown were missed. In contrast, the beginner/intermediate/advanced practitioners, who are experienced in synecological farming, tended to designate sowing areas that the amateur overlooked and covered the entire field well. However, there were some cases of mistaken or overlooked areas that needed sowing, such as areas recognized as covered that were not; this occurred even for the advanced practitioner.
- The number of sowings tended to be either too high or too low for the amateur/beginner group, whereas it tended to be appropriate for the intermediate/advanced group. Specifically, amateurs/beginners tended to be oversensitive to uncovered areas and over-sowed. The intermediate/advanced group, on the other hand, tended to designate an appropriate number of sowings even in uncovered areas, depending on the surrounding vegetation and the cohesiveness of the uncovered area, and to specify sowing according to the vegetation conditions even when the uncovered area was small.
- The amateur/beginner group could not designate sowing based on how the current vegetation growth and topsoil plantation coverage would develop after time had elapsed, and was strongly biased by visual information on the extent of the uncovered areas. In contrast, the intermediate/advanced group specified sowing in consideration of both the current vegetation growth and the future topsoil plantation coverage conditions.
3.3. Sowing Position Prediction Process
3.3.1. Process Flow of Sowing Position Prediction Process
- ISOM-AS: This abbreviation stands for Integrated Inter-Subjective Objective Model—Appraisal Score. The model is trained to predict appraisal scores within the ISOM framework and performs a subjective evaluation of how good the field is in terms of synecological farming [6]. The original model used 241-dimensional features, combining 23 dimensions derived from AMeDAS (meteorological) information and 218 dimensions extracted from images. In our study, the model was modified to use only the 218-dimensional image features, excluding the AMeDAS information, so that it can be applied even where AMeDAS information cannot be obtained. The features can be summarized as follows. First, 95 dimensions of color-related information (e.g., the mean of the red channel, the standard deviation of the blue channel) and edge-related information are extracted from the entire segmented image. These include information on the relationships between the RGB channels (e.g., green–blue covariance, the difference between the red mean and the blue mean), information related to the HSV space, and texture information computed with the Gray-Level Co-occurrence Matrix (GLCM). Next, the same 95-dimensional features are extracted from the areas cut out by the coverage area detection process. Finally, information related to the coverage rate estimated by the coverage area detection process is obtained: the coverage rate of each segmented image is reintegrated into its pre-segmentation arrangement, treated as a coverage rate matrix, and 28-dimensional features are extracted by applying the GLCM to it. Together, these form the 218-dimensional feature vector. For details, please refer to the Supplementary Material. (A minimal sketch of this type of feature extraction is given after this list.)
- Coverage Ratio: This is one of the original features used in ISOM-AS, which we upgraded for our study. It indicates the ratio of the topsoil plantation coverage area, obtained by applying the coverage area detection process, to the entire image.
- Uncoverage Ratio: We created this feature because practitioners with high proficiency made a conscious decision not to sow in areas with many freshly germinated plants. Specifically, the feature is designed so that areas with many freshly germinated plants are counted as covered; its value therefore differs from feature 2 (the coverage ratio) in such areas, which makes it possible to recognize the presence of freshly germinated plants and how they are distributed. The specific processing is as follows. With feature 2, an area with many newly germinated plants is recognized as a mosaic of plantation-covered patches and bare patches. To exclude such an area from the sowing targets, it is merged into the covered area by applying the "closing" image-processing operation. The uncovered area and its ratio are then calculated by inverting the resulting covered-area image. The closing operation is applied to binarized image data: it first dilates the area by the specified kernel size and then erodes it by the same kernel size, which connects nearby regions that should be joined without changing the total area too much. Here, we set the kernel applied in the dilation and erosion steps to 5 × 5 pixels and the number of iterations of the dilation–erosion sequence to 2. (A sketch of this closing step is given after this list.)
- Sensed Height Average: This feature is the average height within each divided area of the height map. Since we assume the field's topsoil is flat, height values greater than that of the soil indicate the presence of large growing plants. Practitioners with a higher proficiency grade tend to sow differently depending on the extent and number of growing plants, so we designed and introduced this feature to help perceive such areas.
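As a rough illustration of the per-tile statistics described for ISOM-AS (channel means and standard deviations, inter-channel covariances, HSV statistics, and GLCM texture properties), the sketch below computes a small subset of such features with OpenCV and scikit-image. It is not the authors' 218-dimensional extractor; the tile input, GLCM distances/angles, and the selected properties are assumptions for illustration.

```python
# Minimal sketch of ISOM-AS-style color/texture statistics for one image tile.
# Not the authors' extractor; parameter choices are illustrative assumptions.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def tile_features(tile_bgr: np.ndarray) -> np.ndarray:
    feats = []

    # Per-channel color statistics (mean and standard deviation for B, G, R).
    channels = cv2.split(tile_bgr.astype(np.float32))
    for ch in channels:
        feats.extend([float(ch.mean()), float(ch.std())])

    # Pairwise covariances between the color channels.
    flat = [ch.ravel() for ch in channels]
    for i in range(3):
        for j in range(i + 1, 3):
            feats.append(float(np.cov(flat[i], flat[j])[0, 1]))

    # HSV statistics.
    hsv = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2HSV)
    for ch in cv2.split(hsv.astype(np.float32)):
        feats.extend([float(ch.mean()), float(ch.std())])

    # GLCM texture properties on the grayscale tile.
    gray = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2GRAY)
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats.extend(graycoprops(glcm, prop).ravel().tolist())

    return np.asarray(feats, dtype=np.float32)
```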
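The closing step and the resulting grouped uncoverage ratio can be sketched as follows, using OpenCV with the stated 5 × 5 kernel and 2 iterations. The mask format and function names are assumptions for illustration, not the authors' implementation; the sensed height average of a divided area is included for completeness.

```python
# Minimal sketch of the "closing" step and the grouped uncoverage ratio described
# above. Input: a binary coverage mask from the coverage area detection process
# (255 = covered, 0 = bare). Names and mask format are illustrative assumptions.
import cv2
import numpy as np

def grouped_uncoverage_ratio(coverage_mask: np.ndarray,
                             kernel_size: int = 5,
                             iterations: int = 2) -> float:
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    # Closing = dilation followed by erosion: connects nearby covered patches
    # (e.g., mosaics of freshly germinated plants) without greatly changing area.
    closed = cv2.morphologyEx(coverage_mask, cv2.MORPH_CLOSE, kernel,
                              iterations=iterations)
    uncovered = cv2.bitwise_not(closed)          # reversed image: bare soil
    return float(np.count_nonzero(uncovered)) / uncovered.size

def sensed_height_average(height_map: np.ndarray) -> float:
    # Average sensed height over a divided area of the height map; values above
    # the (assumed flat) soil level indicate larger growing plants.
    return float(np.nanmean(height_map))
```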
3.3.2. Evaluation of Sowing Number Estimator Model
3.4. Path Planning Process and Evaluation
- The height at which the robot arm is raised so that it does not damage the plants is determined, and the robot moves through the planned path at that height.
- When it reaches the target coordinates for sowing in the XY-coordinates, the robot is controlled in the Z-axis direction (vertical direction) to execute sowing.
- The robot is then raised back to the same height in the Z-axis direction, and the control is repeated, moving in XY-space to the next sowing coordinate.
- Using the 2D height data, the sowing position coordinates given in 2D image space are converted to 3D real-space coordinates in meters.
- The RGB image is used as the input to perform the coverage area prediction and grouped uncoverage area prediction processes.
- By comparing the estimated sowing position and predicted coverage area, it is determined whether the estimated sowing position coordinates are specified on the vegetation or not.
- If the sowing position coordinates are not on vegetation, there is no problem. If they are on vegetation, however, the sensed height at that point reflects the plant rather than the soil, so the Z-axis target would be set too high and might cause a malfunction. In that case, a non-vegetated area is derived from the results of the grouped uncoverage area prediction process, and the soil height in the surrounding area is estimated as the average of the height data within that area; that value is then used as the Z-axis value (a sketch of this height selection follows this list).
- The path in XY-space is planned, the appropriate arm movement height in the Z-axis is set, and the output is a control matrix in XYZ-space from which the series of robotic movements is generated (a minimal path-ordering sketch also follows this list).
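The Z-axis target selection described above can be sketched as follows: if the sowing coordinate falls on predicted vegetation, the soil height is instead estimated as the mean height of the surrounding non-vegetated (grouped uncovered) area. Array layouts, the window size, and function names are assumptions for illustration.

```python
# Minimal sketch of Z-axis target selection; not the authors' implementation.
import numpy as np

def z_target(row: int, col: int,
             height_map: np.ndarray,        # sensed height per pixel [m]
             coverage_mask: np.ndarray,     # True where vegetation is predicted
             uncovered_mask: np.ndarray,    # True in grouped uncovered (bare) areas
             window: int = 25) -> float:
    if not coverage_mask[row, col]:
        # Not on vegetation: the sensed height at the point can be used directly.
        return float(height_map[row, col])
    # On vegetation: average the soil height over nearby bare pixels instead.
    r0, r1 = max(0, row - window), row + window + 1
    c0, c1 = max(0, col - window), col + window + 1
    local_bare = uncovered_mask[r0:r1, c0:c1]
    local_height = height_map[r0:r1, c0:c1]
    if np.any(local_bare):
        return float(local_height[local_bare].mean())
    return float(np.nanmean(local_height))  # fallback if no bare soil is nearby
```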
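To make the XY-space ordering step concrete, the sketch below shows a greedy nearest-neighbor tour followed by 2-opt improvement over the sowing coordinates. It covers only the waypoint ordering; the Z-axis height selection and the pixel-to-meter conversion are assumed to be done beforehand, and all names and the example coordinates are illustrative rather than the authors' implementation.

```python
# Minimal sketch of greedy + 2-opt ordering of sowing waypoints in XY-space.
import numpy as np

def greedy_order(points: np.ndarray, start: np.ndarray) -> list[int]:
    """Visit each sowing point by always moving to the nearest unvisited one."""
    remaining = list(range(len(points)))
    order, current = [], start
    while remaining:
        dists = [np.linalg.norm(points[i] - current) for i in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))
        order.append(nxt)
        current = points[nxt]
    return order

def two_opt(points: np.ndarray, order: list[int]) -> list[int]:
    """Repeatedly reverse sub-tours while that shortens the total XY path."""
    def length(o):
        return sum(np.linalg.norm(points[o[k + 1]] - points[o[k]])
                   for k in range(len(o) - 1))
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if length(candidate) < length(order):
                    order, improved = candidate, True
    return order

# Usage sketch: order sowing targets (in meters) starting from the arm's home XY.
targets = np.array([[0.2, 0.1], [0.5, 0.4], [0.1, 0.6], [0.7, 0.2]])
order = two_opt(targets, greedy_order(targets, start=np.array([0.0, 0.0])))
```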
3.5. Integrated Overall Evaluation Results
4. Discussion
- (1) Improving the accuracy of sowing position estimation: Currently, the number of sowings per area is estimated by extracting 218-dimensional image features from the image and using four types of information as features: the output of the ISOM-AS inter-subjective objectivity model, the coverage ratio, the grouped uncoverage ratio, and the sensed height average. The performance of the sowing number estimation model is presumed to lie between the intermediate and advanced proficiency grades of the synecological farming method. To improve versatility, we recommend collecting and training on a large amount of data covering various vegetation conditions, since the number of training and test samples is currently small. In addition, to improve accuracy, it would be effective to increase the number of features relevant to sowing, as the four features currently used are still insufficient as information for expert-level sowing decisions. In particular, the plantation height information currently uses only the average value, so there is a high possibility that effective features can be found by increasing the resolution of the vegetation height information.
- (2) Speeding up the sowing operation: The current operation time is still not fast enough for practical use. In our evaluation, we compared the overall time required, from the processing time of the software that performs the estimation calculations to the actual control time required by the robot hardware, and found that the bottleneck is the control time of the Z-axis. There are two major approaches to improving this. The first is to make the path planning more sophisticated; specifically, there is room for improvement in the optimization of the Z-axis, for example by improving the software to minimize Z-axis movement through path planning that adapts to the plantation height of the observed area. The second is to increase the Z-axis movement speed, since the Z-axis of the current hardware is slow compared with the other axes.
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Funabashi, M. Human augmentation of ecosystems: Objectives for food production and science by 2045. NPJ Sci. Food 2018, 2, 16. [Google Scholar] [CrossRef] [PubMed]
- Funabashi, M. Synecological farming: Theoretical foundation on biodiversity responses of plant communities. Plant Biotechnol. Spec. Issue Plants Environ. Responses 2016, 33, 213–234. [Google Scholar] [CrossRef] [PubMed]
- Funabashi, M. Synecoculture Manual 2016 Version (English Version). Research and Education Material of UniTwin UNESCO Complex Systems Digital Campus, e-Laboratory: Open Systems Exploration for Ecosystems Leveraging 2016, 2. Available online: https://synecoculture.sonycsl.co.jp/public/2016%20Synecoculture%20Manual_compressed.pdf (accessed on 2 December 2024).
- Funabashi, M. Power-law productivity of highly biodiverse agroecosystems supports land recovery and climate resilience. NPJ Sustain. Agric. 2024, 2, 8. [Google Scholar] [CrossRef]
- Ohta, K.; Kawaoka, T.; Funabashi, M. Secondary Metabolite Differences between Naturally Grown and Conventional Coarse Green Tea. Agriculture 2020, 10, 632. [Google Scholar] [CrossRef]
- Aotake, S.; Takanishi, A.; Funabashi, M. Modeling ecosystem management based on the integration of image analysis and human subjective evaluation—Case studies with synecological farming. Lect. Notes Comput. Sci. 2023, 13927, 151–164. Available online: https://synecoculture.sonycsl.co.jp/public/20230420%2033_fullpaper.pdf (accessed on 2 December 2024).
- Otani, T.; Itoh, A.; Mizukami, H.; Murakami, M.; Yoshida, S.; Terae, K.; Tanaka, T.; Masaya, K.; Aotake, S.; Funabashi, M.; et al. Agricultural Robot under Solar Panels for Sowing, Pruning, and Harvesting in a Synecoculture Environment. Agriculture 2023, 13, 18. [Google Scholar] [CrossRef]
- Doi, A.; Maeda, N.; Tanaka, T.; Masaya, K.; Aotake, S.; Funabashi, M.; Miki, H.; Otani, T.; Takanishi, A. Development of the Agricultural Robot in Synecoculture™ Environment (8th Report, Development of sow planting mechanism for multiple sows interchangeable at the end of the arm and sow dumpling making machine). J. Robot. Soc. Jpn. 2024, 42, 1031–1034. [Google Scholar] [CrossRef]
- Gaston, K.; O’Neill, M.A. Automated species identification: Why not? Philos. Trans. R. Soc. Lond. B Biol. Sci. 2004, 359, 655–667. [Google Scholar] [CrossRef] [PubMed]
- Pimm, S.; Alibhai, S.; Bergl, R.; Dehgan, A.; Giri, C.; Jewell, Z.; Joppa, L.; Kays, R.; Loarie, S. Emerging technologies to conserve biodiversity. Trends Ecol. Evol. 2015, 30, 685–696. [Google Scholar] [CrossRef] [PubMed]
- Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [PubMed]
- Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
- Wäldchen, J.; Rzanny, M.; Seeland, M.; Mäder, P. Automated plant species identification—Trends and future directions. PLoS Comput. Biol. 2018, 14, e1005993. [Google Scholar] [CrossRef] [PubMed]
- Tian, L.; Slaughter, D. Environmentally adaptive segmentation algorithm for outdoor image segmentation. Comput. Electron. Agric. 1998, 21, 153–168. [Google Scholar] [CrossRef]
- Wäldchen, J.; Mäder, P. Machine learning for image-based species identification. Methods Ecol. Evol. 2018, 9, 2216–2225. [Google Scholar] [CrossRef]
- Carranza-Rojas, J.; Goëau, H.; Bonnet, P.; Mata-Montero, E.; Joly, A. Going deeper in the automated identification of Herbarium specimens. BMC Evol. Biol. 2017, 17, 181. [Google Scholar] [CrossRef] [PubMed]
- Joly, A.; Goëau, H.; Bonnet, P.; Bakic, V.; Barbe, J.; Selmi, S.; Yahiaoui, I.; Carré, J.; Mouysset, E.; Molino, J.F.; et al. Interactive plant identification based on social image data. Ecol. Inform. 2014, 23, 22–34. [Google Scholar] [CrossRef]
- Umar, M.; Altaf, S.; Ahmad, S.; Mahmoud, H.; Mohamed, A.S.N.; Ayub, R. Precision Agriculture Through Deep Learning: Tomato Plant Multiple Diseases Recognition With CNN and Improved YOLOv7. IEEE Access 2024, 12, 49167–49183. [Google Scholar] [CrossRef]
- Yu, F.; Zhang, Q.; Xiao, J.; Ma, Y.; Wang, M.; Luan, R.; Liu, X.; Ping, Y.; Nie, Y.; Tao, Z.; et al. Progress in the Application of CNN-Based Image Classification and Recognition in Whole Crop Growth Cycles. Remote Sens. 2023, 15, 2988. [Google Scholar] [CrossRef]
- Lu, J.; Tan, L.; Jiang, H. Review on Convolutional Neural Network (CNN) Applied to Plant Leaf Disease Classification. Agriculture 2021, 11, 707. [Google Scholar] [CrossRef]
- Congalton, R.; Gu, J.; Yadav, K.; Thenkabail, P.; Ozdogan, M. Global land cover mapping: A review and uncertainty analysis. Remote Sens. 2014, 6, 12070–12093. [Google Scholar] [CrossRef]
- Khatami, R.; Mountrakis, G.; Stehman, S.V. A meta-analysis of remote sensing research on supervised pixel-based land-cover image classification processes: General guidelines for practitioners and future research. Remote Sens. Environ. 2016, 177, 89–100. [Google Scholar] [CrossRef]
- Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 270–293. [Google Scholar] [CrossRef]
- Fassnacht, F.E.; Latifi, H.; Sterenczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
- Guirado, E.; Tabik, S.; Segura, D.; Cabello, J.; Herrera, F. Deep-Learning versus OBIA for Scattered Shrub Detection with Google Earth Imagery: Ziziphus lotus as Case Study. Remote Sens. 2017, 9, 1220. [Google Scholar] [CrossRef]
- Guirado, E.; Segura, D.; Cabello, J.; Ruiz, S.; Herrera, F.; Tabik, S. Tree Cover Estimation in Global Drylands from Space Using Deep Learning. Remote Sens. 2020, 12, 343. [Google Scholar] [CrossRef]
- Onishi, M.; Ise, T. Automatic classification of trees using UAV onboard camera and deep learning. arXiv 2018, arXiv:1804.10390. [Google Scholar] [CrossRef]
- Goodwin, N.; Turner, R.; Merton, R. Classifying Eucalyptus forests with high spatial and spectral resolution imagery: An investigation of individual species and vegetation communities. Aust. J. Bot. 2005, 53, 337–345. [Google Scholar] [CrossRef]
- Dalponte, M.; Bruzzone, L.; Gianelle, D. Tree species classification in the Southern Alps based on the fusion of very high geometrical resolution multispectral/hyperspectral images and LiDAR data. Remote Sens. Environ. 2012, 123, 258–270. [Google Scholar] [CrossRef]
- Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with random forest using very high spatial resolution 8-band WorldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef]
- Guo, Y.; Liu, Y.; Georgiou, T.; Lew, M.S. A review of semantic segmentation using deep neural networks. Int. J. Multimed. Inf. Retr. 2018, 7, 87–93. [Google Scholar] [CrossRef]
- Ise, T.; Minagawa, M.; Onishi, M. Classifying 3 moss species by deep learning using the “Chopped Picture” method. Open J. Ecol. 2018, 8, 166–173. [Google Scholar] [CrossRef]
- Watanabe, S.; Sumi, K.; Ise, T. Automatic vegetation identification in Google Earth images using a convolutional neural network: A case study for Japanese bamboo forests. BMC Ecol. 2020, 20, 65. [Google Scholar]
- Soya, K.; Aotake, S.; Ogata, H.; Ohya, J.; Ohtani, T.; Takanishi, A.; Funabashi, M. Study of a Method for Detecting Dominant Vegetation in a Field from RGB Images Using Deep Learning in Synecoculture Environment. In Proceedings of the 49th Annual Meeting of the Institute of Image Electronics Engineers of Japan, Online, 24–26 June 2021. [Google Scholar]
- Yoshizaki, R.; Aotake, S.; Ogata, H.; Ohya, J.; Ohtani, T.; Takanishi, A.; Funabashi, M. Study of a Method for Recognizing Field Covering Situation by Applying Semantic Segmentation to RGB Images in Synecoculture Environment. In Proceedings of the 49th Annual Meeting of the Institute of Image Electronics Engineers of Japan, Online, 24–26 June 2021. [Google Scholar]
- Tokoro, M. Open Systems Science: A Challenge to Open Systems Problems. Springer Proceedings in Complexity. 2017. pp. 213–221. Available online: https://synecoculture.sonycsl.co.jp/public/2017_CSDC_Tokoro.pdf (accessed on 2 December 2024).
- Funabashi, M.; Minami, T. Dynamical assessment of aboveground and underground biodiversity with supportive AI. Meas. Sens. 2021, 18, 100167. [Google Scholar] [CrossRef]
- Funabashi, M. Augmentation of Plant Genetic Diversity in Synecoculture: Theory and Practice in Temperate and Tropical Zones. Genet. Divers. Hortic. Plants Sustain. Dev. Biodivers. 2019, 22, 3–46. Available online: https://synecoculture.sonycsl.co.jp/public/20191110%20Augmentation%20of%20Plant%20Genetic%20Diversity%20in%20Synecoculture%20-Theory%20and%20Practice%20in%20Temperate%20and%20Tropical%20Zones%20Springer%20Nature%20Masa%20Funabashi.pdf (accessed on 2 December 2024).
- iNaturalist Homepage. Available online: https://www.inaturalist.org/ (accessed on 2 December 2024).
- SEED Biocomplexity Homepage. Available online: https://seed-index.com/ (accessed on 2 December 2024).
- Ohta, K.; Suzuki, G.; Miyazawa, K.; Funabashi, M. Open systems navigation based on system-level difference analysis—Case studies with urban augmented ecosystems. Meas. Sens. 2022, 23, 100401. [Google Scholar] [CrossRef]
- Funabashi, M. Citizen Science and Topology of Mind: Complexity, Computation and Criticality in Data-Driven Exploration of Open Complex Systems. Entropy 2017, 19, 181. [Google Scholar] [CrossRef]
- Funabashi, M. Open Systems Exploration: An Example with Ecosystem Management. Springer Proceedings in Complexity. 2017. pp. 223–243. Available online: https://synecoculture.sonycsl.co.jp/public/2017_CSDC_Funabashi_OSE.pdf (accessed on 2 December 2024).
- Li, H.; Liu, H.; Zhou, J.; Wei, G.; Shi, S.; Zhang, X.; Zhang, R.; Zhu, H.; He, T. Development and First Results of a No-Till Pneumatic Sower for Maize Precise Sowing in Huang-Huai-Hai Plain of China. Agriculture 2021, 11, 1023. [Google Scholar] [CrossRef]
- Kumar, P.; Ashok, G. Design and fabrication of smart sow sowing robot. Mater. Today Proc. 2020, 39, 354–358. [Google Scholar] [CrossRef]
- Carlos, J.; Choque, M.; Erick, M.; Fiestas, S.; Ricardo, S.; Prado, G. Efficient implementation of a Cartesian Farmbot robot for agricultural applications in the region La Libertad-Peru. In Proceedings of the IEEE ANDESCON 2018, Santiago de Cali, Colombia, 22–24 August 2018; pp. 1–6. [Google Scholar]
- FarmDroid Document. Available online: https://farmdroid.com/wp-content/uploads/Brochure-FD20-2023-web.pdf (accessed on 2 December 2024).
- Sugiyama, S.; Osawa, K.; Mitani, K.; Itoh, A.; Kondo, T.; Morita, M.; Aotake, S.; Funabashi, M.; Otani, T.; Takanishi, A. Development of an Agricultural Operation Support Robot in a Synecoculture™ Farming Environment (Fourth Report: Development of a Tool-Changeable Cutting Tool for Pruning and Harvesting Multiple Crops). J. Robot. Soc. Jpn. 2022, 41, 889–892. [Google Scholar] [CrossRef]
- GeekBench6 Home Page. Available online: https://www.geekbench.com/ (accessed on 2 December 2024).
Data | Proficiency Level | Image 1 | Image 2 | Image 3 |
---|---|---|---|---|
Number of sowings | Amateur | 42 | 0 | 78 |
 | Beginner | 40 | 19 | 31 |
 | Intermediate | 30 | 24 | 27 |
 | Advanced | 8 | 17 | 26 |
 | Supervised-Advanced | 13 | 17 | 26 |
Coverage [%] | | 47.05 | 53.51 | 42.53 |
Model | Required Data | Features | Model Name | Train Data Score | Evaluation Data Score | Average Processing Time [s] |
---|---|---|---|---|---|---|
Model 1 | RGB Image | ISOM-AS | Linear regression | 0.288 | 0.266 | 17.1 ± 1.7 |
Model 2 | RGB Image | ISOM-AS, Coverage ratio, Grouped uncoverage ratio | Ridge regression | 0.351 | 0.293 | 18.7 ± 1.4 |
Model 3 | RGB Image and Depth Data | ISOM-AS, Coverage ratio, Grouped uncoverage ratio, Sensed height average | Ridge regression | 0.352 | 0.292 | 18.7 ± 2.4 |
Model | Local Abs. Diff.: Image 1 | Local Abs. Diff.: Image 2 | Local Abs. Diff.: Image 3 | Local Abs. Diff.: Average | All-Area Diff.: Image 1 | All-Area Diff.: Image 2 | All-Area Diff.: Image 3 | All-Area Diff.: Abs. Average | Integrated Loss Score |
---|---|---|---|---|---|---|---|---|---|
Amateur | 42 | 17 | 70 | 43 | 29 | −17 | 52 | 32.7 | 75.7 |
Beginner | 32 | 8 | 17 | 19 | 27 | 2 | 5 | 11.3 | 30.3 |
Intermediate | 24 | 11 | 13 | 16 | 17 | 7 | 1 | 8.3 | 24.3 |
Advanced | 5 | 0 | 0 | 1.7 | −5 | 0 | 0 | 1.7 | 3.3 |
Model 1 | 8 | 12 | 16 | 12 | 2 | 10 | 16 | 9.3 | 21.3 |
Model 2 | 8 | 12 | 17 | 12.3 | 2 | 10 | 13 | 8.3 | 20.7 |
Model 3 | 8 | 12 | 17 | 12.3 | 2 | 10 | 13 | 8.3 | 20.7 |
Method | Avg. Path Length: XY-Space [m] | Avg. Path Length: Z-Axis [m] | Avg. Path Length: Total [m] | Path Length Reduction Rate | Avg. Control Time: XY-Space [s] | Avg. Control Time: Z-Axis [s] | Avg. Control Time: Sowing Action [s] | Avg. Control Time: Total [s] | Control Time Reduction Rate |
---|---|---|---|---|---|---|---|---|---|
Random & Height: 1.1 m | 7.4 ± 1.4 | 19.9 ± 5.8 | 27.3 ± 7.2 | Baseline for comparison | 8.5 ± 1.6 | 3323 ± 963 | 98.2 ± 29.0 | 3430 ± 994 | Baseline for comparison |
Greedy & Height: 1.1 m | 3.5 ± 0.5 | 19.9 ± 5.8 | 23.4 ± 6.3 | −14.3% | 4.5 ± 0.6 | 3323 ± 963 | 98.2 ± 29.0 | 3426 ± 993 | −0.12% |
2-Opt & Height: 1.1 m | 3.4 ± 0.1 | 19.9 ± 5.8 | 23.3 ± 5.9 | −14.7% | 4.4 ± 0.2 | 3323 ± 963 | 98.2 ± 29.0 | 3426 ± 993 | −0.12% |
2-Opt_RRP & Height: 1.1 m | 3.2 ± 0.4 | 19.9 ± 5.8 | 23.1 ± 6.2 | −15.4% | 4.1 ± 0.5 | 3323 ± 963 | 98.2 ± 29.0 | 3425 ± 992 | −0.15% |
2-Opt_RRP & Height: 0.8 m | 3.2 ± 0.4 | 14.0 ± 4.0 | 17.2 ± 4.4 | −37% | 4.1 ± 0.5 | 2340 ± 658 | 98.2 ± 29.0 | 2442 ± 688 | −29% |
2-Opt_RRP & Height: Highest | 3.2 ± 0.4 | 9.4 ± 6.6 | 12.6 ± 7.0 | −54% | 4.1 ± 0.5 | 1558 ± 1104 | 98.2 ± 29.0 | 1660 ± 1134 | −52% |
Method | Sowing Position Estimation Process [min] | Route Planning Process [min] | XY-Space Control [min] | Z-Axis Control [min] | Sowing Control [min] | Total Required Time [min] | Reduction Rate |
---|---|---|---|---|---|---|---|
Model 1 (Model 3) & Random & Height: 1.1 m | 0.28 ± 0.03 | 0.272 ± 0.003 | 0.14 ± 0.03 | 55 ± 16 | 1.6 ± 0.5 | 57 ± 17 | Baseline for comparison |
Model 2 (Model 3) & Greedy & Height: 0.8 m | 0.31 ± 0.02 | 0.267 ± 0.007 | 0.075 ± 0.01 | 39 ± 11 | 1.6 ± 0.5 | 41 ± 12 | −28% |
Model 3 & 2-Opt_RRP & Height: Highest | 0.31 ± 0.04 | 0.274 ± 0.004 | 0.068 ± 0.01 | 26 ± 18 | 1.6 ± 0.5 | 28 ± 19 | −51% |