Article

Data-Efficient Sowing Position Estimation for Agricultural Robots Combining Image Analysis and Expert Knowledge

1 Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan
2 Faculty of Science and Engineering, Waseda University, Tokyo 169-8555, Japan
3 Department of Systems Science and Engineering, Shibaura Institute of Technology, Tokyo 135-8548, Japan
4 Center for Social Common Capital Beyond 2050, Kyoto University, Kyoto 606-8501, Japan
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(14), 1536; https://doi.org/10.3390/agriculture15141536
Submission received: 14 May 2025 / Revised: 10 July 2025 / Accepted: 14 July 2025 / Published: 16 July 2025

Abstract

We propose a data-efficient framework for automating sowing operations by agricultural robots in densely mixed polyculture environments. This study addresses the challenge of enabling robots to identify suitable sowing positions with minimal labeled data by integrating image-based field sensing with expert agricultural knowledge. We collected 84 RGB-depth images from seven field sites, labeled by synecological farming practitioners of varying proficiency levels, and trained a regression model to estimate optimal sowing positions and seeding quantities. The model’s predictions were comparable to those of intermediate-to-advanced practitioners across diverse field conditions. To implement this estimation in practice, we mounted a Kinect v2 sensor on a robot arm and integrated its 3D spatial data with axis-specific movement control. We then applied a trajectory optimization algorithm based on the traveling salesman problem to generate efficient sowing paths. Simulated trials incorporating both computation and robotic control times showed that our method reduced sowing operation time by 51% compared to random planning. These findings highlight the potential of interpretable, low-data machine learning models for rapid adaptation to complex agroecological systems and demonstrate a practical approach to combining structured human expertise with sensor-based automation in biodiverse farming environments.

1. Introduction

In recent years, the destruction of the environment and ecosystems by conventional agricultural methods has become increasingly severe, raising questions about the sustainability of primary industries and food production in terms of maintaining material and energy resources, human health, and ecosystem health [1]. Against this background, an agricultural method called synecological farming (Synecoculture™) is attracting attention, which aims to transcend the trade-off between productivity and environmental destruction [2,3,4,5]. Synecoculture is a form of agroecology that builds and utilizes a high degree of biodiversity, with more than 200 species of useful plants mixed and densely planted in a small area of about 1000 square meters to create a highly diverse ecosystem. This farming method does not involve any cultivation, fertilization, or pesticides, and food production is achieved by creating an ecosystem with a highly enhanced biodiversity. Since the small ridges for production are densely planted with a mixture of several species of plants and fruit trees with different growth rates, sowing diverse plant seeds/seedlings, pruning of dominant plants, and harvesting must always be performed.
This can be a challenge in the context of machine learning because it requires an in-depth understanding of the ecosystem, which has proven to be a difficult problem as sufficient recognition methods have not yet been established. Therefore, an image recognition model called the Integrated Inter-Subjective Objective Model (ISOM) has recently been developed by integrating human subjective evaluation and objective image features to determine vegetation conditions from experiential knowledge [6]. The ISOM provides a clue to the ecological status of an ecosystem by scoring it subjectively, but to detect the appropriate sowing area and positions, it is necessary to determine the state of vegetation formed by a wide variety of plants of different growth stages and to predict how these plants grow and how the vegetation will change in the future. This problem requires judgment based on long-term experience. For example, to predict how the vegetation grows and changes, consideration must be given to the changing sunlight conditions and the trade-off between over-sowing/under-sowing. Over-sowing results in the total loss of plants, while under-sowing prevents the formation of a diversity of plant species. However, no computational methods have yet been developed for use in a mixed polyculture environment, which is fundamentally different from conventional agriculture.
Recently, models that learn from big data have been emerging, but it is difficult for them to infer stable and accurate results for ecosystems that are constantly changing and vary widely, for example, across climate zones, soil conditions, communities, and vegetation conditions. The strength of few-shot learning, by contrast, is that it allows people to adapt flexibly to a variety of environments by providing a small amount of labeling for the situation on the spot. While data labeling by humans has the advantage of capturing the value of their experience, it has the disadvantage of burdening the labeler because the process is time-consuming. Moreover, the number of proficient practitioners is small to begin with and is not expected to increase sufficiently in the near future. In short, it is difficult to acquire a large amount of data, and effective learning is required from a small amount of labeled data.
Due to the shortage of qualified practitioners, robots that support synecological farming are currently being developed [7]. However, while human-operated sowing operations have been achieved, automatic sowing has not yet been realized.
In this paper, we tackle the problem through the following five steps:
  • We asked experienced practitioners of synecological farming to perform data labeling for sowing instructions and analyzed the trends of the quantity/location of sowing to construct an ecosystem.
  • Based on the analysis results, we created a framework for sowing evaluation and designed features for understanding the vegetation conditions, including ISOM.
  • By calculating identified key features using image processing technology, we trained a model that predicts the sowing quantity/positions.
  • We evaluated the model using data labeled by the experienced practitioners.
  • We then utilized the output results to evaluate the simulation of an automatic sowing operation using a robot and sowing mechanism [7,8] developed for a densely mixed polyculture environment.
Despite the importance of identifying and monitoring the distribution of plant species and groups of species in the context of understanding ecosystems, this process is inherently costly in terms of time and money and often results in spatial bias in the amount of survey effort (data bias) when surveying a large area. As a solution to these issues, the automatic identification of species by machines based on images and other information, instead of by humans, has been considered [9,10,11,12,13]. However, these studies have typically been conducted only on plant parts (leaves, etc.) photographed in ideal conditions against a white background or on plants placed indoors, unaffected by the lighting environment. Indeed, due to the high degree of difficulty involved, few studies have been performed on plants growing outdoors, where the lighting environment changes drastically and the background shifts. One of the few such studies attempted to detect plant regions in images by adapting to outdoor daylight conditions [14], and another approach tried to identify plants in the field using a model trained on a mixed image dataset taken both indoors and outdoors [15]. Another approach transferred identification techniques developed for museum plant specimens to outdoor plant species identification [16]. There is also a method that identifies plants by integrating image features from multiple periods [17]. However, none of these methods has been shown to perform well enough.
In recent years, deep learning has seen remarkable development, and CNNs used for image recognition have achieved high accuracy in plant identification and pathological diagnosis [18,19,20]. On the other hand, although the recognition process of CNNs has been partially elucidated and explanations of recognition results have become possible to some extent, what is happening internally remains a black box, and it is still difficult to explain what is being referenced in the recognition process. As a result, there is little use of human analysis in the labeling process, and human labeling tasks are limited to enhancing AI. Furthermore, despite various efficiency improvements through technological advancements, the computational requirements for inference remain significant, limiting the use of CNNs in scenarios with constrained computational resources. While there are approaches that involve transmitting data to the cloud for recognition, there is the challenge of having to perform computations using on-site computational resources in areas where wireless communication is unavailable. Finally, there is the challenge of the amount of data required to achieve stable performance. Using CNNs is difficult when sufficient data cannot be secured, and particularly complex subjects, such as ecological vegetation, tend to require even more data.
In terms of classifying vegetation using remote sensing, some approaches have tried to classify land cover and vegetation using data taken from the sky by aircraft, satellites, and drones, as well as to identify trees and even use range images obtained with hyperspectral cameras and LiDAR [21,22,23,24,25,26,27,28,29,30,31]. However, because the categories of recognition targets are huge and their recognition difficulty is very high, data labeling for model training requires an enormous amount of effort to achieve sufficient performance [32]. Therefore, the chopped picture method has been proposed as an approach to reduce the effort required to annotate training data for irregularly shaped objects and to perform training and identification efficiently [33,34]. This method has been incorporated into image classification models to detect ectoparasites that prevent the growth of useful plants and reduce diversity in a field environment with dense mixed vegetation [35]. It has also been incorporated into a semantic segmentation method to detect vegetation cover and exposed topsoil in a field environment with dense mixed vegetation [36]. While these methods can operate in a dense, mixed-vegetation environment, they are still insufficient as models for monitoring ecosystems.
One reason for the various difficulties in the above attempts is the necessity of finding key indicators for monitoring and managing ecosystems. These ecosystems are characterized as open and complex systems that are difficult to manage effectively with a few limited indicators because of the variety of elements and complex interactions that play an important functional role [37]. Diverse ecosystem functions are supported by biodiversity consisting of at least three levels—genetic, species, and ecosystem—but it is difficult to establish uniform information criteria to measure and manage all these interactions in the presence of environmental variability [38,39].
Image analysis, such as remote sensing, and in-situ image recording, such as citizen science, have the potential to perceive diverse aspects of ecosystems and to document species diversity by identifying species based on the subjective human evaluation of species photographs [40]. There is a method that uses ecosystem assessment and machine learning to combine remote sensing with multiple data on biodiversity and functionality to compute an integrated index of complexity [41]. However, it is still insufficient to predict and manage small-scale ecosystems with frequent human intervention for diverse purposes due to difficulties with high dynamic variability and regional specificity, which are typical situations in agroecology [42].
Therefore, recent research has focused on the highly internalized empirical knowledge that humans possess and has successfully extracted indicators that can be utilized to promote biodiversity based on human assessments of biodiversity and sensor measurements of soil composition that are independent of human assessments [42]. A method for building an effective and reproducible management model to achieve ecological enhancement by integrating objective indicators based on subjective evaluation and image analysis has been proposed, and a model was developed to support management in synecological farming in which field conditions are scored by image data [6].
Promoting human ingenuity toward the integration of highly internalized empirical knowledge with scientific objectivity, as in these efforts, has led to the development of methodologies that provide an effective foundation for managing open complex systems [43,44]. However, models that can be integrated into supportive applications and robotics, which involve specific tasks such as sowing operations, have not yet been established.
In previous research on sowing robotics, many machines and robots have been developed that can plant a wide variety of seeds. For example, there is heavy machinery that mechanically plants many seeds while driving by first conditioning the soil for sowing [45]. However, it perturbs the soil excessively and is not suitable for use in densely mixed polyculture fields. Other methods perform more delicate sowing, such as those that grab seeds prepared in an enclosure and push them into the soil [46], those that use a vacuum pump and nozzle to suction seeds and insert them into farmland [47], and those that open and close a valve to drop seeds from a container to the ground [48]. Although these systems allow for sensitive sowing control and can plant seeds without destroying existing vegetation, they all share the problem that they cannot plant the wide variety of crop species required in synecological farming one seed at a time at arbitrary locations.

2. Materials and Methods

2.1. Operating Robot

In this study, we conducted experiments using an agricultural robot (Figure 1a) [7] developed by the Takanishi Laboratory at Waseda University for use in a synecological farming environment. The robot moves within the agricultural field and is designed to execute tasks in a densely mixed polyculture environment containing multiple plants, including tall ones: a self-propelled traveling unit is mounted on a small rail to simplify introduction to the farm, and the working range is extended simply by lengthening the rail. Since the robot needs to approach objects while avoiding dense vegetation to perform its work, it has translational degrees of freedom (DoF) in the X, Y, and Z directions and rotational DoF in the roll, pitch, and yaw directions, with rail travel providing the X-direction motion directly. As shown in Figure 1b, the robot’s degrees of freedom are provided by a traveling unit that runs on the soil, a rail traveling unit that assists horizontal movement, and a vertical arm unit that assists vertical movement and changes the position of the work tool. The vertical arm unit has a telescopic structure, and the work tools for each task are installed at its end, thus enabling multiple tasks to be performed. To date, we have developed pruning, harvesting, and sowing interfaces as task tools.
Since this robot is designed to be assembled/disassembled during transportation, the assembly should be simple, and since it is operated outdoors, it must be environmentally resistant. Therefore, the linear motion of the rail-moving unit in the X and Y directions is performed by pin gears and wheels (Figure 2). The mechanism in Figure 2b is positioned in such a way that it moves through the frame field extending in the lower right direction inside Figure 2a. The size is determined by considering the width of the work line, the conditions that prevent the robot from tipping over during rapid acceleration and stopping, the wheelbase that allows it to turn, and the tread width. The control speeds are 45 m/min for the X-axis, 30 m/min for the Y-axis, and 0.36 m/min for the Z-axis.
The seed-planting mechanism [8] is shown in Figure 3a. It consists of a “tank” for storing seed balls, an “ejection mechanism” for removing seed balls one by one from the tank, and a “drilling mechanism” for drilling holes in the ground and feeding the seed balls into the holes. A tool changer mechanism [49] is utilized to automate the tool replacement. Tool changer connectors are attached to the seed-planting mechanism integrated with the tank and extending to the end of the arm. The pointed tip is used to drill a hole in the ground, and seed balls are placed in the hole for sowing. Holes are drilled in the ground by moving the tip up and down, and the tip is designed to store the seed balls and then release them by opening and closing the seed ball passageway. The ejection mechanism is designed so that when the drilling mechanism pushes the seed balls into the ground, the seed balls are ejected from the bottom (Figure 3b).
In the synecological farming method, no fertilizers or pesticides are used. Therefore, we chose red soil free of fertilizers and pesticides for our experiments. The soil is crushed and sifted with a back-scraper to make the grains finer, which increases the soil’s adhesion and improves the strength of the seed balls. The seeds are then molded together with soil mixed at a weight ratio of water to soil of 1:2, and the resulting seed balls are dried naturally to remove moisture so that they can be stored without germinating. The seed ball manufacturing process involves manually setting the soil and seeds into the manufacturing machine, with the compression performed entirely by the machine. The manufacturing process is shown in Figure 4a, and the manufacturing machine is shown in Figure 4b. The process begins by pressing soil into the lower hemisphere of the mold. Next, seeds and soil are manually set on top of the pressed soil. The set soil is then compressed into a slightly larger hemisphere and molded into a seed ball. Finally, the seed balls are removed from the mold using a mesh pull and transported. When soil is pressed into the lower mold, it is compressed by the mechanism, ensuring its strength. When setting soil into the manufacturing machine, a hollow container with a diameter of 13 mm and a height of 8 mm is used to keep the amount of soil consistent and to reduce burrs on the seed balls caused by excess soil. The height of 8 mm was determined experimentally to ensure the soil does not collapse after setting, and the 13 mm diameter prevents cracking of the seed balls due to insufficient soil as well as enlarged burrs due to excess soil. The finished seed balls have a radius of 6.5 mm (Figure 4c).

2.2. Data Acquisition and Processing Operation Verification Environment

The data collection for this experiment was conducted in a synecological farming experimental plot constructed within a solar panel facility in Miyagi Prefecture, Japan. The field was constructed under solar panels, and a variety of plants were introduced into the central ridged area where the synecological farming method was implemented. The robot ran across these ridges to collect data on the field. Kinect v2 (Redmond, WA, USA), which acquires the data, is installed in front of the robot (Figure 5) and surrounded by a waterproof enclosure so that it can take pictures even in the rain. The Kinect v2 specifications include the RGB image resolution of 1920 × 1080 pixels, depth resolution of 512 × 424 pixels, depth perception range of 500–8000 mm, horizontal viewing angle of 70 degrees, and vertical viewing angle of 60 degrees. Examples of the RGB images and 3D data taken by the mounted Kinect v2 are provided in Figure 6. The 3D data acquired in real space is approximately 1.3 m long and 1.8 m wide. For the cropped image, we used the central 790 × 1080 pixel area, which is the area where the mixed dense vegetation was constructed.
Kinect v2 was installed parallel to the ground at a height of 1.4 m. We first collected the data prior to finalizing the sowing mechanism and its sensor placement; we then proceeded with the verification of the map, assuming that the sensor could be installed at the specified part of the sowing interface. The height of the sensor for sowing is assumed to be 1.4 m, and the positional relationship between the assumed sensor installation point and the tip of the sowing interface is 0 cm along the X-axis, 13 cm along the Y-axis opposite to the direction of travel, and 14 cm along the Z-axis toward the ground. These parameters were used in the path planning and control simulations.
The 2D map is created by reading the 3D point cloud data and converting it into 2D. Kinect v2 performs depth sensing by using a time-of-flight method that measures distances by emitting infrared rays and receiving the reflected light. However, this process results in many missing areas in the acquired 3D data, which reduces the accuracy of the sowing position estimation and sowing route planning and prevents normal operation. Therefore, to minimize the number of missing areas, the same area as the cropped image is extracted from the 3D data and scaled to 790 × 1080 space. Then, the 3D data is extracted for each 20 × 20 space within that space, the average of the Z-values is calculated, and those values are assigned to the 20 × 20 pixels in the image space corresponding to that region (Figure 7). This reduces the resolution of the Z-values in the 3D space, but at the same time, the number of missing areas is reduced, which we consider an acceptable trade-off.
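As an illustration, the following is a minimal sketch of this block-averaging conversion, assuming the Z-values have already been cropped and rescaled to the 790 × 1080 image space and that missing Kinect returns are stored as NaN; the array and function names are ours, not from the original implementation.

```python
import numpy as np

def point_cloud_to_height_map(z_values, block=20):
    """Convert a grid of Z-values (shape 1080 x 790, NaN where the Kinect
    returned no depth) into a coarser height map: every block x block patch
    is replaced by its mean, which also fills many of the missing areas."""
    h, w = z_values.shape
    height_map = np.full((h, w), np.nan)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = z_values[y:y + block, x:x + block]
            if np.any(~np.isnan(patch)):
                # Assign the patch mean to all pixels of the patch;
                # partial edge blocks are left untouched in this sketch.
                height_map[y:y + block, x:x + block] = np.nanmean(patch)
    return height_map
```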
The specifications of the computing environment utilized in this study are as follows.
  • OS: Windows 10
  • CPU: 12th Gen Intel® Core™ i7-1265U 1.80 GHz
  • RAM: 32.0 GB
  • CPU Geekbench 6 benchmark with a score of 2154 points (one CPU core) and 4637 points (all CPU cores) [50].

2.3. Dataset

The images 1–7 shown in Figure 8 are cutouts of the central 790 × 1080 area from the RGB image acquired by Kinect v2, where the synecological farming method is practiced and vegetation is present. These images were taken on 3 December 2020. Images 1, 2, and 3 were used as test data for the analysis of sowing trends and for evaluating the model for estimating the number of sowings in the split areas. For the analysis of sowing trends, four individuals of four proficiency levels (from novice to advanced) were asked to perform labeling, and those data were used for the analysis. Images 4–7 were used as training and evaluation data for training the model to estimate the number of sowings in the split area. The number/position of sowings on these data was labeled by an intermediate-level equivalent who received advice from an expert and gained knowledge from the supervised-advanced labeled data. The length of the (x, y) space (in cm) in the real space of images 1–7 is as follows: (90, 130), (89, 132), (93, 129), (95, 129), (110, 143), (104, 134), and (110, 131). The average length of the (x, y) space of test data images 1–3 is (91, 130), and the average length of the (x, y) space of all data for images 1–7 is (99, 133). This slight difference in (x, y) space length is presumably because the ground on which the four-wheeled robot is installed is uneven. However, manual checks and level adjustments were performed to guarantee the 3D data is level with the ground.
For the sowing evaluation, we asked four individuals of different proficiency levels in synecological farming methods (listed below) to perform labeling by specifying the sowing position for three images of a mixed polyculture field.
  • Amateur: A person who has a basic understanding of the synecological farming method but has never practiced it.
  • Beginner: A person engaged in synecological farming for about one year. He/she has no academic background in agronomy.
  • Intermediate: A person engaged in synecological farming for more than two years. His/her academic background is in agronomy or biology, and his/her research topics relate to agronomy and synecological farming.
  • Advanced: A person engaged in synecological farming for more than four years and who is in a synecological farming organization, frequently manages and operates densely mixed polyculture fields, and is familiar with the synecological farming method. As for research, he/she previously majored in biology and is currently researching synecological farming.
  • Supervised-Advanced: Among the data for which the advanced proficiency level person performed sowing position labeling, some data should have been labeled as sowing position but was not due to a recognition mistake. Specifically, in the upper-right-center area of Image 1, it was mistakenly thought that the area’s topsoil was covered with plantation even though it was not. When questioned, this person responded that if the area was not covered, then it should have been labeled as a sowing area. We therefore supplemented this area by referring to and integrating the labeling data of Intermediate, which had been properly designated for sowing.
The following is a brief description of Images 1–3. Image 1 captures vegetation characterized as scenes with the least amount of topsoil coverage by plantation and the fewest growing plants. Image 2 is a scene with the greatest number of plants that have grown to seedling size and high cover. Image 3 is a scene with a moderate amount of topsoil plantation coverage and the presence of growing plants.

3. Results

3.1. Analysis of Topsoil Plantation Coverage Area Detection Process

First, we developed an improved topsoil plantation coverage area detection process based on the covered area detection process used in ISOM [6] and evaluated its performance. The coverage detection utilized in ISOM typically achieves a high recall value but a poor precision value. Therefore, by extending the reference space from HSV to RGB space, we created a detection process that can achieve high precision while maintaining high recall (Figure 9).
In this section, each pixel-level prediction of the coverage area detection process is classified into one of four outcomes (listed below), which are used to calculate a set of evaluation indicators. The evaluation indices are recall, precision, IOU (intersection over union), F-measure (harmonic mean of precision and recall of coverage detection), accuracy, negative predictive value, specificity, and balanced accuracy, which are calculated by Equations (1)–(8), respectively.
  • TP: True positive is the number correctly classified as positive by the prediction model. It is the number of cases that were predicted to be true where they were actually true.
  • TN: True negative is the number of cases correctly classified as negative by the predictive model. It is the number of cases that were predicted to be false when they were actually false.
  • FP: False positive is the number of cases incorrectly classified as positive, i.e., predicted to be true when they were actually false.
  • FN: False negative is the number of cases incorrectly classified as negative (missed detections), i.e., predicted to be false when they were actually true.
\[ \mathrm{Recall} = \frac{TP}{TP + FN} \tag{1} \]
\[ \mathrm{Precision} = \frac{TP}{TP + FP} \tag{2} \]
\[ \mathrm{IOU} = \frac{TP}{TP + FP + FN} \tag{3} \]
\[ \mathrm{F\text{-}measure} = \frac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}} \tag{4} \]
\[ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{5} \]
\[ \mathrm{Negative\ Predictive\ Value} = \frac{TN}{TN + FN} \tag{6} \]
\[ \mathrm{Specificity} = \frac{TN}{TN + FP} \tag{7} \]
\[ \mathrm{Balanced\ Accuracy} = \frac{\mathrm{Recall} + \mathrm{Specificity}}{2} \tag{8} \]
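For reference, a minimal sketch of Equations (1)–(8) computed from binary coverage masks is given below; the mask convention (1 = plantation-covered, 0 = bare topsoil) and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def coverage_metrics(pred, truth):
    """Compute Equations (1)-(8) from binary coverage masks
    (1 = covered by plantation, 0 = bare topsoil)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    return {
        "recall": recall,
        "precision": precision,
        "iou": tp / (tp + fp + fn),
        "f_measure": 2 * recall * precision / (recall + precision),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "negative_predictive_value": tn / (tn + fn),
        "specificity": specificity,
        "balanced_accuracy": (recall + specificity) / 2,
    }
```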
The evaluation results of the topsoil plantation coverage area detection process for Images 1–7 were as follows: recall = 0.993, precision = 0.942, IOU = 0.936, F-measure = 0.966, accuracy = 0.970, negative predictive value = 0.994, specificity = 0.952, and balanced accuracy = 0.972.
As all the evaluation indices achieved scores over 0.9, we can conclude that this coverage area detection process and the predicted topsoil plantation coverage rate are adequate to perceive the coverage status of the target.

3.2. Analysis of Human-Labeled Sowing Data by Proficiency Level

Table 1 lists the results for the three labeled sowing images at each proficiency level, as well as the coverage calculated by the coverage detection process. The coverage is the value calculated from the number of pixels in the topsoil plantation coverage area detected by the coverage detection process.
Regarding the analysis, Images 1 and 3 show similar trends, with those who are less proficient trying to introduce many seed balls depending on the state of coverage and those who are more proficient determining the position and number of seeds to be introduced in anticipation of the vegetation that forms after the sprouted plants have grown. With Image 3, the amateur was inclined to introduce many seed balls due to the lack of visible plantation-covered topsoil, while those engaged in synecological farming showed somewhat closer values for the number of introductions required for this scene.
For Image 2, the amateur and novice participants judged that there are few places where they can introduce seed balls. However, the intermediate and advanced participants considered the situation after each plant had grown and took this into account when introducing their appropriate seed balls. Specifically, they tended to sow under or around vegetation that had grown to a certain size, as vegetation that has grown to a certain size deprives the vegetation below and around it of light, which often reduces the momentum of growth.
The following three points are the features we extracted by analyzing the sowing trends by proficiency grades:
  • The amateur tended to sow only in large, cohesive areas with no plantation coverage, so many areas that should have been sown were not. In contrast, the beginner/intermediate/advanced practitioners, who are experienced in synecological farming, tended to designate sowing areas that the amateur overlooked and covered the entire area well. However, there were some cases of mistaken or overlooked areas that needed sowing, such as areas that were recognized as covered but were not; this occurred even for the advanced practitioner.
  • The number of sowings tended to be either too high or too low for the amateur/beginner group, while for the intermediate/advanced group it tended to be appropriate. Specifically, the amateur/beginner group tended to be oversensitive to uncovered areas and over-sowed. In contrast, the intermediate/advanced group tended to specify an appropriate number of sowings even in uncovered areas, depending on the surrounding vegetation and the cohesiveness of the uncovered area, and tended to specify sowing according to the vegetation conditions even when the uncovered area was small.
  • The amateur/beginner group was not able to designate sowing based on the current growth status of vegetation and topsoil plantation coverage after time had elapsed, and it was strongly biased by visual information on the extent of the uncovered areas. In contrast, the intermediate/advanced group specified sowing in consideration of both the current vegetation growth and the future topsoil plantation coverage conditions.
Considering the above trends: regarding the first trend, the accuracy of sowing positions varied among the amateur, beginner, and intermediate practitioners, and although there was some improvement with increasing proficiency, the accuracy was not as high as that of the advanced practitioner. In contrast, the advanced practitioner's placement was refined and balanced, reflecting long experience with synecological farming of how vegetation competes from germination to growth, how much of it survives, when and how much needs to be sown, and how much it grows; the experience of how vegetation forms shade as it grows was also kept in mind when deciding on sowing positions. Regarding the second trend, if plants germinated from seeds are packed too densely, many of them compete and crush each other, and most of them die; however, if too few plants are introduced, densely mixed polyculture vegetation will not be created. Therefore, a trade-off must be considered in terms of the number of sowings. The intermediate/advanced group made sowing decisions based on the number of newly germinated plants and their densities. Regarding the third trend, the area directly under a plant that has grown to a certain size tends to lose out to other plants during growth and eventually die, and the area directly under and around such a plant tends to lose vegetation due to a lack of sunlight; even if such an area is small, it can be deliberately designated for sowing. In addition, sowing positions were specified with the intention that sowing in an environment with reduced sunlight would enable appropriate growth and be effective in promoting vegetation growth.

3.3. Sowing Position Prediction Process

3.3.1. Process Flow of Sowing Position Prediction Process

Figure 10 shows the process flow for estimating the sowing position, where boxes with square, gray-colored corners indicate data and boxes with rounded, uncolored corners indicate processing. The input data consists of RGB images and 3D data. We divided the RGB image into 4 × 3 segments (four areas vertically and three areas horizontally); each segmented area corresponds to 263 × 270 pixels in the image, or about 33.5 cm × 35 cm on average in real space. Using the divided RGB images, we calculated features 1–3 listed below. The 3D data was then converted into 2D data by the conversion process described in Section 2.2, and we calculated feature 4. Based on the results of the analysis of the subjective sowing data trends, we developed the following features to be used in the sowing number estimation model, which estimates how many sowings are appropriate for each divided area.
  • ISOM-AS: This word is an abbreviation for Integrated Inter-Subjective Objective Model—Appraisal Score. The model is trained to predict appraisal scores by utilizing the ISOM framework and performs the subjective evaluation of how good the field is in terms of synecological farming [6]. The original model used 241-dimensional features, which were a combination of 23-dimensional features using AMeDAS information (meteorological information) and 218-dimensional features extracted from images. In our study, the model was modified to use only 218-dimensional features extracted from images, excluding AMeDAS information so that it can be used universally even in cases where AMeDAS information cannot be obtained. The features can be summarized as follows: First, various color-related information (e.g., average value of red, standard deviation value of blue, etc.) and edge-related information are extracted from the entire segmented image in 95 dimensions. These include information showing the relationship between each RGB color (e.g., green and blue covariance, red mean–blue mean, etc.), information related to the HSV space, and information output using the Gray-Level Co-occurrence Matrix (GLCM) related to texture. Next, the same 95-dimensional features are extracted from the areas cut out by the coverage area detection process. Finally, information related to the coverage rate estimated by the coverage area detection process is obtained, and the coverage rate of each segmented image is reintegrated into its pre-segmentation state, captured as a coverage rate matrix, and information extracted by applying GLCM to it is obtained as 28-dimensional features. These features are used as 218-dimensional features. For details, please refer to the Supplementary Material.
  • Coverage Ratio: This is one of the original features used in ISOM-AS, which we upgraded for our study. It indicates the ratio of the topsoil plantation coverage area obtained by applying the covered area detection process to the entire image.
  • Uncoverage Ratio (also referred to as the grouped uncoverage ratio): We created this feature because highly proficient practitioners made a conscious decision not to sow in areas with many freshly germinated plants. Specifically, the feature is designed to report a higher coverage when such areas are present, so it differs from feature 2 and can capture the presence and distribution of freshly germinated plants. The process works as follows. If feature 2 is applied to an area with many newly germinated plants, the area is recognized as a mosaic of plantation-covered and bare patches. Therefore, to exclude this area from the sowing targets, it is merged into the covered area by applying a morphological “closing” operation. The uncovered area and its ratio are then obtained by inverting the resulting covered-area mask. The closing operation applies to binarized image data: it first dilates the area by the specified kernel size and then erodes it by the same kernel size, which connects nearby regions without changing the total area too much. Here, we set the kernel applied during the dilation and erosion to 5 × 5 pixels and the number of iterations to 2 (a code sketch of features 2–4 follows this list).
  • Sensed Height Average: This feature represents the average of the height of the divided area of the height map. Since we assume the field’s topsoil is flat, if the height information is greater than that of the soil, it indicates the presence of large growing plants. Practitioners with a higher proficiency grade tend to sow differently by height depending on the extent and number of growing plants, so we designed and introduced this feature to help perceive that area.
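The following sketch illustrates how features 2–4 could be computed for one segmented area, using OpenCV's morphological closing for the grouped uncoverage ratio as described above (5 × 5 kernel, 2 iterations); the mask conventions and function names are assumptions for illustration, not the authors' exact implementation.

```python
import cv2
import numpy as np

def segment_features(coverage_mask, height_patch):
    """Compute features 2-4 for one segmented area.
    coverage_mask: uint8 mask (255 = plantation-covered, 0 = bare) from the
    coverage area detection process; height_patch: 2D height map values
    for the same area (NaN where the sensor returned nothing)."""
    total = coverage_mask.size

    # Feature 2: coverage ratio of the plantation-covered area.
    coverage_ratio = np.count_nonzero(coverage_mask) / total

    # Feature 3: grouped uncoverage ratio. Morphological closing
    # (dilate then erode, 5x5 kernel, 2 iterations) merges the mosaic of
    # freshly germinated plants into the covered area before inverting.
    kernel = np.ones((5, 5), np.uint8)
    closed = cv2.morphologyEx(coverage_mask, cv2.MORPH_CLOSE, kernel, iterations=2)
    grouped_uncoverage_ratio = np.count_nonzero(closed == 0) / total

    # Feature 4: sensed height average over the segment, ignoring gaps.
    sensed_height_average = float(np.nanmean(height_patch))

    return coverage_ratio, grouped_uncoverage_ratio, sensed_height_average
```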
The clustering process is applied to each divided area to estimate the sowing locations. Since we do not want to destroy the existing plantation, we extract the non-plantation area (Figure 11d) and set it as the sowing candidate area. Next, we apply K-means clustering-based area division to the candidate area, using the number of sowings predicted by the model as the number of clusters, and obtain the division result together with the sowing positions, which are the centers of the divided areas (Figure 11e), as in the sketch below. After estimating the sowing positions for each segmented area using the process in Figure 11, the estimated sowing positions in the original image are obtained by re-integrating the divided areas (Figure 12). Magnified images of the sowing positions for each model are shown in the Supplementary Materials.
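A minimal sketch of this clustering step, assuming a binary mask of the non-plantation (sowing candidate) area and scikit-learn's KMeans, is shown below; the names and data layout are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_sowing_positions(uncovered_mask, n_sowings):
    """Cluster the non-plantation (sowing candidate) pixels of one segmented
    area into n_sowings groups; each cluster centre is taken as one sowing
    position (x, y) in pixel coordinates."""
    ys, xs = np.nonzero(uncovered_mask)
    points = np.column_stack([xs, ys]).astype(float)
    if n_sowings <= 0 or len(points) < n_sowings:
        return np.empty((0, 2))  # nothing to sow in this segment
    km = KMeans(n_clusters=n_sowings, n_init=10, random_state=0).fit(points)
    return km.cluster_centers_
```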

3.3.2. Evaluation of Sowing Number Estimator Model

Correctly estimating the number of sowings for each of the divided areas is crucial. To determine the appropriate number of sowings, it is necessary to have some knowledge derived from the experience of how plants form stems/leaves and grow above ground after germination, how they form roots and grow underground, and what kind of vegetation they form. If plants are introduced in the appropriate quantities and densities, they form a state of exquisite symbiosis, where each of them complements and enhances the others to form part of a strong, augmented ecosystem. However, if the number of introductions is too large or too dense, individual plants may compete both above and below ground and attack each other, resulting in partial or, in the worst case, total extinction. On the other hand, if the number of introductions is too low and the density is sparse, each plant only grows on its own and does not move into a symbiotic state. In addition, the number of sowings is determined by considering environmental changes, such as the shadows formed by the growing plants. Therefore, it is necessary to maintain a very delicate balance of trade-offs, as well as having the ability to predict future vegetation conditions over the long term and experience in dealing with a vast amount of vegetation.
The dataset used for this evaluation is the one introduced in Section 2.3. For training/evaluation data, we used the four images (Images 4–7), dividing each into 12 sub-images (four sections vertically and three horizontally) to generate a total of 48 divided images. We used 38 images for training and ten for evaluation, giving a training-to-evaluation ratio of approximately 8:2. For test data, we used the three images (Images 1–3) and divided each into 12 sub-images in the same way, resulting in a total of 36 images. Model performance was compared and evaluated against the sowing specification data of the practitioners at each proficiency grade and across several trained models.
We evaluated the following five regression models as training models and selected the model with the best performance for each feature: Linear Regressor, Ridge Regressor, Decision Tree Regressor, Random Forest Regressor, and Gradient Boosting Regressor. The reason we chose the classical regression model is that it can deliver a reasonable inference performance even with a small amount of training data. Since the training data is minimal, there is a high risk of overfitting; therefore, we defined the model as overfitting if the performance difference between the learning score and the test score was 0.13 or higher.
Verification was performed by comparing the performance of the above five models when trained with the following three types of feature sets. The Decision Tree Regressor and Gradient Boosting Regressor showed very strong overfitting, with train scores of 0.99–1.0 and evaluation scores of −0.43–0.15, so they were excluded from consideration. The Random Forest Regressor overfit less severely, but the difference between its train and evaluation scores was still around 0.14 to 0.58; we therefore also judged it to be overfitting and excluded it from consideration. Of the remaining two, the Linear Regressor and Ridge Regressor, the one with the higher performance for each feature set was adopted.
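The model selection described above can be sketched as follows, assuming scikit-learn implementations of the five regressors, pre-computed feature matrices, and the 0.13 train-evaluation gap as the overfitting criterion; the hyperparameters shown are library defaults, not necessarily the values used in the paper.

```python
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

CANDIDATES = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "decision_tree": DecisionTreeRegressor(random_state=0),
    "random_forest": RandomForestRegressor(random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

def select_regressor(X_train, y_train, X_eval, y_eval, gap=0.13):
    """Fit each candidate, drop those whose train/evaluation R^2 gap
    indicates overfitting, and keep the best remaining evaluation score."""
    best_name, best_model, best_score = None, None, -float("inf")
    for name, model in CANDIDATES.items():
        model.fit(X_train, y_train)
        train_score = model.score(X_train, y_train)
        eval_score = model.score(X_eval, y_eval)
        if train_score - eval_score >= gap:
            continue  # rejected as overfitting
        if eval_score > best_score:
            best_name, best_model, best_score = name, model, eval_score
    return best_name, best_model, best_score
```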
Model 1 used only the ISOM-AS results as input features, while Model 2 used both the ISOM-AS and the coverage ratio and grouped uncoverage ratio information, which was highlighted as important information from the subjectively labeled sowing data analysis trends. Model 3 used not only RGB images but also 2D height data as input data and added the average height of the target area to the features.
Table 2 lists the parameters used during training, including the data required as input, the features used, the regression model name, the training data score, the evaluation data score, and the average total processing time required to estimate the number of sowings with each model and to run the K-means clustering process for each image. As we can see, the training and evaluation scores for Model 1 were 0.288 and 0.266. Model 2 improved both the training and evaluation scores, to 0.351 and 0.293. This shows that using only “ISOM-AS” as a feature is not enough and that the “Coverage ratio” and “Grouped uncoverage ratio” features provided valid information for sowing. Model 3, which added the “Sensed height average” feature, showed only a slight improvement in the training and evaluation scores. This suggests that the added feature contributed little information; however, because the height data was compressed into a single average value, it could become informative if more detailed features were extracted from the height data. Model 3 requires depth data (2D height data) in addition to RGB images, whereas Models 1 and 2 require only RGB images as input. Therefore, it is advisable to use different models depending on the application and environment. For example, if the computing power of the machine mounted on the robot is limited and processing needs to be completed in as short a time as possible, or if the robot has only RGB image sensors, it is better to use Model 1 or 2. Model 3 is recommended when the robot is equipped with sufficient computing capacity or when processing is performed in the cloud, which has abundant computing power. However, in this experiment, we measured the processing time with the RGB data and depth data already available, and under these conditions, there was almost no increase in processing time due to the addition of depth data.
We evaluated the test data by comparing the number of sowings per local area and the total number of sowings in the entire image using the supervised-advanced sowing data as the master data. The comparisons were made using all subjectively labeled data with the proficiency grade from amateur to advanced and the learned sowing number estimation Models 1–3. For the comparison of each local area, the images were divided into 12 sections (four areas horizontally and three areas vertically), and the number of sowings in each area was compared. Figure 13 shows an example of the results of the comparison in Image 1. Red areas indicate a higher number of sowings than in the supervised-advanced data, where the darker the color, the greater the difference, and areas in blue indicate that the number of sowings was less than the supervised-advanced data, where the darker the color, the greater the difference. The white color indicates that the number of sowings is the same as in the supervised-advanced data. The results in Figure 13 show that practitioners with a lower proficiency level than advanced all tend to indicate an excessive number of sowings. In contrast, the trained models showed fewer areas that deviated significantly from the supervised-advanced data, and many white areas were observed.
Next, Table 3 summarizes the comparison of the number of sowing specifications for each local area and for the entire image on the test data (Images 1–3), covering the four human-labeled datasets and the predictions of the three trained models. The comparison for local areas is carried out by comparing the number of sowings specified for each segmented area with the supervised-advanced data; the absolute values of the differences are summed over all segmented areas in a single image and shown in the table as the “Sum differences of absolute values of local area”. Next, to compare the total number of sowing designations for the entire image, we subtract the total number indicated by the supervised-advanced data, which is shown as the “Difference of all areas”. When calculating the average of the “Difference of all areas” results for Images 1–3, the results were converted to absolute values before averaging. The “Integrated loss score” in the table combines the comparison results for each local area and for the entire image into a single value by summing the average of the “Sum differences of absolute values of local area” and the average of the absolute “Difference of all areas”. For this loss score, lower values mean higher performance.
Looking at the results, the integrated loss score improved as the proficiency level increased from amateur to beginner, intermediate, and advanced, in that order. The average local-area and whole-image evaluation results show the same trend without exception, indicating that these two evaluation indices and their integrated index are effective for estimating the degree of proficiency. Model 1, which uses only the ISOM-AS feature, outperforms the intermediate practitioner in terms of local-area scores. Model 2 also outperformed the intermediate practitioner's average local-area score and matched their average overall score. Model 3 performed exactly the same as Model 2.
The following approximate equation is derived by plotting the results of the four synecological farmers and performing quadratic function fitting:
\[ y = 0.0015x^2 - 0.2437x + 9.99842 \tag{9} \]
where x denotes the integrated loss score and y denotes the estimated synecological farming grades.
Applying the above equation, the predicted synecological farming proficiency of Models 1, 2, and 3 was calculated to be 5.5, 5.6, and 5.6, respectively, indicating that the performance was close to the intermediate practitioner grades. The integrated loss scores for intermediate and beginner were 24.3 and 30.3, and their predicted grades were 4.9 and 4.0, based on Equation (9). Considering these results, the intermediate practitioners seem to be a little too inexperienced to be called intermediate, while the beginner practitioners seem to be more proficient than the beginner grade. However, those who were rated as intermediates were more proficient than practitioners who were rated as beginners. This result was quite consistent with a subjective sense of proficiency.
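For illustration, Equation (9) can be applied directly to an integrated loss score to obtain an estimated proficiency grade, as in the short sketch below.

```python
def estimated_grade(integrated_loss_score):
    """Equation (9): map an integrated loss score x to an estimated
    synecological farming proficiency grade y."""
    x = integrated_loss_score
    return 0.0015 * x**2 - 0.2437 * x + 9.99842

# e.g. the beginner's loss score of 30.3 maps to roughly 4.0,
# consistent with the grade reported above.
print(estimated_grade(30.3))
```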

3.4. Path Planning Process and Evaluation

In this experiment, we verify the effectiveness of the robot movement for the predicted sowing position and the sowing operation. Since the target is plants, it is necessary for the robot to avoid damaging them. Each point of the sowing target is calculated in two dimensions (XY), and the path is planned in two-dimensional space. The path planning in XY-space is divided into two parts: searching for a path plan in XY-space and examining the appropriate arm movement height setting in the Z-axis. The approach of this method is implemented using the following three steps:
  • The height at which the robot arm is raised so that it does not damage the plants is determined, and the robot moves through the planned path at that height.
  • When it reaches the target coordinates for sowing in the XY-coordinates, the robot is controlled in the Z-axis direction (vertical direction) to execute sowing.
  • The robot is then raised to the same height in the Z-axis direction. The control is repeated to move on the XY-space to the next sowing coordinate.
Figure 14 shows the flow of the path-planning process, where boxes with square corners and gray coloring indicate data and boxes with rounded corners and no coloring indicate processing. An RGB image, a 2D height map, and the result of the sowing position coordinate estimation are prepared as input. The process is achieved by the following five steps:
  • Using the 2D height data, the sowing position coordinates indicated in 2D image space are converted to 3D real space coordinates in units of meters.
  • The RGB image is used as the input to perform the coverage area prediction and grouped uncoverage area prediction processes.
  • By comparing the estimated sowing position and predicted coverage area, it is determined whether the estimated sowing position coordinates are specified on the vegetation or not.
  • If the sowing position coordinates are not on vegetation, it is assumed that there is no problem. However, if they are on vegetation, the Z-axis value would be taken from the top of the plants rather than the soil, which could cause a malfunction; therefore, a non-vegetated area is derived using the results of the grouped uncoverage area prediction process, and the soil height in the surrounding area is estimated by averaging the height data for that area. That value is then used as the Z-axis value.
  • The path plan in XY-space is then computed with the appropriate arm movement height set on the Z-axis, and the output is a control movement matrix in XYZ-space from which the series of robotic movements is generated.
Figure 14. Process flow for robot sowing control.
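As a rough illustration of the control movement matrix produced by this flow, the sketch below turns an ordered list of sowing targets and a clearance height into a raise/move/lower/sow command sequence; the data layout and action labels are our assumptions, and the default start pose follows the tool-tip offset given in Section 2.2.

```python
def build_control_sequence(sowing_points, clearance_z, start_xy=(0.0, -0.13)):
    """Turn ordered sowing targets [(x, y, soil_z), ...] (metres, sensor frame)
    into (x, y, z, action) commands: travel at the clearance height, descend
    to the soil at each target, sow, and rise again."""
    commands = []
    x, y = start_xy
    for tx, ty, soil_z in sowing_points:
        commands.append((x, y, clearance_z, "raise"))      # lift above the vegetation
        commands.append((tx, ty, clearance_z, "move_xy"))  # travel in XY-space
        commands.append((tx, ty, soil_z, "lower"))         # descend to the soil surface
        commands.append((tx, ty, soil_z, "sow"))           # drill and release the seed ball
        x, y = tx, ty
    commands.append((x, y, clearance_z, "raise"))          # finish in the raised pose
    return commands
```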
This section describes the results of the path planning evaluation experiment. Here, we compare the results of path planning using three methods for sowing coordinates in XY-space: the Random method, the Greedy method, and the 2-Opt Neighborhood method. The Random method selects each sowing point as a reference point in a random manner. The Greedy method is an algorithm that searches for the closest point to the current point repeatedly. The 2-Opt Neighborhood method seeks a more efficient path by repeatedly performing a neighborhood operation, which is an operation to replace any two edges in the solution after setting the initial solution. In addition, for the Greedy and 2-Opt Neighborhood methods, we developed a path planning mode that considers the control speed parameters of each axis of the robot and evaluates the results. Figure 15 shows an example of the actual path planning results of every method in Image 1. As described in Section 2.2 and Section 2.3, the position of the tip of the sowing interface is (0, −0.13, −0.14) in the (X, Y, Z) [m] space from the sensor position, and the path planning is estimated using those coordinates as the initial position. The control speed of each axis of the robot is 45 m/min in the X-axis, 30 m/min in the Y-axis, and 0.36 m/min in the Z-axis. The path planning is estimated considering these parameters of the robot’s control speed.
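A compact sketch of the Greedy and 2-Opt Neighborhood planning with an axis-speed-aware cost is given below; it assumes the X- and Y-axes move simultaneously, so the cost of a leg is the travel time of the slower axis, which is a simplification introduced for illustration rather than the robot's exact kinematic model.

```python
import itertools

VX, VY = 45 / 60, 30 / 60   # X- and Y-axis control speeds in m/s (45 and 30 m/min)

def leg_time(a, b):
    """Travel time between two XY points, assuming both axes move simultaneously."""
    return max(abs(b[0] - a[0]) / VX, abs(b[1] - a[1]) / VY)

def path_time(points, order, start):
    """Total XY travel time for visiting points in the given order from start."""
    total, current = 0.0, start
    for i in order:
        total += leg_time(current, points[i])
        current = points[i]
    return total

def greedy_order(points, start):
    """Greedy method: repeatedly move to the cheapest not-yet-visited point."""
    remaining = set(range(len(points)))
    order, current = [], start
    while remaining:
        nxt = min(remaining, key=lambda i: leg_time(current, points[i]))
        order.append(nxt)
        current = points[nxt]
        remaining.remove(nxt)
    return order

def two_opt(points, order, start):
    """2-Opt neighborhood search: reverse segments while it reduces travel time."""
    best = list(order)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(best)), 2):
            candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
            if path_time(points, candidate, start) < path_time(points, best, start):
                best, improved = candidate, True
    return best

# Usage: order = two_opt(points, greedy_order(points, start), start)
```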
In Figure 15, the yellow circle with a red frame indicates the initial position. As we can see in Figure 15a, the Random method does not perform path planning adequately and results in inefficient motion. In contrast, path planning using the Greedy method is efficient to some extent, but since the algorithm is based on the method of finding the nearest point from the current point and moving to that point, there are some inefficiencies when looking at the overall path. In particular, the transition to the last remaining point inevitably involves traveling a long distance, a drawback that becomes even more apparent when the path planning target is wider. In terms of the overall path, the result using the 2-Opt Neighborhood method is more efficient, and the inefficient movement that occurs in the latter half of the path in the Greedy method is eliminated, resulting in a more efficient path estimation.
Next, Figure 15b,c show the results for the Greedy method without and with consideration of the robot's control speed parameters; with the parameters considered, the path becomes more efficient. However, this change occurred somewhat by chance: the X-axis can move 1.5 times faster than the Y-axis in terms of control speed, so the robot can be controlled more efficiently and in a shorter time by using more horizontal movements than vertical movements. For the 2-Opt Neighborhood method, in contrast, considering the robot control speed produced no change in the planned paths, as shown in Figure 15d,e, which depict this method without and with the control speed parameters. This is because the difference between the axis parameters was small, and a larger difference would be expected to result in different planned paths.
Table 4 shows the average values of the total path length [m] and total control time [s] for the three scenes in Images 1–3 when path planning is performed with each method and the Z-axis height is set. The total path length is calculated by adding the total path length in the XY-space and the path length along the Z-axis. The ratio of the path length reduced by each method is shown in the “Reduction rate” column. The table also lists the total time required for control in the XY-space, the time required for control along the Z-axis, the time required for the sowing operation by the sowing mechanism, the total time required for control, and the percentage of control time reduced by each method. The total time required for control is calculated by adding the time required for controlling the XY-space, the Z-axis, and the sowing operation. The top item in Table 4, “Random & Height: 1.1 m”, shows the simulation results when the Random method is used to set the order of the path in the XY-space for the sowing positions, and the robot always pulls up to a height of 1.1 m, its maximum height, after performing the sowing work at each designated coordinate. “Height: 0.8 m” indicates the case in which the arm is raised to a height greater than that of all plants in Images 1, 2, and 3, set in this study to 80 cm; this setting assumes a case in which a person visually inspects the scene and designates the Z-axis parameter. “Height: Highest” measures the 3D data for each scene and sets the raised height to the highest value in the obtained data, with a margin of 3 cm. “Greedy” indicates path planning in XY-space using the Greedy method, “2-Opt” indicates path planning in XY-space using the 2-Opt Neighborhood method, and “RRP” stands for “Regarding the robotics motor control parameters”, in which the path planning process accounts for the robot’s control speed parameters on each axis.
The results in Table 4 show that the path length and control time in XY-space improve by about 47% when the path planning is changed from the Random method to the Greedy method. The ± values after the numbers indicate standard deviations. For the Greedy method, considering the robot's control speed parameters slightly improves the path length while slightly increasing the control time. This is because the Greedy method, which performs only an elementary neighborhood search, does not always produce the optimal solution, and because the difference between the control speed parameters of the X- and Y-axes was small, so the explored paths were not long enough for the effect to appear. Changing from the Greedy method to the 2-Opt Neighborhood method improves the path length and control time in XY-space by about 5%. Lowering the Z-axis retraction height from 1.1 m to 0.8 m for the XY movements between tasks improves the Z-axis path length and control time by about 30%. Furthermore, adapting the Z-axis height to the measured 3D data instead of the fixed 0.8 m improves the Z-axis path length and control time by about a further 34%, bringing the overall control time reduction to 52%. Overall, the Z-axis accounts for most of the total path length and control time and therefore has the largest impact.

3.5. Integrated Overall Evaluation Results

Table 5 and Figure 16 compare the total average time required for each method, including the average processing time for sowing position estimation and path planning and the average total control time required for sowing. We selected three combinations of methods that represent the main approaches used in this paper. The ± values after the numbers indicate standard deviations. Note that for Model 1 (Model 3) and Model 2 (Model 3), the processing times for sowing position estimation were measured with Model 1 and Model 2, respectively, while the control time required for sowing is based on the result with Model 3. This is because the control time is on a much longer time scale than the processing time (see Figure 16), so differences in processing time would be hidden if different control times were used. Changing the sowing position estimation model from Model 1 to Model 2 or Model 3 increases the processing time by about 11%. Changing the path planning from the Random method to the Greedy method and to the 2-Opt Neighborhood method that considers the robot's control speed parameters changes the route planning time by about −2% and +1%, respectively, while improving the control time in XY-space by 46% and 51%. Most significantly, comparing Model 1 (Model 3) & Random & Height: 1.1 m with Model 3 & 2-Opt_RRP & Height: Highest, the reduction rate is 51%, and the total required time is reduced from 57 min to 28 min. When sowing control is applied to a farm area of 16 cm × 22 cm (352 cm², 0.0352 m²), the current simulation results show that the average time required is about 28 min.
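As a quick consistency check of the figures above, the totals in Table 5 can be reproduced by summing the five time components of the baseline and proposed combinations (values in minutes, taken from the table):

```python
# Components in minutes, taken directly from Table 5.
baseline = 0.28 + 0.272 + 0.14 + 55 + 1.6    # Model 1 (Model 3) & Random & Height: 1.1 m
proposed = 0.31 + 0.274 + 0.068 + 26 + 1.6   # Model 3 & 2-Opt_RRP & Height: Highest
reduction = 1 - proposed / baseline
print(round(baseline), round(proposed), f"{reduction:.0%}")   # 57 28 51%
```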

4. Discussion

In this paper, we proposed an automated sowing operation method for a robot developed for densely mixed polyculture environments; by mounting the RGB-depth sensor Kinect v2 on the robot, we achieved highly accurate automation of sowing. We evaluated the designated sowing locations by formulating a framework for determining the number of sowings and the sowing coordinates that are effective for achieving a densely mixed polyculture and building augmented ecosystems, which had not been done before. With this framework, using data subjectively labeled by practitioners of synecological farming, the model was able to learn knowledge gained through experience and achieved nearly intermediate-level proficiency in synecological farming when estimating sowing locations. The path planning and control time simulation reduced the control time by 51% compared to the reference operation. When sowing control is applied to a farm area of 91 cm × 130 cm (1.186 m²), the simulation shows that the average time required is about 28 min.
As future work, we first need to conduct experiments in the field. In principle, the process described in this paper should run fully automatically, without operator intervention, in a real field. However, sowing can fail even when the sowing interface has been verified in isolation, and in synecological vegetation, where a virtually unlimited variety of situations can arise, some kind of malfunction is to be expected. To overcome these issues, operator intervention, additional recognition processing, or the development of a new interface may be necessary. Regarding sowing control, even when sowing is designated on a plant's leaves, the current interface automatically recognizes and sows into the soil beneath the leaves if the leaves are soft or the coordinates do not overlap with the stem, so the sowing operation can be performed without issues. However, in cases where the leaves are hard, the coordinates overlap with the trunk, or the vegetation is more complex, the sowing operation may not be completed correctly. In such cases, improvements, such as an interface that moves the leaves aside or a control mechanism that executes sowing by circling around the area, are likely to be necessary.
Next, regarding the results of the simulations, there are two key technical areas to be explored.
(1)
To improve the accuracy of sowing location estimation: currently, the number of sowings per area is estimated by extracting 218-dimensional image features from the image and using four types of information as features, namely the output of the ISOM-AS inter-personal objectivity model, the coverage ratio, the grouped uncovered-area ratio, and the average sensed plantation height (see the sketch after this list). The performance of the sowing number estimation model is presumed to lie between the intermediate and advanced proficiency grades of the synecological farming method. To improve versatility, a larger amount of data on various vegetation conditions should be collected for training, since the number of training and test samples is currently small. In addition, to improve accuracy, it would be effective to add features deemed necessary for sowing decisions, as the four features currently used are still insufficient to capture the information experts rely on when sowing. In particular, the plantation height information currently enters only as an average value, so increasing the resolution of this vegetation height feature is likely to yield more effective features.
(2)
To speed up the sowing operation: the current operation time is still too long for practical use. In our evaluation, a comprehensive comparison was made, from the processing time of the software that performs the estimation to the control time actually required by the robot hardware, and the bottleneck of the overall required time was found to be the Z-axis control time. There are two major approaches to improving this. The first is to make the path planning more sophisticated; specifically, there is room for improvement in Z-axis optimization, for example by refining the software so that Z-axis movement is minimized through path planning that adapts to the plantation height of the observed area. The second is to increase the Z-axis movement speed, since the Z-axis of the current hardware is slow compared to the other axes.
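The following is a minimal sketch of the sowing number estimation described in item (1) above, assuming a scikit-learn Ridge regressor over the four features. The mask-based coverage computation, the simplified uncovered-area stand-in, and the toy data are hypothetical placeholders, not the paper's actual feature extraction pipeline or field data.

```python
import numpy as np
from sklearn.linear_model import Ridge

def extract_features(isom_as_score, coverage_mask, height_map):
    """Four-dimensional feature vector per local area, in the spirit of Model 3.
    `isom_as_score` stands in for the ISOM-AS model output (computed elsewhere),
    `coverage_mask` is a boolean topsoil plantation coverage mask, and
    `height_map` holds sensed plant heights [m]."""
    coverage = float(np.mean(coverage_mask))      # coverage ratio
    uncovered = 1.0 - coverage                    # simplified stand-in for the grouped uncovered-area ratio
    mean_height = float(np.mean(height_map))      # average sensed plantation height
    return [isom_as_score, coverage, uncovered, mean_height]

# Toy data standing in for practitioner-labeled local areas (not real field data).
rng = np.random.default_rng(0)
X = np.array([
    extract_features(rng.random(), rng.random((64, 64)) > 0.5, rng.random((64, 64)) * 0.8)
    for _ in range(30)
])
y = rng.integers(0, 5, size=30)          # labeled number of sowings per local area

model = Ridge(alpha=1.0)                 # regularized linear model, as in Models 2/3
model.fit(X, y)
estimated_sowings = model.predict(X[:3])
```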

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/agriculture15141536/s1, Table S1. List of Python functions used, Table S2. Image feature list designed, Figure S1. Result of sowing position predicted by Model 3, Figure S2. Result of sowing position predicted by Model 2, Figure S3. Result of sowing position predicted by Model 1.

Author Contributions

S.A. developed the algorithm; S.A. and T.O. performed the experiments; T.O., M.F. and A.T. helped draft the manuscript; and S.A. authored the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by Sony CSL.

Data Availability Statement

The field image data are available upon reasonable request.

Acknowledgments

This study was conducted with the support of the Research Institute for Science and Engineering, Waseda University, and the Future Robotics Organization, Waseda University. The experimental field, seeds, and seedlings were provided by Sustainergy Company. The supporting organizations had no control over the interpretation, writing, or publication of this study, and we wish to thank them for their technical and logistical support. Synecoculture™ is a trademark of Sony Group Corporation. Finally, we thank Shino Aotake, Shisei Tanaka, Asaka Miyata, and Satoru Okamoto, who helped with data labeling, and Akito Doi, who provided information on sowing mechanisms in robotics.

Conflicts of Interest

Shuntaro Aotake and Masatoshi Funabashi were employed by Sony Computer Science Laboratories, Inc. All authors declare no conflicts of interest.

References

  1. Funabashi, M. Human augmentation of ecosystems: Objectives for food production and science by 2045. NPJ Sci. Food 2018, 2, 16. [Google Scholar] [CrossRef] [PubMed]
  2. Funabashi, M. Synecological farming: Theoretical foundation on biodiversity responses of plant communities. Plant Biotechnol. Spec. Issue Plants Environ. Responses 2016, 33, 213–234. [Google Scholar] [CrossRef] [PubMed]
  3. Funabashi, M. Synecoculture Manual 2016 Version (English Version). Research and Education Material of UniTwin UNESCO Complex Systems Digital Campus, e-Laboratory: Open Systems Exploration for Ecosystems Leveraging 2016, 2. Available online: https://synecoculture.sonycsl.co.jp/public/2016%20Synecoculture%20Manual_compressed.pdf (accessed on 2 December 2024).
  4. Funabashi, M. Power-law productivity of highly biodiverse agroecosystems supports land recovery and climate resilience. NPJ Sustain. Agric. 2024, 2, 8. [Google Scholar] [CrossRef]
  5. Ohta, K.; Kawaoka, T.; Funabashi, M. Secondary Metabolite Differences between Naturally Grown and Conventional Coarse Green Tea. Agriculture 2020, 10, 632. [Google Scholar] [CrossRef]
  6. Aotake, S.; Takanishi, A.; Funabashi, M. Modeling ecosystem management based on the integration of image analysis and human subjective evaluation—Case studies with synecological farming. Lect. Notes Comput. Sci. 2023, 13927, 151–164. Available online: https://synecoculture.sonycsl.co.jp/public/20230420%2033_fullpaper.pdf (accessed on 2 December 2024).
  7. Otani, T.; Itoh, A.; Mizukami, H.; Murakami, M.; Yoshida, S.; Terae, K.; Tanaka, T.; Masaya, K.; Aotake, S.; Funabashi, N.; et al. Agricultural Robot under Solar Panels for Sowing, Pruning, and Harvesting in a Synecoculture Environment. Agriculture 2023, 13, 18. [Google Scholar] [CrossRef]
  8. Doi, A.; Maeda, N.; Tanaka, T.; Masaya, K.; Aotake, S.; Funabashi, M.; Miki, H.; Otani, T.; Takanishi, A. Development of the Agricultural Robot in Synecoculture™ Environment (8th Report, Development of sow planting mechanism for multiple sows interchangeable at the end of the arm and sow dumpling making machine). J. Robot. Soc. Jpn. 2024, 42, 1031–1034. [Google Scholar] [CrossRef]
  9. Gaston, K.; O’Neill, M.A. Automated species identification: Why not? Philos. Trans. R. Soc. Lond. B Biol. Sci. 2004, 359, 655–667. [Google Scholar] [CrossRef] [PubMed]
  10. Pimm, S.; Alibhai, S.; Bergl, R.; Dehgan, A.; Giri, C.; Jewell, Z.; Joppa, L.; Lays, R.; Loarie, S. Emerging technologies to conserve biodiversity. Trends Ecol. Evol. 2015, 30, 685–696. [Google Scholar] [CrossRef] [PubMed]
  11. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [PubMed]
  12. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  13. Wäldchen, J.; Rzanny, M.; Seeland, M.; Mäder, P. Automated plant species identification—Trends and future directions. PLoS Comput. Biol. 2018, 14, e1005993. [Google Scholar] [CrossRef] [PubMed]
  14. Tian, L.; Slaughter, D. Environmentally adaptive segmentation algorithm for outdoor image segmentation. Comput. Electron. Agric. 1998, 21, 153–168. [Google Scholar] [CrossRef]
  15. Wäldchen, J.; Mäder, P. Machine learning for image-based species identification. Methods Ecol. Evol. 2018, 9, 2216–2225. [Google Scholar] [CrossRef]
  16. Carranza-Rojas, J.; Goeau, H.; Bonnet, P.; Meta-Montero, E.; Joly, A. Going deeper in the automated identification of Herbarium specimens. BMC Evol. Biol. 2017, 17, 181. [Google Scholar] [CrossRef] [PubMed]
  17. Joly, A.; Goëau, H.; Bonnet, P.; Bakic, V.; Barbe, J.; Selmi, S.; Yahiaoui, I.; Carré, J.; Mouysset, E.; Molino, J.F.; et al. Interactive plant identification based on social image data. Ecol. Inform. 2014, 23, 22–34. [Google Scholar] [CrossRef]
  18. Umar, M.; Altaf, S.; Ahmad, S.; Mahmoud, H.; Mohamed, A.S.N.; Ayub, R. Precision Agriculture Through Deep Learning: Tomato Plant Multiple Diseases Recognition With CNN and Improved YOLOv7. IEEE Access 2024, 12, 49167–49183. [Google Scholar] [CrossRef]
  19. Yu, F.; Zhang, Q.; Xiao, J.; Ma, Y.; Wang, M.; Luan, R.; Liu, X.; Ping, Y.; Nie, Y.; Tao, Z.; et al. Progress in the Application of CNN-Based Image Classification and Recognition in Whole Crop Growth Cycles. Remote Sens. 2023, 15, 2988. [Google Scholar] [CrossRef]
  20. Lu, J.; Tan, L.; Jiang, H. Review on Convolutional Neural Network (CNN) Applied to Plant Leaf Disease Classification. Agriculture 2021, 11, 707. [Google Scholar] [CrossRef]
  21. Congalton, R.; Gu, J.; Yadav, K.; Thenkabail, P.; Ozdogan, M. Global land cover mapping: A review and uncertainty analysis. Remote Sens. 2014, 6, 12070–12093. [Google Scholar] [CrossRef]
  22. Khatami, R.; Mountrakis, G.; Stehman, S.V. A meta-analysis of remote sensing research on supervised pixel-based land-cover image classification processes: General guidelines for practitioners and future research. Remote Sens. Environ. 2016, 177, 89–100. [Google Scholar] [CrossRef]
  23. Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 270–293. [Google Scholar] [CrossRef]
  24. Fassnacht, F.E.; Latifi, H.; Sterenczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  25. Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
  26. Guirado, E.; Tabil, S.; Segura, D.; Cabello, J.; Herrera, F. Deep-Learning versus OBIA for Scattered Shrub Detection with Google Earth Imagery: Ziziphus lotus as Case Study. Remote Sens. 2017, 9, 1220. [Google Scholar] [CrossRef]
  27. Guirado, E.; Segura, D.; Cabello, J.; Ruiz, S.; Herrera, F.; Tabik, S. Tree Cover Estimation in Global Drylands from Space Using Deep Learning. Remote Sens. 2020, 12, 343. [Google Scholar] [CrossRef]
  28. Onishi, M.; Ise, T. Automatic classification of trees using UAV onboard camera and deep learning. arXiv 2018, arXiv:1804.10390. [Google Scholar] [CrossRef]
  29. Goodwin, N.; Turner, R.; Merton, R. Classifying Eucalyptus forests with high spatial and spectral resolution imagery: An investigation of individual species and vegetation communities. Aust. J. Bot. 2005, 53, 337–345. [Google Scholar] [CrossRef]
  30. Dalponte, M.; Bruzzone, L.; Gianelle, D. Remote Sensing of Environment Tree species classification in the Southern Alps based on the fusion of very high geometrical resolution multispectral/hyperspectral images and LiDAR data. Remote Sens. Environ. 2012, 123, 258–270. [Google Scholar] [CrossRef]
  31. Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with random forest using very high spatial resolution 8-band WorldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef]
  32. Guo, Y.; Liu, Y.; Georgiou, T.; Lew, M.S. A review of semantic segmentation using deep neural networks. Int. J. Multimed. Inf. Retr. 2018, 7, 87–93. [Google Scholar] [CrossRef]
  33. Ise, T.; Minagawa, M.; Onishi, M. Classifying 3 moss species by deep learning using the “Chopped Picture” method. Open J. Ecol. 2018, 8, 166–173. [Google Scholar] [CrossRef]
  34. Watanabe, S.; Sumi, K.; Ise, T. Automatic vegetation identification in Google Earth images using a convolutional neural network: A case study for Japanese bamboo forests. BMC Ecol. 2018, 20, 65. [Google Scholar]
  35. Soya, K.; Aotake, S.; Ogata, H.; Ohya, J.; Ohtani, T.; Takanishi, A.; Funabashi, M. Study of a Method for Detecting Dominant Vegetation in a Field from RGB Images Using Deep Learning in Synecoculture Environment. In Proceedings of the 49th Annual Meeting of the Institute of Image Electronics Engineers of Japan, Online, 24–26 June 2021. [Google Scholar]
  36. Yoshizaki, R.; Aotake, S.; Ogata, H.; Ohya, J.; Ohtani, T.; Takanishi, A.; Funabashi, M. Study of a Method for Recognizing Field Covering Situation by Applying Semantic Segmentation to RGB Images in Synecoculture Environment. In Proceedings of the 49th Annual Meeting of the Institute of Image Electronics Engineers of Japan, Online, 24–26 June 2021. [Google Scholar]
  37. Tokoro, M. Open Systems Science: A Challenge to Open Systems Problems. Springer Proceedings in Complexity 2017; pp. 213–221. Available online: https://synecoculture.sonycsl.co.jp/public/2017_CSDC_Tokoro.pdf (accessed on 2 December 2024).
  38. Funabashi, M.; Minami, T. Dynamical assessment of aboveground and underground biodiversity with supportive AI. Meas. Sens. 2021, 18, 100167. [Google Scholar] [CrossRef]
  39. Funabashi, M. Augmentation of Plant Genetic Diversity in Synecoculture: Theory and Practice in Temperate and Tropical Zones. Genet. Divers. Hortic. Plants Sustain. Dev. Biodivers. 2019, 22, 3–46. Available online: https://synecoculture.sonycsl.co.jp/public/20191110%20Augmentation%20of%20Plant%20Genetic%20Diversity%20in%20Synecoculture%20-Theory%20and%20Practice%20in%20Temperate%20and%20Tropical%20Zones%20Springer%20Nature%20Masa%20Funabashi.pdf (accessed on 2 December 2024).
  40. iNaturalist Homepage. Available online: https://www.inaturalist.org/ (accessed on 2 December 2024).
  41. SEED Biocomplexity Homepage. Available online: https://seed-index.com/ (accessed on 2 December 2024).
  42. Ohta, K.; Suzuki, G.; Miyazawa, K.; Funabashi, M. Open systems navigation based on system-level difference analysis—Case studies with urban augmented ecosystems. Meas. Sens. 2022, 23, 100401. [Google Scholar] [CrossRef]
  43. Funabashi, M. Citizen Science and Topology of Mind: Complexity, Computation and Criticality in Data-Driven Exploration of Open Complex Systems. Entropy 2017, 19, 181. [Google Scholar] [CrossRef]
  44. Funabashi, M. Open Systems Exploration: An Example with Ecosystem Management. Springer Proceedings in Complexity. 2017. pp. 223–243. Available online: https://synecoculture.sonycsl.co.jp/public/2017_CSDC_Funabashi_OSE.pdf (accessed on 2 December 2024).
  45. Li, H.; Liu, H.; Zhou, J.; Wei, G.; Shi, S.; Zhang, X.; Zhang, R.; Zhu, H.; He, T. Development and First Results of a No-Till Pneumatic Sower for Maize Precise Sowing in Huang-Huai-Hai Plain of China. Agriculture 2021, 11, 1023. [Google Scholar] [CrossRef]
  46. Kumar, P.; Ashok, G. Design and fabrication of smart sow sowing robot. Mater. Today Proc. 2020, 39, 354–358. [Google Scholar] [CrossRef]
  47. Carlos, J.; Choque, M.; Erick, M.; Fiestas, S.; Ricardo, S.; Prado, G. Efficient implementation of a Cartesian Farmbot robot for agricultural applications in the region La Libertad-Peru. In Proceedings of the IEEE ANDESCON 2018, Santiago de Cali, Colombia, 22–24 August 2018; pp. 1–6. [Google Scholar]
  48. FarmDroid Document. Available online: https://farmdroid.com/wp-content/uploads/Brochure-FD20-2023-web.pdf (accessed on 2 December 2024).
  49. Sugiyama, S.; Osawa, K.; Mitani, K.; Itoh, A.; Kondo, T.; Morita, M.; Aotake, S.; Funabashi, M.; Otani, T.; Takanishi, A. Development of an Agricultural Operation Support Robot in a Synecoculture™ Farming Environment (Fourth Report: Development of a Tool-Changeable Cutting Tool for Pruning and Harvesting Multiple Crops). J. Robot. Soc. Jpn. 2022, 41, 889–892. [Google Scholar] [CrossRef]
  50. GeekBench6 Home Page. Available online: https://www.geekbench.com/ (accessed on 2 December 2024).
Figure 1. (a) Main body of agricultural robot. (b) DoF arrangement [7].
Figure 2. CAD of the rail mechanism. (a) X-direction. (b) Y-direction [7].
Figure 3. (a) Structure of the sowing interface. (b) Operating mechanism of sowing control. (c) Actual photo of the sowing interface [8].
Figure 4. (a) New seed ball production process. (b) Seed ball production machine. (c) A bell pepper seed and a seed ball [8].
Figure 5. (a) Upper and (b) lower view of Kinect v2 attached to a robot.
Figure 6. (a) RGB and (b) 3D point cloud data captured by Kinect v2.
Figure 7. (a) Overhead RGB image of the field and (b) 2D map of height information [m] of the same area.
Figure 8. Data for sowing trend analysis and training, evaluation, and test data of sowing number estimation model.
Figure 9. (a) Overhead RGB image of the field (Image 1). (b) Results of the plantation area detection function, where yellow denotes area detected as plantation coverage area. (c) Comparison of detected results with ground-truth labels (Green: true positive, red: false positive, blue: false negative, black: true negative).
Figure 10. Process flow for estimating sowing position.
Figure 11. (a) Original image. (b) Divided area image. (c) Topsoil plantation coverage area detection results. (d) Grouped uncovered area with plantation topsoil area detection results. (e) Region segmentation results of K-means clustering using estimation results of sowing number estimation model. (f) Sowing position results in split image.
Figure 12. Results of sowing location estimation using Model 3 in Image 3 and the magnified image of the part of the result image.
Figure 13. Difference between supervised-advanced and every proficiency label data/prediction method on Image 3.
Figure 15. Path planning results for sowing coordinates using the sowing position estimation for Image 3 output by (a) Random method, (b) Greedy method, (c) Greedy method considering the robot’s control velocity of each axis, (d) 2-Opt Neighborhood method, and (e) 2-Opt Neighborhood method considering the robot’s control velocity of each axis.
Figure 16. Simulated overall average required time for each method.
Table 1. Topsoil plantation coverage ratio and number of sowings specified by each practitioner on each image.
| Data | | Image 1 | Image 2 | Image 3 |
|---|---|---|---|---|
| Number of sowings | Amateur | 42 | 0 | 78 |
| | Beginner | 40 | 19 | 31 |
| | Intermediate | 30 | 24 | 27 |
| | Advanced | 8 | 17 | 26 |
| | Supervised-Advanced | 13 | 17 | 26 |
| Coverage [%] | | 47.05 | 53.51 | 42.53 |
Table 2. Overview of model parameters and performance of estimating the number of sowings.
| Model | Required Data | Features | Model Name | Train Data Score | Evaluation Data Score | Average Processing Time [s] |
|---|---|---|---|---|---|---|
| Model 1 | RGB image | ISOM-AS | Linear regression | 0.288 | 0.266 | 17.1 ± 1.7 |
| Model 2 | RGB image | ISOM-AS, coverage ratio, grouped uncoverage ratio | Ridge regression | 0.351 | 0.293 | 18.7 ± 1.4 |
| Model 3 | RGB image and depth data | ISOM-AS, coverage ratio, grouped uncoverage ratio, sensed height average | Ridge regression | 0.352 | 0.292 | 18.7 ± 2.4 |
Table 3. Comparison of the number of sowing specifications for human-labeled data and training models with supervised-advanced data in test data.
| Model | Sum of Absolute Local-Area Differences: Image 1 | Image 2 | Image 3 | Average | Difference of All Areas: Image 1 | Image 2 | Image 3 | Abs Average | Integrated Loss Score |
|---|---|---|---|---|---|---|---|---|---|
| Amateur | 42 | 17 | 70 | 43 | 29 | −17 | 52 | 32.7 | 75.7 |
| Beginner | 32 | 8 | 17 | 19 | 27 | 2 | 5 | 11.3 | 30.3 |
| Intermediate | 24 | 11 | 13 | 16 | 17 | 7 | 1 | 8.3 | 24.3 |
| Advanced | 5 | 0 | 0 | 1.7 | −5 | 0 | 0 | 1.7 | 3.3 |
| Model 1 | 8 | 12 | 16 | 12 | 2 | 10 | 16 | 9.3 | 21.3 |
| Model 2 | 8 | 12 | 17 | 12.3 | 2 | 10 | 13 | 8.3 | 20.7 |
| Model 3 | 8 | 12 | 17 | 12.3 | 2 | 10 | 13 | 8.3 | 20.7 |
Grades were set as Amateur = 0, Beginner = 3, Intermediate = 6, Advanced = 9, and Supervised-Advanced = 10.
Table 4. Simulation results of total path length (m) and total control time (sec) of paths planned by each method.
| Method | XY-Space Path [m] | Z-Axis Path [m] | Total Path [m] | Path Reduction Rate | XY-Space Time [s] | Z-Axis Time [s] | Sowing Action [s] | Total Time [s] | Time Reduction Rate |
|---|---|---|---|---|---|---|---|---|---|
| Random & Height: 1.1 m | 7.4 ± 1.4 | 19.9 ± 5.8 | 27.3 ± 7.2 | Baseline | 8.5 ± 1.6 | 3323 ± 963 | 98.2 ± 29.0 | 3430 ± 994 | Baseline |
| Greedy & Height: 1.1 m | 3.5 ± 0.5 | 19.9 ± 5.8 | 23.4 ± 6.3 | −14.3% | 4.5 ± 0.6 | 3323 ± 963 | 98.2 ± 29.0 | 3426 ± 993 | −0.12% |
| 2-Opt & Height: 1.1 m | 3.4 ± 0.1 | 19.9 ± 5.8 | 23.3 ± 5.9 | −14.7% | 4.4 ± 0.2 | 3323 ± 963 | 98.2 ± 29.0 | 3426 ± 993 | −0.12% |
| 2-Opt_RRP & Height: 1.1 m | 3.2 ± 0.4 | 19.9 ± 5.8 | 23.1 ± 6.2 | −15.4% | 4.1 ± 0.5 | 3323 ± 963 | 98.2 ± 29.0 | 3425 ± 992 | −0.15% |
| 2-Opt_RRP & Height: 0.8 m | 3.2 ± 0.4 | 14.0 ± 4.0 | 17.2 ± 4.4 | −37% | 4.1 ± 0.5 | 2340 ± 658 | 98.2 ± 29.0 | 2442 ± 688 | −29% |
| 2-Opt_RRP & Height: Highest | 3.2 ± 0.4 | 9.4 ± 6.6 | 12.6 ± 7.0 | −54% | 4.1 ± 0.5 | 1558 ± 1104 | 98.2 ± 29.0 | 1660 ± 1134 | −52% |
Table 5. Simulated overall average required time with each method.
| Method | Sowing Position Estimation [min] | Route Planning [min] | XY-Space Control [min] | Z-Axis Control [min] | Sowing Control [min] | Total Required Time [min] | Reduction Rate |
|---|---|---|---|---|---|---|---|
| Model 1 (Model 3) & Random & Height: 1.1 m | 0.28 ± 0.03 | 0.272 ± 0.003 | 0.14 ± 0.03 | 55 ± 16 | 1.6 ± 0.5 | 57 ± 17 | Baseline |
| Model 2 (Model 3) & Greedy & Height: 0.8 m | 0.31 ± 0.02 | 0.267 ± 0.007 | 0.075 ± 0.01 | 39 ± 11 | 1.6 ± 0.5 | 41 ± 12 | −28% |
| Model 3 & 2-Opt_RRP & Height: Highest | 0.31 ± 0.04 | 0.274 ± 0.004 | 0.068 ± 0.01 | 26 ± 18 | 1.6 ± 0.5 | 28 ± 19 | −51% |