1. Introduction
As the livestock industry undergoes structural transformation, the sheep industry—recognized as a grain-saving livestock sector—has made large-scale production a key benchmark for modernizing animal husbandry. Sheep are raised using various methods, including mixed, pasture-based, and stall feeding, with stall feeding offering advantages in improving production scale and standardization. However, current stall-feeding practices heavily depend on manual labor, leading to high workloads and increased risk of zoonotic disease transmission. In response to emerging policies, intelligent unmanned feeding robots are becoming a promising development direction. Although existing automated feeding systems—such as track-mounted or bicycle-type feeders—help reduce labor intensity, they face limitations including high infrastructure requirements, high costs, limited adaptability to individualized feeding, and low levels of automation. They still require on-site human intervention for parameter adjustment and system monitoring. To address these challenges, there is an urgent need to develop advanced intelligent feeding robots equipped with environmental sensing, autonomous decision-making, precise localization, and target recognition capabilities. These robots should autonomously navigate into and out of sheep sheds, identify pen markers, integrate with weighing sensors for accurate and individualized feeding, and incorporate obstacle avoidance to ensure safe and efficient operations. Such systems would significantly enhance feeding precision, reduce labor demands, and accelerate the intelligent transformation of sheep farming.
Accurate environmental perception inside and outside the sheep pens, along with robust navigation capabilities in livestock farm environments, are critical technologies for intelligent feeding robots. These serve as foundational guarantees for achieving precise, efficient, and intelligent feeding. Key tasks include slope detection and real-time obstacle avoidance.
The Lely Vector intelligent feeding system developed by the Dutch company Lely (Maassluis, The Netherlands) [1] can perform autonomous navigation tasks in livestock areas. The device is equipped with a feed trough height monitoring system, which can automatically trigger precise feeding programs based on pre-set thresholds. D. Piwcznski et al. [2] proposed an automated milking system that combines milking robots with feeding stations to effectively increase milk production and milking efficiency. Jangho Bae et al. [3] developed a total mixed ration (TMR) feeding robot capable of carrying up to 2 tons of feed. The robot uses LiDAR and cameras to perceive its environment, relies on RFID for positioning, and allows staff to obtain the robot’s current location via RFID. S. M. Mikhailichenko et al. [4] used a graph-theory-based simulation method to optimize the volume of feed spreaders while considering feed distribution time.
With the development of computer technology, simultaneous localization and mapping (SLAM) is increasingly widely applied to robot localization and mapping.
Researchers have improved the stability and accuracy of SLAM systems in complex environments by enhancing robustness and real-time performance [5,6,7,8,9,10] and through multi-sensor fusion [11,12], while error modeling has also been employed to improve positioning accuracy and robustness. Shen et al. [13] proposed a LiDAR-inertial SLAM system based on the LIO-SAM framework, further improving matching accuracy. Cao et al. [14] contributed to lightweight design with their BEV-LSLAM algorithm. Lin et al. subsequently proposed the R2LIVE [15] and R3LIVE [16] algorithms to improve localization accuracy, while Wang et al. [17] proposed the F-LOAM algorithm, which performs scan-to-map pose optimization on the basis of the Ceres Solver [18] to enhance system operational efficiency. In path planning, studies have integrated multiple strategies to improve planning efficiency and path quality [19,20,21,22,23], incorporated machine learning and intelligent strategies to enhance adaptability [24,25,26], combined door-recognition visual perception methods [27] to enhance practicality and accuracy, improved robustness and practicality [28,29,30], and enhanced search efficiency to reduce invalid nodes for obstacle avoidance and optimal path construction [31,32,33,34].
However, despite these advancements, several key limitations persist in the existing research. (1) Limited adaptability to complex farm environments: many existing robotic feeding systems, such as the Lely Vector system [1] and Jangho Bae’s TMR feeding robot [3], perform well in controlled environments but are less effective when deployed in dynamic, complex farm conditions where obstacles and terrain vary significantly; these systems still require significant human intervention for parameter adjustment and system monitoring. (2) Lack of integration of multi-sensor fusion for robust navigation: although advances in SLAM and multi-sensor fusion have been made (e.g., the LIO-SAM, BEV-LSLAM, and R2LIVE algorithms), current systems still struggle with real-time decision-making and robust environmental perception, especially in unpredictable environments. Despite the progress made by Shen et al. [13] and Cao et al. [14], the robust integration of LiDAR, cameras, and environmental sensors for dynamic obstacle avoidance and precise path planning remains a challenge.
This research aims to address these limitations by developing an intelligent feeding robot that integrates advanced multi-sensor fusion, robust SLAM navigation, and autonomous decision-making capabilities to enable individualized feeding in complex, dynamic farm environments. Unlike current systems, our approach focuses on the seamless integration of various sensors for real-time obstacle avoidance, dynamic path planning, and precise feeding control, with the goal of reducing labor intensity and improving operational efficiency in modern sheep farming.
2. Materials and Methods
This feeding robot utilizes advanced technologies to automate the feeding of sheep, as shown in Figure 1. It adopts a modular design architecture, with the control unit centered around a YCT-IPC-065 industrial computer (Shenzhen Yanchengtuo Technology Co., Ltd., Shenzhen, China). Equipped with a laser radar (Livox, Shenzhen, China), the robot can calculate the target’s position in 3D space and perform 3D modeling. The sensor system is a core component of the robot, consisting of a front camera Astra Pro Camera (ORBBEC, Shenzhen, China), a side camera Drive Free Camera (Yahboom, Shenzhen, China), a temperature and humidity sensor, a weighing sensor, and a rail-mounted weighing module. The front camera integrates the high-precision distance measurements from the laser radar with the color information from the camera, generating a point cloud that contains both spatial location and color features. The side camera is used to identify pen numbers within the sheep shed. The temperature and humidity sensor monitors the environmental conditions in real time. The weighing sensor and the rail-mounted weighing module monitor the feed weight in the silo, enabling precise control of feed dispensing. The central control unit coordinates the operation of all components and processes data to make decisions, while the power supply system ensures continuous and stable operation of the robot. Through the coordinated function of these hardware components, the feeding robot can efficiently and autonomously complete sheep feeding tasks.
The software framework of the sheep feeding robot is divided into five functional nodes: environmental perception, map building, path planning, motion control, and target recognition. The structure is illustrated in Figure 2.
- (1)
Environment sensing node: collects environmental information and robot position data using the LiDAR, IMU, wheel odometry, and cameras.
- (2)
Map construction node: the acquired environmental data are used to construct a 3D map of the surrounding environment and a point-cloud-rendered map after fusing the LiDAR and camera data; the sheep feeding robot localizes itself based on this map model and projects its position. Using the Octomap function package (v1.9.8), the 3D point cloud map is converted into a 3D raster map while retaining the key spatial information of the 3D environment model.
- (3)
Path planning node: the raster map is visualized in Rviz and the start and goal points are set. The path planning node receives the map data sent by the map module, and the global path planning algorithm ensures that the robot efficiently selects the best path from the start point to the goal. As the robot advances along this path, it acquires point cloud data in real time, and the local path planning algorithm avoids dynamic obstacles by adjusting the linear and angular velocities.
- (4)
Motion control node: the industrial computer sends control commands that pass acceleration, angular velocity, and other information to the motor drive node, and the drive node controls the motors accordingly to realize the robot’s motion.
- (5)
Target recognition node: the entrance of the sheep shed is uphill and the exit is downhill. The industrial computer performs slope detection on the ground data acquired by the LiDAR to determine whether the robot has reached the entrance or exit of the sheep shed. The industrial computer also acquires images of the sheep pen signs inside the shed through the side camera and uses the YOLOv8s model to recognize the pen signs, enabling accurate feed dispensing.
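A minimal sketch of how the slope-detection step could be realized is given below: a plane is fitted to LiDAR ground points by least squares, and the tilt of its normal gives the slope angle. The function names and the threshold value are illustrative assumptions, not the robot's actual implementation.

```python
import numpy as np

def estimate_ground_slope(ground_points: np.ndarray) -> float:
    """Estimate the ground slope (degrees) from an N x 3 array of LiDAR ground points.

    A plane z = a*x + b*y + c is fitted by least squares; the slope is the
    angle between the plane normal (-a, -b, 1) and the vertical axis.
    """
    x, y, z = ground_points[:, 0], ground_points[:, 1], ground_points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, z, rcond=None)
    normal = np.array([-a, -b, 1.0])
    cos_tilt = normal[2] / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(cos_tilt)))

# Illustrative decision rule: flag the shed entrance/exit ramp when the slope
# exceeds a threshold (the threshold value is an assumption, not from the paper).
SLOPE_THRESHOLD_DEG = 5.0

def at_ramp(ground_points: np.ndarray) -> bool:
    return estimate_ground_slope(ground_points) > SLOPE_THRESHOLD_DEG
```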
Camera lenses and sensors may introduce radial and tangential distortions, causing straight lines to appear curved or object shapes to be rendered inaccurately, so the intrinsic parameters of the camera must be obtained accurately and the distortion corrected through them. First, a black-and-white checkerboard with 11 × 8 internal corner points and 0.02 m square side length is prepared.
The camera intrinsics were solved using the ROS camera calibration package: after the number of internal corner points and the square size of the checkerboard are entered, the calibration window appears. The checkerboard is moved back and forth, left and right, and up and down, and tilted in different directions, until the "X", "Y", "Size", and "Skew" indicators all turn green. "CALIBRATE" is then clicked to obtain the camera intrinsics, and "SAVE" writes the calibration results of the front and side cameras to the "ost.yaml" file.
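For illustration, the intrinsics saved in "ost.yaml" can later be used to correct distortion in software. The sketch below assumes the file follows the ROS camera_info YAML layout (camera_matrix and distortion_coefficients fields) and uses OpenCV; the image file path is a placeholder.

```python
import cv2
import numpy as np
import yaml

def load_intrinsics(path="ost.yaml"):
    """Load the camera matrix and distortion coefficients from a ROS-style
    calibration file (field names follow the camera_info YAML convention)."""
    with open(path, "r") as f:
        calib = yaml.safe_load(f)
    K = np.array(calib["camera_matrix"]["data"], dtype=np.float64).reshape(3, 3)
    dist = np.array(calib["distortion_coefficients"]["data"], dtype=np.float64)
    return K, dist

def undistort(image, K, dist):
    """Correct radial and tangential distortion using the calibrated intrinsics."""
    h, w = image.shape[:2]
    new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
    return cv2.undistort(image, K, dist, None, new_K)

if __name__ == "__main__":
    K, dist = load_intrinsics()
    frame = cv2.imread("front_camera_frame.png")   # path is illustrative
    corrected = undistort(frame, K, dist)
    cv2.imwrite("front_camera_frame_undistorted.png", corrected)
```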
Livox’s official open-source livox_lidar_camera_calibration package is used for joint calibration of the front camera and the LiDAR. The calibration process is as follows: (1) prepare an 80 × 80 cm black-and-white checkerboard with 9 internal corners and small squares of 0.08 m side length; (2) install the source code in the ROS workspace (ROS1) and compile it; (3) write the obtained front-camera internal parameters to a file; (4) place the checkerboard at different positions, record 10 s of LiDAR point cloud data and camera images at each position, and give the point cloud data and the camera image recorded at the same position identical names; (5) run the program, click the four corners of the checkerboard in order, record the coordinates of the four corners in each photo, and write the coordinates of all corners in the photos into a file; (6) run the program to convert the LiDAR point cloud packages in the folder into visualizable files, open each PCD file, read the x, y, and z coordinates of each corner point in the order in which the corners were picked, and write the coordinates of all corner points in the PCD files into a file; (7) run the command in the workspace to iteratively process the data in the files and solve for the extrinsic parameters between the camera and the LiDAR.
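In essence, the final step solves a perspective-n-point problem between the checkerboard corners clicked in the photos (2D pixels) and the same corners read from the PCD files (3D LiDAR coordinates). A minimal OpenCV sketch of that step is given below; the intrinsic values, file names, and output format are placeholders, not the values produced by the actual calibration.

```python
import cv2
import numpy as np

# Front-camera intrinsics from the ost.yaml calibration (placeholder values).
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # use the calibrated distortion coefficients in practice

# Checkerboard corners clicked in the photos (pixel u, v) and the same corners
# read from the PCD files (LiDAR x, y, z); file names are illustrative.
image_points = np.loadtxt("corners_image.txt").astype(np.float64)   # shape (N, 2)
lidar_points = np.loadtxt("corners_lidar.txt").astype(np.float64)   # shape (N, 3)

# Solve for the rigid transform that maps LiDAR coordinates into the camera frame.
ok, rvec, tvec = cv2.solvePnP(lidar_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)        # 3 x 3 rotation (LiDAR -> camera)
T = np.eye(4)
T[:3, :3], T[:3, 3] = R, tvec.ravel()
print("Extrinsic matrix (LiDAR -> camera):\n", T)
```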
The pose of the sheep feeding robot is expressed in a Cartesian coordinate system, and a global coordinate system and a robot coordinate system are usually used to describe the position of the robot. The x-axis of the robot coordinate system points to the front of the robot, and the orientation angle θ of the robot represents the rotation angle of the body coordinate system relative to the global coordinate system; the mapping from the body coordinate system to the global coordinate system can be represented by an orthogonal rotation matrix, as shown in Equation (1).
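For reference, the conventional planar rotation matrix for an orientation angle θ is shown below; this is the standard form such a mapping takes, reproduced here for readability rather than copied verbatim from the paper's Equation (1).

```latex
R(\theta) =
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix}
```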
The robot is driven by differential rear wheels; its differential kinematic model is shown in Figure 3.
The pose of the sheep feeding robot at a given moment can be expressed as Equation (2):
According to the principles of statics, the forward kinematic equation for the differential motion of the sheep feeding robot can be obtained as shown in Equations (3) and (4):
In the global coordinate system, the coordinates and motion state of the point R approximately represent the robot’s position. Therefore, the robot’s position can be obtained by integrating and solving Equation (4) as follows:
The trajectory of the robot over a short time interval is approximated as a straight line, and the robot is restricted to purely forward or rotational motion, as shown in Figure 4.
Within the time interval Δt, the robot’s pose transformation can be regarded as a combination of a translation over a distance Δd and a rotation through an angle Δθ. In the global coordinate system, the change in the coordinates of the robot’s center of gravity from time k to time k + 1 is given by Equation (6):
The pose of the robot at time k + 1 can be expressed as Equation (7):
According to Equation (2), the linear velocity and angular velocity at the robot’s center of gravity are (8) and (9), respectively:
When ω = 0, the robot performs linear motion and its trajectory is a straight-line segment; when ω ≠ 0, the trajectory of the robot is a curve. According to Equations (2), (3), (8) and (9), the inverse kinematics of the sheep feeding robot can be expressed as:
in other words:
In the motion control of the sheep feeding robot, the initial linear and angular velocities at the robot’s center of mass are set according to the actual situation, and the corresponding velocities of the left and right driving wheels are calculated according to Equations (10) and (11). The driving-wheel speeds are regulated through the motor speeds, and Equation (12) gives the relationship between the rotational speed and the driving-wheel velocity, where n_L and n_R are the rotational speeds of the left and right drive wheels, respectively.
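A minimal sketch of this inverse-kinematics step is given below, using the standard differential-drive relations to convert the commanded linear and angular velocity at the robot's center into wheel velocities and motor speeds; the wheel track and wheel radius values are placeholders, not the robot's actual parameters.

```python
import math

WHEEL_TRACK = 0.60   # distance between the two drive wheels, m (placeholder)
WHEEL_RADIUS = 0.15  # drive-wheel radius, m (placeholder)

def wheel_speeds(v: float, omega: float):
    """Standard differential-drive inverse kinematics.

    v     : linear velocity at the robot center, m/s
    omega : angular velocity at the robot center, rad/s
    Returns (v_left, v_right) wheel linear velocities in m/s and
            (n_left, n_right) wheel rotational speeds in rev/min.
    """
    v_left = v - omega * WHEEL_TRACK / 2.0
    v_right = v + omega * WHEEL_TRACK / 2.0
    # v = 2*pi*r*n/60  =>  n = 60*v / (2*pi*r)
    n_left = 60.0 * v_left / (2.0 * math.pi * WHEEL_RADIUS)
    n_right = 60.0 * v_right / (2.0 * math.pi * WHEEL_RADIUS)
    return (v_left, v_right), (n_left, n_right)

# Example: 0.5 m/s forward while turning at 0.2 rad/s.
(vl, vr), (nl, nr) = wheel_speeds(0.5, 0.2)
```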
The calibration relationship between the LiDAR and the front camera associates the laser data with the image data, so that the point cloud of the 3D LiDAR can be rendered in color and a realistic map of the ground environment can be obtained. The point cloud data of the LiDAR and the image of the front camera are acquired; the x, y, z coordinates and intensity value of each point are stored in a loop; and, using the intrinsic and extrinsic parameters obtained from the joint calibration of the LiDAR and the front camera, the point cloud is transformed from the LiDAR coordinate system to the camera coordinate system by matrix multiplication. The position of each point in the camera coordinate system is computed, only points that project inside the image and have a forward coordinate greater than 0 are kept, the color information is read from the image at the corresponding pixel, and the RGB color is assigned to each valid point. The colored point cloud is then transformed from the camera coordinate system to the world coordinate system, converted to the ROS message format, and published as a point cloud message through ROS.
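This coloring procedure can be summarized by the following numpy sketch, which assumes the intrinsic matrix and the LiDAR-to-camera extrinsic transform from the joint calibration are already available; the conversion to a ROS message and the publishing step are omitted.

```python
import numpy as np

def colorize_point_cloud(points_lidar, image, K, T_cam_lidar):
    """Attach RGB values from the front-camera image to LiDAR points.

    points_lidar : (N, 3) points in the LiDAR frame
    image        : (H, W, 3) image from the front camera
    K            : (3, 3) camera intrinsic matrix
    T_cam_lidar  : (4, 4) extrinsic transform, LiDAR frame -> camera frame
    Returns an (M, 6) array of [x, y, z, r, g, b] for points visible in the image.
    """
    N = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((N, 1))])
    pts_cam = (T_cam_lidar @ homo.T).T[:, :3]          # points in the camera frame

    in_front = pts_cam[:, 2] > 0                       # keep points with positive depth
    pts_cam = pts_cam[in_front]
    kept_xyz = points_lidar[in_front]

    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                        # perspective division
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)

    h, w = image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # keep pixels inside the image
    rgb = image[v[inside], u[inside]]

    return np.hstack([kept_xyz[inside], rgb.astype(np.float64)])
```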
The 3D maps constructed by the 3D LiDAR with the FAST-LIO2 algorithm cannot be used directly for navigation. The 3D point cloud data are therefore converted into a 2D raster map by Octomap_server, which preserves the height information of the environment model. The specific scheme is shown in Figure 5.
In the ROS navigation system, the raster map consists of several small grid cells, each of which can store different values. Typically, white areas represent passable areas, corresponding to a raster value of 0; black areas represent obstacles, with a raster value of 100; and gray areas represent unknowns, where passability is still unclear, corresponding to a raster value of −1.
Converting to the 2D raster map requires configuring the Octomap_server startup file according to the point cloud topic and the Z-axis height. The point cloud topic of the FAST-LIO2 algorithm is “camera_init”, and the overall height of the sheep-feeding robot is 1.54 m; the Z-axis projection range is therefore adjusted according to this height.
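For intuition, the projection performed by Octomap_server amounts to keeping only map points whose height lies within the configured Z range and marking the corresponding 2D cells as occupied. The sketch below is illustrative only (the grid extent, resolution, and Z limits are assumptions); in practice the conversion is done by the octomap_server node itself.

```python
import numpy as np

def project_to_occupancy_grid(points, z_min=0.0, z_max=1.54, resolution=0.05,
                              x_range=(-20.0, 20.0), y_range=(-20.0, 20.0)):
    """Project 3D map points within [z_min, z_max] onto a 2D occupancy grid.

    Cell values follow the ROS convention described above: 100 = occupied,
    -1 = unknown (free cells would be set to 0 by ray casting, omitted here).
    """
    width = int((x_range[1] - x_range[0]) / resolution)
    height = int((y_range[1] - y_range[0]) / resolution)
    grid = np.full((height, width), -1, dtype=np.int8)      # start as unknown

    mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
    xs = ((points[mask, 0] - x_range[0]) / resolution).astype(int)
    ys = ((points[mask, 1] - y_range[0]) / resolution).astype(int)
    valid = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)
    grid[ys[valid], xs[valid]] = 100                         # mark as occupied
    return grid
```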
Sheep feeding robots require good path planning to achieve intelligent operation. The robot loads at the loading point and then travels to the designated sheep shed, encountering both known and unknown obstacles along the way. An improved bidirectional RRT* algorithm is proposed as the global path planning method for the mobile robot, and an improved Dynamic Window Approach (DWA) is used as the local path planning strategy.
The artificial potential field method is introduced, mainly for obstacle avoidance and target guidance. The core of this method is to construct an attractive potential field at the target point and a repulsive potential field around the obstacles according to the distribution of the environment map and obstacles, and the robot moves along the direction of the resultant of the attractive and repulsive forces.
The attractive potential energy is positively correlated with the Euclidean distance between the robot and the target point. The attractive potential energy U_att is expressed by Equation (13), where k_att is the gain coefficient of the attractive field and ρ(q, q_goal) is the Euclidean distance between the robot’s position q and the target point q_goal, with the direction pointing from the robot’s position toward the target point. The attractive force F_att can be expressed by Equation (14).
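For reference, the attractive potential and attractive force in the artificial potential field method conventionally take the following form (written with generic symbols; the paper's Equations (13) and (14) are not reproduced verbatim):

```latex
U_{att}(q) = \frac{1}{2}\, k_{att}\, \rho^{2}(q, q_{goal}), \qquad
F_{att}(q) = -\nabla U_{att}(q) = k_{att}\,\bigl(q_{goal} - q\bigr)
```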
An obstacle generates repulsive potential energy acting on the mobile robot. To prevent the potential fields produced by different obstacles from interfering with one another, each obstacle is assigned a range of influence: the repulsive force acts only when the mobile robot enters this range and is zero otherwise. The magnitude of the repulsive potential energy is negatively correlated with the Euclidean distance between the mobile robot and the obstacle. The repulsive potential energy function U_rep is expressed by Equation (15), where k_rep is the repulsive potential field gain coefficient, which adjusts the magnitude and influence range of the repulsive potential field generated by the obstacle; ρ(q, q_obs) denotes the nearest distance from the robot’s position q to the obstacle; and ρ_0 is the limiting distance. The repulsive potential energy is zero when ρ(q, q_obs) is greater than the limiting distance ρ_0; in other words, the repulsive force is generated only for obstacles within the limiting distance.
The repulsive force is the negative gradient of the repulsive potential energy, which can be expressed by Equation (16).
During its movement, the robot is subject to the repulsive forces of multiple obstacles within their ranges of influence, and its distance to each obstacle changes continuously. If, at a given moment, there are n obstacles within the range of influence, the total potential energy acting on the robot is given by Equation (17). According to this resultant potential energy, the resultant force acting on the robot is given by Equation (18):
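Likewise, the repulsive potential, repulsive force, and resultant quantities conventionally take the following standard forms (generic symbols; they may differ in detail from the paper's Equations (15)-(18)):

```latex
U_{rep}(q) =
\begin{cases}
\dfrac{1}{2}\, k_{rep}\left(\dfrac{1}{\rho(q, q_{obs})} - \dfrac{1}{\rho_0}\right)^{2}, & \rho(q, q_{obs}) \le \rho_0 \\[2ex]
0, & \rho(q, q_{obs}) > \rho_0
\end{cases}
```

```latex
F_{rep}(q) = -\nabla U_{rep}(q), \qquad
U_{total}(q) = U_{att}(q) + \sum_{i=1}^{n} U_{rep,i}(q), \qquad
F_{total}(q) = F_{att}(q) + \sum_{i=1}^{n} F_{rep,i}(q)
```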
The bias probability is dynamically adjusted according to the density of obstacles in the current expansion region: the bias probability is reduced in regions with dense obstacles and increased in regions with fewer obstacles. This effectively reduces the number of collision-detection failures and improves the exploration speed. In addition, the bias probability is adjusted according to the distance to the target point, increasing when the tree is far from the target point and decreasing when it is close, in order to avoid being blocked by obstacles near the target point. Each time a sampling point x_rand is generated, it is biased towards the target point x_goal according to the corresponding bias probability.
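A minimal sketch of such adaptive goal-biased sampling is given below; the specific way obstacle density and goal distance modulate the bias probability is an illustrative assumption, not the paper's exact rule.

```python
import random

def sample_point(goal, bounds, obstacle_density, dist_to_goal,
                 p_min=0.05, p_max=0.4, d_ref=10.0):
    """Goal-biased random sampling with an adaptive bias probability.

    The bias probability is lowered in dense-obstacle regions and raised while
    the tree is still far from the goal (both modulations are illustrative).
    obstacle_density is assumed to be normalised to [0, 1].
    """
    p_goal = p_min + (p_max - p_min) * (1.0 - obstacle_density) * min(dist_to_goal / d_ref, 1.0)
    if random.random() < p_goal:
        return goal                                   # bias the sample toward the goal
    (x_min, x_max), (y_min, y_max) = bounds
    return (random.uniform(x_min, x_max), random.uniform(y_min, y_max))
```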
In open regions, a short step size leads to a short extension distance, which significantly slows down the exploration speed [35]. This study therefore dynamically adjusts the extension step length according to the local obstacle density at the newly extended node: the step length decreases in regions with high local obstacle density and increases in regions with low local obstacle density. The dynamic step size is calculated by Equation (19), in which step_max is the maximum step size and step_min is the minimum step size.
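One plausible realization of Equation (19) is a linear interpolation between the minimum and maximum step sizes driven by the normalized local obstacle density, as sketched below; the interpolation rule and the default values are assumptions, not the paper's exact expression.

```python
def dynamic_step(obstacle_density, step_min=0.2, step_max=1.0):
    """Shrink the RRT* extension step in cluttered regions and grow it in open ones.

    obstacle_density is assumed to be normalised to [0, 1]; the linear
    interpolation below is an illustrative choice.
    """
    density = min(max(obstacle_density, 0.0), 1.0)
    return step_max - (step_max - step_min) * density
```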
- (1)
Introduction of Rectangular Obstacles
Obstacles are usually not perfectly circular or simple geometric shapes; rectangles are among the most common obstacle shapes. Introducing rectangular obstacle handling makes it possible to account for obstacle shape more realistically when selecting the motion trajectory and enables more accurate collision detection to prevent collisions between the robot and these obstacles.
Assuming that the width of the rectangular obstacle is w, its height is h, the coordinates of the rectangle center are (x_c, y_c), and the coordinates of the current sampling point are (x, y), the distance d between the current sampling point and the rectangular obstacle is given by Equation (20):
Determine whether the sampling point is located inside the rectangular obstacle according to Equation (21):
When the distance between the robot and the rectangular obstacle is less than the range of influence ρ_0, the repulsive force of the rectangular obstacle is given by Equation (22):
- (2)
Dynamically adjusting the coefficient of attraction
Dynamically adjusting the attraction coefficient allows the strategy to change according to the relative position of the robot and the target point and the complexity of the environment, improving motion efficiency while guaranteeing safety. When there are few obstacles, the attraction coefficient is raised to twice its original value so that the robot can approach the target quickly; when there are many obstacles, the robot must avoid them more cautiously, and the attraction coefficient is reduced to 0.7 times its original value to increase the relative weight of the repulsive force.
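The two refinements above can be sketched together as follows: an axis-aligned point-to-rectangle distance and inside test for collision checking and repulsion (corresponding in spirit to Equations (20) and (21)), and a rule that scales the attraction gain by the local obstacle count. The obstacle-count threshold and helper names are illustrative assumptions.

```python
import math

def distance_to_rectangle(px, py, cx, cy, width, height):
    """Distance from a sampling point (px, py) to an axis-aligned rectangle
    centred at (cx, cy); returns 0 if the point lies inside the rectangle."""
    dx = max(abs(px - cx) - width / 2.0, 0.0)
    dy = max(abs(py - cy) - height / 2.0, 0.0)
    return math.hypot(dx, dy)

def point_inside_rectangle(px, py, cx, cy, width, height):
    """Inside test used before the distance-based repulsion is applied."""
    return abs(px - cx) <= width / 2.0 and abs(py - cy) <= height / 2.0

def adjusted_attraction_gain(k_att, n_obstacles, sparse_threshold=3):
    """Raise the attraction gain (x2) in sparse regions and lower it (x0.7) in
    cluttered ones, as described above; the obstacle-count threshold is an
    assumption."""
    return k_att * 2.0 if n_obstacles <= sparse_threshold else k_att * 0.7
```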
In the navigation function package, a tf transformation is first introduced to convert data between the LiDAR center coordinate system and the coordinate system at the bottom center of the robot [36]. A startup delay is then introduced so that the navigation module starts later than the map-building module, which allows the navigation module to perform an initial path planning based on the environment information. The navigation framework is shown in Figure 6.
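For illustration, a static tf transform between the LiDAR frame and the robot base frame can be published as in the ROS 1 Python sketch below; the frame names and mounting offsets are placeholder assumptions, not the robot's calibrated values.

```python
#!/usr/bin/env python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

# Publish a static transform between the LiDAR frame and the robot base frame.
if __name__ == "__main__":
    rospy.init_node("lidar_to_base_tf")
    broadcaster = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "base_link"      # robot base (bottom centre), assumed name
    t.child_frame_id = "livox_frame"     # LiDAR centre, assumed name
    t.transform.translation.x = 0.30     # LiDAR mounted 0.30 m ahead of the base centre
    t.transform.translation.z = 1.20     # and 1.20 m above it (placeholder values)
    t.transform.rotation.w = 1.0         # no rotation between the two frames

    broadcaster.sendTransform(t)
    rospy.spin()
```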
4. Conclusions
With the widespread adoption of intelligent technologies in animal husbandry, traditional manual feeding methods can no longer meet the demands for precision and efficiency in modern sheep farming. To address this gap, this study proposes an intelligent robotic feeding system designed to enhance feeding efficiency, reduce labor intensity, and enable precise feed delivery. This system, developed on the ROS platform, integrates LiDAR-based SLAM, YOLOv8s sign recognition, and the bidirectional RRT* algorithm to achieve autonomous feeding in dynamic environments.
Experimental results show that the proposed system achieves centimeter-level accuracy in localization and attitude control, with FAST-LIO2 maintaining an attitude angle error within 1°. Compared to the baseline system, the proposed system reduces node count by 17.67%, shortens path length by 0.58 cm, and reduces computation time by 42.97%. In a 30 kg feeding task, the system demonstrates zero feed wastage, further validating its potential for precise feeding.
However, despite the significant progress achieved by the sheep feeding robot system in indoor operations within the sheep pen, some limitations remain. First, the system relies on LiDAR and cameras for environmental perception, which may encounter stability issues under complex lighting conditions and in extreme environments. Future research could explore the integration of additional sensor types, such as infrared or ultrasonic sensors, to enhance the system’s robustness. Second, while the improved bidirectional RRT* algorithm has enhanced path planning, its efficiency remains low in areas with high-density obstacles. Further optimization of the algorithm is necessary to improve computational efficiency and the responsiveness of path planning. Regarding precise feeding, the current sign recognition model shows reduced accuracy when signs are damaged or obstructed. Future work could consider incorporating image enhancement techniques to improve recognition performance under such conditions. Finally, although a multi-sensor fusion control system was implemented, the system’s real-time performance and stability still need further validation. Future research will focus on improving real-time processing capabilities and ensuring stability in dynamic environments.
Programming a sheep-feeding robot presents greater challenges compared to traditional industrial robots. While industrial robots typically operate in controlled environments, following fixed tasks, sheep-feeding robots must navigate dynamic and unpredictable environments. These include uneven terrain, changing lighting conditions, and moving obstacles, requiring real-time processing of data from multiple sensors such as LiDAR, cameras, and temperature sensors. Furthermore, sheep-feeding robots must handle complex tasks such as dealing with damaged or obstructed signs and performing dynamic obstacle avoidance, placing higher demands on programming. In contrast to industrial robots, sheep-feeding robots need stronger adaptability and more precise control to ensure stable operation in real-time environments.
Overall, programming a sheep-feeding robot is far more complex than programming a traditional industrial robot, involving multi-sensor fusion, real-time decision-making, and efficient operation in uncertain environments. As technology continues to evolve, we believe these challenges will be gradually overcome, promoting the development and application of intelligent farming systems, particularly in the management of large-scale sheep farms, offering significant practical value.