Article

Research on a Visually Assisted Efficient Blind-Guiding System and an Autonomous Shopping Guidance Robot Arm Adapted to the Complex Environment of Farmers’ Markets

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
2 Belt and Road Joint Laboratory on Measurement and Control Technology, Huazhong University of Science and Technology, Wuhan 430074, China
3 Institute of Robotics and Machine Intelligence, Faculty of Control, Robotics and Electrical Engineering, Poznan University of Technology, Piotrowo 3a, 60-965 Poznan, Poland
* Author to whom correspondence should be addressed.
Sensors 2025, 25(12), 3785; https://doi.org/10.3390/s25123785
Submission received: 21 March 2025 / Revised: 6 June 2025 / Accepted: 13 June 2025 / Published: 17 June 2025
(This article belongs to the Section Sensors and Robotics)

Abstract

It is a great challenge for visually impaired (VI) people to shop in narrow and crowded farmers’ markets, yet there is no existing research on guiding them in such markets. This paper proposes the Radio-Frequency–Visual Tag Positioning and Automatic Detection (RFTPAD) algorithm to quickly build a high-precision navigation map. It combines the advantages of visual beacons and radio-frequency signal beacons to accurately calculate the guide robot’s coordinates, correct its positioning error, and simultaneously perform mapping and product information detection. Furthermore, this paper proposes the A*-Fixed-Route Navigation (A*-FRN) algorithm, which controls the robot to navigate along fixed routes and prevents it from making frequent detours in crowded aisles. Finally, this study equips the guide robot with a flexible robotic arm and proposes the Intelligent-Robotic-Arm-Guided Shopping (IRAGS) algorithm to guide VI people in quickly selecting fresh products or to guide merchants in packing and weighing products. Multiple experiments conducted in a 1600 m2 market demonstrate that, compared with the classic mapping method, the accuracy of RFTPAD is improved by 23.9%; compared with the general navigation method, the driving trajectory of A*-FRN is 23.3% shorter; and the efficiency of guiding VI people to select products with the robotic arm is 100% higher than that of searching and touching with a finger.

1. Introduction

According to WHO statistics, there are currently about 253 million people with visual impairment worldwide, including 35.2 million blind people [1]. Although markets are often places where the visually impaired (VI) shop for essentials, most urban farmers’ markets, especially Chinese wet markets, are characterized by crowds of pedestrians, large spaces, and narrow aisles. It is worth noting that in such a complex environment with no tactile paving, it is a considerable challenge for VI people to walk and shop. Currently, most of the existing indoor guidance studies focus on family rooms, shopping malls, and subway stations [2,3], while there is no research related to farmers’ markets.
Researchers generally employ Simultaneous Localization and Mapping (SLAM) to control indoor guide robots to build navigation maps. In large-scale spaces, it is necessary to combine SLAM with fixed tag technology to supplement environmental information to improve mapping accuracy, such as Bluetooth beacons, Wi-Fi signalers, and Radio-Frequency Identification (RFID) tags [4,5,6]. Ahmetovic et al. [7] developed a shopping mall guide system based on Bluetooth beacon positioning. They employed a multivariate regression method to analyze the impact of shopping mall environments and positioning errors on the visually impaired walking along a planned route and proposed a series of technologies to improve the accuracy of the probabilistic positioning algorithm. Ivanov et al. [8] pasted RFID tags on the doors of hospital wards in order to enable the guide system equipped with tag readers to quickly read the room location information stored in them and correct their own posture. Gomes et al. [9] installed a series of visual tags, Bluetooth beacons, and Near-Field Communication (NFC) tags at different locations in a supermarket and designed a large supermarket guide system with a Wi-Fi positioning method to assist VI individuals in independently searching for products in a supermarket. However, the above common tags are not suitable for application in farmers’ markets. For example, the positioning accuracy of RFID and NFC tags is not high; the routing of Bluetooth and Wi-Fi beacons is cumbersome; and visual tags are easily damaged by water stains and blocked by pedestrians.
The path planning algorithms in existing guide methods are basically direct copies of autonomous driving methods. However, unlike normal people who randomly search for products, VI people prefer to shop along fixed routes to a few familiar stalls in farmers’ markets. Common fixed-route navigation methods include electromagnetic tracking, magnetic tape tracking, infrared tracking, visual tracking, etc. [10]. Škrabánek et al. [11] designed a magnetic-signal-based positioning system that applies low-cost magnetic tapes as cruising routes, detects the strength of a magnetic signal through a three-axis magnetometer sensor, and controls the robot’s forward movement and turning. Cruz et al. [12] designed a restaurant service robot based on infrared tracking, utilizing a black line as the robot’s cruising route in the restaurant, and controlling the robot’s forward movement and turning through a line-tracking algorithm. Nevertheless, the above methods are not suitable for farmers’ markets with complex environments—the cost of laying out and maintaining electromagnetic tracking tracks is high, infrared tracking tracks are easily contaminated by pedestrians’ soles, and these tracking methods do not have the function of avoiding pedestrians.
An autonomous shopping robot is a highly intelligent guide device consisting of a navigation car and a robotic arm. It can independently search and pick up target products for VI people and bring products back to them. Pauly et al. [13] designed a mobile robot for supermarkets and shopping malls. After a customer places orders for a desired product through a shopping interface, it immediately navigates to the target shelf, then identifies the target product by radio-frequency tags, and finally controls a robotic arm to pick up the product and put it into the shopping basket. Nechyporenko et al. [14] designed a product-picking robot arm for online shopping. They integrated the centroid normal method into a dual-arm robot system with two grippers, providing a practical solution to the robot picking problem in unstructured environments. Experimental tests showed that this method can control the robot arm to accurately pick up various products of different shapes. Although there are some research results on autonomous shopping robots, most of them are aimed at supermarkets rather than farmers’ markets. Additionally, these robots usually need to be equipped with a high-precision, high-load, and high-cost robotic arm, which is difficult for lower-paid VI people to afford.
According to the above analysis, it can be concluded that the mapping and navigation methods for guide robots in farmers’ markets are different from general guide methods. Fully considering the characteristics of the geometric layout and product features of farmers’ markets, this paper proposes a Radio-Frequency–Visual Tag Positioning and Automatic Detection (RFTPAD) algorithm to quickly build a high-precision navigation map. It improves mapping accuracy by combining RFID tag positioning and visual positioning methods to automatically calibrate the robot’s posture and improves efficiency by employing a classification model to automatically detect and record a large amount of product information. Furthermore, this paper proposes an A*-Fixed-Route Navigation (A*-FRN) algorithm to prevent guide robots from frequently taking long detours in crowded aisles. It plans a unique fixed navigation route for each large stall and controls the guide robot to walk along this route every time it navigates. Finally, this paper equips a guide robot with a robotic arm and proposes an Intelligent-Robotic-Arm-Guided Shopping (IRAGS) algorithm that combines deep learning, kinematic solving, and rapidly exploring random tree methods to intelligently control the robotic arm to guide VI people to quickly select fresh products.

2. Experimental Equipment

Figure 1 shows the hardware of the market guide robot designed in this paper, which consists of two parts: a navigation car and a robotic-arm guidance system. Its parameters are as follows: the length × width × height is 35.4 × 35.4 × 160 cm, the weight is 10.5 kg, and the speed is 0.65 m/s.
The navigation car is mainly composed of the following components: a mobile chassis, a depth camera, an industrial computer, an RFID reader, and a wooden handle. Among them, the mobile chassis can move flexibly in narrow shopping aisles and measures the robot’s posture with encoders and gyroscopes; the depth camera scans the depth information of the geometric environment for functions such as mapping and obstacle avoidance; the RFID reader reads the information stored in the RFID tags; and the wooden handle guides VI people to move forward and turn.
The guidance system is mainly composed of the following components: a robotic arm, a visual camera, an electric push rod, a varistor, and an electronic scale. Among them, the robotic arm can guide VI people or merchants to select products; the retractable electric push rod can point out products; the visual camera locates and identifies products, pedestrians, merchants, and VI people; and the varistor detects whether a VI person’s finger is holding the robotic arm.
The model numbers, manufacturers, cities, and countries of the above equipment are shown in Table 1.
The guide robot costs about 21,600 RMB in total: the robotic arm is 6000 RMB, the mobile chassis is 7000 RMB, the depth camera is 1000 RMB, the industrial computer is 5000 RMB, the RFID reader and tags are 1200 RMB, the electric push rod is 1000 RMB, and other parts are 400 RMB.

3. High-Precision Mapping and Intelligent Navigation Strategy

3.1. Synchronous High-Precision Mapping and Automatic Product Detection

As mentioned in the Introduction, employing fixed tag technology to supplement environmental information is a common way to improve positioning accuracy. RFID positioning has the characteristics of low cost, easy deployment, and strong anti-pollution ability. Assume that the received signal strength indicator of the RFID tag read by the tag reader at time t is RSSI. The RSSI value can be converted to a distance according to Equation (1) [15,16]:
d = 10^{\frac{\left| \mathrm{RSSI} - A \right|}{10 \times n}}
where A is the signal strength value when the tag is 1 m away from the reader (A = −45.8), n is the signal attenuation coefficient (n = 3), and |·| denotes the absolute value.
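As a minimal, illustrative sketch of Equation (1), assuming the RSSI reading is in dBm and using the stated values of A and n (the function name is ours, not part of RFTPAD):

```python
def rssi_to_distance(rssi: float, a: float = -45.8, n: float = 3.0) -> float:
    """Convert an RFID RSSI reading (dBm) to an estimated distance (m), Equation (1)."""
    return 10 ** (abs(rssi - a) / (10.0 * n))

# Example: a tag read at -60 dBm is estimated to be roughly 3 m away.
print(round(rssi_to_distance(-60.0), 2))
```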
Although RFID positioning technology is relatively mature, the RFID signal is easily blocked or reflected by obstacles in the environment and interfered with by other wireless signals. The measured RSSI value is then inconsistent with the actual RSSI, which leads to non-negligible errors in the RFID positioning process.
The principle of the visual positioning method based on artificial beacons is to place a beacon with known information at a fixed point and use a visual camera to detect the relative position of the robot and the beacon. Visual cameras have the characteristics of rich information collection, good robustness, and strong anti-interference. Nevertheless, the visual beacons placed in farmers’ markets are easily covered by stains or blocked by pedestrians [17].
This paper combines the advantages of artificial beacons and the RFID localization method to design the RFTPAD algorithm, which can accurately calculate the robot’s position and correct accumulated positioning errors during mapping. The algorithm proceeds as follows: RFTPAD first employs the general Cartographer method [18,19] to control the guide robot to build a map. Simultaneously, it runs the RFID reader to receive the nearest RFID tag signal and obtain the tag’s pre-stored posture. Subsequently, it controls the robot to automatically approach the RFID tag. Finally, it applies the visual positioning method to calculate the robot’s posture and correct the accumulated positioning error.
Additionally, due to the wide variety of commodities in farmers’ markets, it is time-consuming and labor-intensive to manually count their names and coordinates. Thus, RFTPAD also calls a classification model to automatically detect product categories and coordinates while it controls the guide robot to build the map.

3.1.1. Position Calibration Method Based on RFID Tags

This paper defines three coordinate systems based on the principle of perspective geometry [20], as shown in Figure 2. The two-dimensional image physical coordinate system OiXiYi has its origin Oi at the image plane center, with the Xi-axis and Yi-axis parallel to the two perpendicular edges of the image. The three-dimensional camera coordinate system OcXcYcZc has its origin Oc at the camera center; its Yc-axis coincides with the camera optical axis and intersects the image plane at its center Oi, while the Xc-axis and Zc-axis are parallel to the two image edges. The map coordinate system OXoYoZo has its origin O at the starting point of the map; the Xo-axis and Yo-axis are parallel to the ground plane, and the Zo-axis is perpendicular to it.
Given the distance s between the camera center and RFID tag, as well as the image coordinates (xi, yi) of the RFID tag center, it can be inferred that the coordinates (xc, yc, zc) of the RFID tags relative to the robot in the camera coordinate system are as follows:
x_c = x_i, \quad z_c = y_i, \quad y_c = \sqrt{s^2 - x_i^2 - y_i^2}
Assuming the transformation matrix from the camera coordinate system to the map coordinate system is as follows:
T = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & x \\ \sin\theta & \cos\theta & 0 & y \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
where x and y are the robot coordinates on the map, and θ is the robot angle relative to origin O.
Then, the coordinates No (xo, yo, zo) of the RFID tag in the map coordinate system can be calculated as follows:
\begin{bmatrix} x_o \\ y_o \\ z_o \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & x \\ \sin\theta & \cos\theta & 0 & y \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} x_i \cos\theta - \sqrt{s^2 - x_i^2 - y_i^2}\,\sin\theta + x \\ x_i \sin\theta + \sqrt{s^2 - x_i^2 - y_i^2}\,\cos\theta + y \\ y_i \\ 1 \end{bmatrix}
Given that the real coordinates of the RFID tags in the map coordinate system are N (xn, yn, zn), if there is no positioning error during the robot mapping process, then No and N are the same. Thus, the following equation holds true:
\begin{bmatrix} x_o \\ y_o \\ z_o \\ 1 \end{bmatrix} - \begin{bmatrix} x_n \\ y_n \\ z_n \\ 1 \end{bmatrix} = \begin{bmatrix} x_i \cos\theta - \sqrt{s^2 - x_i^2 - y_i^2}\,\sin\theta + x - x_n \\ x_i \sin\theta + \sqrt{s^2 - x_i^2 - y_i^2}\,\cos\theta + y - y_n \\ y_i - z_n \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
Since θ = arctan(y/x), a system of two equations in the unknowns x and y can be obtained from Equation (5):
\begin{cases} x_i \cos\theta - \sqrt{s^2 - x_i^2 - y_i^2}\,\sin\theta + x - x_n = 0 \\ x_i \sin\theta + \sqrt{s^2 - x_i^2 - y_i^2}\,\cos\theta + y - y_n = 0 \end{cases}
By solving this system of equations, the exact robot coordinates (x, y) in the map coordinate system can be determined.
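A minimal numerical sketch of this calibration step, assuming SciPy is available: it combines Equations (2) and (6) and solves for (x, y) with a root finder, recomputing θ = arctan(y/x) at each iteration. All variable names are illustrative.

```python
import math
from scipy.optimize import fsolve

def solve_robot_pose(s, xi, yi, xn, yn):
    """Solve Equation (6) for the robot's map coordinates (x, y).

    s        -- measured distance between the camera center and the RFID tag
    (xi, yi) -- image coordinates of the RFID tag center
    (xn, yn) -- known map coordinates of the RFID tag
    """
    yc = math.sqrt(s ** 2 - xi ** 2 - yi ** 2)  # depth term from Equation (2)

    def residuals(p):
        x, y = p
        theta = math.atan2(y, x)                # theta = arctan(y / x)
        return (
            xi * math.cos(theta) - yc * math.sin(theta) + x - xn,
            xi * math.sin(theta) + yc * math.cos(theta) + y - yn,
        )

    x, y = fsolve(residuals, x0=(xn, yn))       # use the tag position as the initial guess
    return x, y
```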

3.1.2. Detection of Product Information Based on Transfer Learning

Transfer learning based on deep learning can quickly produce a new model with high identification accuracy by simply adjusting the parameters of a pre-trained model [21]. This study employs transfer learning to build a product identification model. The pre-trained model selected in this paper is MobileNetV2, a convolutional network model capable of accurately identifying more than 1000 types of objects [22]. The network architecture of the model trained by transfer learning is consistent with that of the pre-trained model. Table 2 shows the network structure configuration of the product identification model.
The process of building the product identification model is shown in Figure 3. Firstly, RFTPAD calls the network of MobileNetV2 as the feature extractor for the new model; then, it modifies the fully connected layer of MobileNetV2 into a 394-category classifier; and finally, it inputs the self-made dataset and trains only the classifier parameters to obtain a preliminary new model. The dataset contains various samples of common products in farmers’ markets, and each sample contains 5000 pictures.
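The paper does not name the training framework, so the following is only a sketch of the procedure in Figure 3 using PyTorch/torchvision as one possible implementation: load the pre-trained MobileNetV2, freeze its feature extractor, replace the classifier with a 394-way head, and train only the new classifier. The hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained MobileNetV2 (torchvision >= 0.13 weights API).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

# Freeze the convolutional feature extractor.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a 394-category product classifier.
model.classifier[1] = nn.Linear(model.last_channel, 394)

# Train only the classifier parameters on the self-made market-product dataset.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```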

3.2. Fixed-Route Navigation Strategy Based on A*-FRN Method

As mentioned in the Introduction, general navigation methods are not suited to the complex geometric and shopping environments of farmers’ markets, nor to the shopping habits of VI people. By referring to the A* method [23,24] and classical fixed-route navigation methods [10,25], this paper proposes the A*-FRN algorithm to prevent robots from frequently taking detours in crowded aisles. Compared with general fixed-route cruising methods, it does not require laying a tracking track and has the advantages of low cost, high accuracy, and flexible pedestrian avoidance.

3.2.1. A*-FRN Algorithm

When there are too many pedestrians in an aisle and the guide robot fails to avoid them, the A* method will abandon the optimal global navigation route and plan a longer one. To address this problem, this study proposes the A*-FRN algorithm, which plans a unique fixed navigation route for each large stall and controls the guide robot to walk along this route every time it navigates. The process of A*-FRN is as follows: A*-FRN first divides the map into major areas such as vegetable, fruit, meat, seafood, grain, and oil areas according to product categories and numbers each large stall in each major area (see the numbers in the rectangles in Figure 4); subsequently, it employs the A* method to plan the shortest route from the market gate to the No. 1 large stall in each major area and names each such route in alphabetical order (see the green line in Figure 4); furthermore, it plans the shortest route from the No. 1 large stall in each major area to every other large stall in that area and names each route after the stall number N (see the non-green lines in Figure 4); and finally, by combining these routes, all the fixed routes from the gate to all the large stalls in each major area can be obtained.
Figure 5 is a schematic diagram of A*-FRN controlling the robot to navigate to target stall 2 in the fruit major area. It first calculates the optimal point x from the robot’s location to fixed route a; subsequently, it employs the PID method [26,27] to control the robot to navigate to point x; then, it controls the robot to drive along route a and route 2 to target stall 2; and finally, it utilizes the A* method to navigate to the position where the target product is located. The method for calculating point x is as follows (a short code sketch follows the list):
1. Calculate the route length Lrm from the robot to any point m on fixed route a with the A* method;
2. Calculate the route length Lmn from point m along route a and route 2 to target stall 2;
3. Add Lrm and Lmn to obtain the total length Lsmn = Lrm + Lmn;
4. Compare the Lsmn values of all candidate points and take the smallest one; the corresponding point m is the optimal point x.
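The following is a minimal sketch of steps 1–4 above, assuming the fixed route has been sampled into discrete points, the lengths Lmn have been precomputed, and an A* length function is available; all names are illustrative.

```python
def find_join_point(robot_pos, route_points, lengths_to_stall, astar_length):
    """Pick the point x on the fixed route that minimizes Lsmn = Lrm + Lmn.

    route_points     -- sampled points m along fixed route a
    lengths_to_stall -- precomputed length Lmn from each point m to the target stall
    astar_length     -- function returning the A* route length Lrm between two map points
    """
    best_point, best_total = None, float("inf")
    for m, l_mn in zip(route_points, lengths_to_stall):
        l_rm = astar_length(robot_pos, m)   # step 1
        total = l_rm + l_mn                 # steps 2 and 3
        if total < best_total:              # step 4: keep the smallest Lsmn
            best_point, best_total = m, total
    return best_point, best_total
```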
Figure 5. The schematic diagram of the A*-FRN controlling the robot. The lines represent the routes planned by A*-FRN. The black dot represents the robot, and the other dots represent the end points of the routes.
Particularly, since pedestrians may stand on the planned fixed route, the guide robot cannot always follow the planned route exactly. This study sets up A*-FRN to employ the DWA method [28] to control the robot to avoid obstacles. The robot does not need to return to the fixed route as long as it remains in the same aisle as the fixed route; in that case, A*-FRN employs the A* method to plan a dynamic route for it, which starts from the robot’s current position and ends at the same point as the fixed route in the current aisle. Additionally, the A*-FRN algorithm will control the guide robot to stop moving forward if the aisle is completely blocked, and it resumes navigation along the fixed route once the passage is clear and there is enough space for the robot to pass through.

3.2.2. A* Method

A* is a classical path planning method, which has the advantages of a simple search path and a rapid response to the environment. A* divides a map into N nodes and calculates the cost value from the current node to the next node through a cost function [29,30]. The node with the smallest cost value is the next search node, and ultimately connecting all the selected nodes will result in an optimal path. The cost function of the A* algorithm is as follows:
f(n) = g(n) + h(n)
where f(n) represents the cost value from the initial node to the target node, g(n) represents the actual cost value from the initial node to the current node, and h(n) represents the estimated cost value from the current node to the target node.
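For illustration, a minimal grid-style A* sketch built around the cost function f(n) = g(n) + h(n) of Equation (7); the neighbor and heuristic functions are assumed to be supplied by the map representation, and nodes are assumed to be hashable and comparable (e.g., grid coordinates).

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Minimal A* search using f(n) = g(n) + h(n) from Equation (7).

    neighbors(n) yields (next_node, step_cost); heuristic(n, goal) returns h(n)."""
    open_set = [(heuristic(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)   # node with the smallest f(n)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g_new = g + cost                          # g(n): actual cost so far
            if g_new < best_g.get(nxt, float("inf")):
                best_g[nxt] = g_new
                f_new = g_new + heuristic(nxt, goal)  # f(n) = g(n) + h(n)
                heapq.heappush(open_set, (f_new, g_new, nxt, path + [nxt]))
    return None  # no path found
```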

3.3. Intelligent Guidance and Autonomous Shopping Strategy Based on Robotic Arm

Unlike general guide robots that only have simple navigation and obstacle avoidance functions, this study equips the guide robot for the farmers’ market with a lightweight (1.1 kg), low-cost (6000 RMB) 6-degree-of-freedom flexible robotic arm and designs the IRAGS algorithm to control it. It can not only intelligently guide VI people to quickly select fresh fruits and vegetables but also work with the navigation car to enable autonomous shopping.

3.3.1. Intelligent Guidance for VI People to Select Fresh Products

The process of the IRAGS algorithm guiding VI people to select fresh fruits and vegetables is as follows: firstly, it employs a visual camera to identify relatively fresh target products; subsequently, it controls the robotic arm to touch VI people’s fingers and prompts them through voice to grasp the end effector of the robotic arm; furthermore, it controls the robotic arm to pull VI people’s fingers to the surface center of the target product; and finally, after VI people grab the product, it will control the robot arm to pull their finger to place the selected product on the robot’s built-in electronic scale.
(a) Control the motion of the robotic arm
The Denavit–Hartenberg (D–H) method is a classic method for kinematic modeling and analysis of robotic arms, which can describe the coordinate system relationship and geometric coefficients between adjacent links [31]. This paper applies the D–H method to construct the link coordinate system of the mycobot_280_M5 robotic arm, as shown in Figure 6.
According to the link coordinate system constructed above, the D–H parameters of each joint of mycobot_280_M5 can be calculated, as shown in Table 3. a_i is the link length, which represents the distance from the z_i axis to the z_{i+1} axis along the x_i axis; α_i is the link twist, which represents the angle of rotation from the z_i axis to the z_{i+1} axis about the x_i axis; d_i is the link offset, which represents the distance from the x_{i−1} axis to the x_i axis along the z_i axis; and θ_i is the joint angle, which represents the angle of rotation from the x_{i−1} axis to the x_i axis about the z_i axis.
The D–H coordinate transformation matrix between the two adjacent links is as follows:
T_i^{i-1} = \mathrm{Rot}\!\left(Z_{i-1}, \theta_i\right) \mathrm{Trans}\!\left(Z_{i-1}, d_i\right) \mathrm{Trans}\!\left(X_i, a_i\right) \mathrm{Rot}\!\left(X_i, \alpha_i\right) = \begin{bmatrix} \cos\theta_i & -\sin\theta_i \cos\alpha_i & \sin\theta_i \sin\alpha_i & a_i \cos\theta_i \\ \sin\theta_i & \cos\theta_i \cos\alpha_i & -\cos\theta_i \sin\alpha_i & a_i \sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}
According to the parameters in Table 3 and Equation (8), the transformation matrix between the two adjacent links can be calculated as follows:
T_1^0 = \begin{bmatrix} c_1 & 0 & s_1 & 0 \\ s_1 & 0 & c_1 & 0 \\ 0 & 1 & 0 & d_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad T_2^1 = \begin{bmatrix} c_2 & s_2 & 0 & 0 \\ s_2 & c_2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad T_3^2 = \begin{bmatrix} c_3 & 0 & s_3 & a_3 c_3 \\ s_3 & 0 & c_3 & a_3 s_3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
T_4^3 = \begin{bmatrix} c_4 & s_4 & 0 & 0 \\ s_4 & c_4 & 0 & 0 \\ 0 & 0 & 0 & d_4 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad T_5^4 = \begin{bmatrix} c_5 & 0 & s_5 & 0 \\ s_5 & 0 & c_5 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad T_6^5 = \begin{bmatrix} c_6 & s_6 & 0 & 0 \\ s_6 & c_6 & 0 & 0 \\ 0 & 0 & 1 & d_6 \\ 0 & 0 & 0 & 1 \end{bmatrix}
where ci is the abbreviation of cosθi, and si is the abbreviation of sinθi.
The D–H transformation matrix of the end of the robot arm relative to the base coordinate system is as follows:
T_6^0 = T_1^0\, T_2^1\, T_3^2\, T_4^3\, T_5^4\, T_6^5 = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
where (n_x, n_y, n_z)^T, (o_x, o_y, o_z)^T, and (a_x, a_y, a_z)^T represent the orientation vectors of the end of the robot arm, and p_x, p_y, and p_z represent the components of its position vector. Their specific values are as follows:
n_x = c_6\left[c_1 s_5 s_{23} + c_5\left(s_1 s_4 + c_1 c_4 c_{23}\right)\right] + s_6\left(c_4 s_1 - c_1 c_{23} s_4\right)
n_y = c_6\left[s_1 s_5 s_{23} + c_5\left(c_1 s_4 + s_1 c_4 c_{23}\right)\right] + s_6\left(c_4 c_1 + s_1 c_{23} s_4\right)
n_z = s_4 s_6 s_{23} - c_6\left(c_{23} s_5 + c_4 c_5 s_{23}\right)
o_x = -s_6\left[c_1 s_5 s_{23} + c_5\left(s_1 s_4 + c_1 c_4 c_{23}\right)\right] + c_6\left(c_4 s_1 - c_1 c_{23} s_4\right)
o_y = -s_6\left[s_1 s_5 s_{23} + c_5\left(c_1 s_4 + s_1 c_4 c_{23}\right)\right] + c_6\left(c_1 c_4 c_{23} - s_1 s_4\right)
o_z = s_6\left(c_{23} s_5 + c_4 c_5 s_{23}\right) + c_6 s_4 s_{23}
a_x = s_5\left(s_1 s_4 + c_1 c_4 c_{23}\right) + c_1 c_5 s_{23}
a_y = s_5\left(c_1 s_4 + c_4 c_{23} s_1\right) - c_5 s_1 s_{23}
a_z = c_4 s_5 s_{23} - c_5 c_{23}
p_x = c_1\left(a_2 c_2 + a_3 c_{23}\right) - d_6\left[s_5\left(s_1 s_4 + c_1 c_4 c_{23}\right) + c_1 s_{23} c_5\right] - c_1 s_{23} d_4
p_y = s_1\left(a_2 c_2 + a_3 c_{23}\right) - d_6\left[s_5\left(c_1 s_4 + s_1 c_4 c_{23}\right) + s_1 s_{23} c_5\right] - s_1 s_{23} d_4
p_z = d_1 - c_{23} d_4 - a_2 s_2 - a_3 s_{23} - d_6\left(c_5 c_{23} - c_4 s_5 s_{23}\right)
where s23 and c23 are the abbreviations of sin(θ2 + θ3) and cos(θ2 + θ3), respectively.
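To make the chain of transformations concrete, the following is a minimal NumPy sketch that builds each link transform from Equation (8) and multiplies the six transforms to obtain T_6^0 as in Equation (10); the actual D–H values come from Table 3 and are therefore passed in rather than hard-coded.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Link transform of Equation (8): Rot(z, theta) Trans(z, d) Trans(x, a) Rot(x, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the six link transforms to obtain T_6^0.

    joint_angles -- the six joint angles theta_1..theta_6 (rad)
    dh_table     -- per-joint (d, a, alpha) values taken from Table 3
    """
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # the last column holds the end-effector position (px, py, pz, 1)
```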
The inverse kinematic solution is to calculate the value of each joint variable based on the known position of the end of the robot arm. The central axes of the last three joints of the mycobot_280_M5 robotic arm intersect at one point, which satisfies the Pieper criterion. Therefore, the inverse kinematics of mycobot_280_M5 can be solved by an analytical method. The specific calculation method can be found in reference [32,33].
The rapidly exploring random tree (RRT) is a sampling-based route planning method with the advantages of high efficiency, simplicity, and probabilistic completeness, and it is suitable for various types of multi-degree-of-freedom robotic arms [34]. This paper sets up IRAGS to apply the rapidly exploring random tree to accurately control the motion of the robotic arm. Figure 7 shows IRAGS planning a route from the start point to the target point and controlling the robot arm to move along the route in the ROS (Kinetic Kame) simulation tool Rviz (version 1.12.17).
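The paper plans arm motions with a rapidly exploring random tree planner under ROS. One common way to express this in a ROS Kinetic setup is through the MoveIt Python interface, sketched below; the planning-group name, planner ID, and target pose are illustrative assumptions, not values taken from the paper.

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

# Initialize the MoveIt commander and a ROS node.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("irags_arm_motion", anonymous=True)

# "arm_group" is an assumed planning-group name for the mycobot_280_M5 arm.
group = moveit_commander.MoveGroupCommander("arm_group")
group.set_planner_id("RRTConnectkConfigDefault")  # RRT-based planner in OMPL

# Illustrative end-effector target pose in the base frame (metres).
target = Pose()
target.position.x, target.position.y, target.position.z = 0.15, 0.05, 0.20
target.orientation.w = 1.0

group.set_pose_target(target)
group.go(wait=True)          # plan with the RRT planner and execute the trajectory
group.stop()
group.clear_pose_targets()
```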
(b) Locating products and VI people’s fingers based on monocular vision
Monocular visual localization, with its low cost, low power consumption, and small computational complexity, is commonly applied to locate object coordinates. This study sets the IRAGS algorithm to employ monocular visual localization based on the triangulation method [35,36] to detect the coordinates of VI people’s fingers and of products. As shown in Figure 8, it determines the depth of pixel point P by observing the same point from two different positions (O1, O2) and measuring the viewing angles. After determining one side length and two angles of the triangle, the position of target point P can be calculated. Assuming the coordinate of P in the image physical coordinate system (see Section 3.1.1) is (x, y), its value can be obtained directly from the camera output. Assuming that the Yc coordinate of P in the camera coordinate system (see Section 3.1.1) is yc, its value can be calculated by the triangulation method. Then, point P can be reached by controlling the robotic arm to simultaneously move the distance |yc| along the Yc-axis in the camera coordinate system and the distances |x| and |y| along the Xi-axis and Yi-axis in the image physical coordinate system, respectively.
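The paper does not give its exact triangulation formula; the sketch below only illustrates the two-view geometry of Figure 8, assuming the camera displacement between O1 and O2 and the two observation angles toward P are known.

```python
import math

def triangulate_depth(baseline, angle1, angle2):
    """Estimate the perpendicular distance (depth) to point P observed from two
    positions O1 and O2 separated by `baseline`, where angle1 and angle2 are the
    angles (rad) between the baseline and the two lines of sight."""
    # Law of sines: |O1P| = baseline * sin(angle2) / sin(angle1 + angle2);
    # the depth is then |O1P| * sin(angle1).
    return baseline * math.sin(angle1) * math.sin(angle2) / math.sin(angle1 + angle2)

# Example: a 0.10 m camera shift with two 60-degree sight angles gives ~0.087 m depth.
print(round(triangulate_depth(0.10, math.radians(60), math.radians(60)), 3))
```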
(c) Identifying fresh products based on transfer learning
This study employs transfer learning to build an identification model that distinguishes fresh fruits and vegetables. The specific method is described above in Section 3.1.2. Since the product classification model trained in Section 3.1.2 can already identify various types of fruits and vegetables, this study applies it as the pre-trained model. The self-made training dataset contains 394 types of fresh fruit and vegetable samples with a smooth appearance, obvious color, and standard outline, as well as 394 types of fruit and vegetable samples at various degrees of un-freshness (such as rotten, bruised, or with obvious spots). The dataset was obtained by web crawling, and there are 5000 pictures of each type of fruit and vegetable.

3.3.2. Robot Autonomous Shopping Strategy

During peak shopping hours, the narrow aisles may be so crowded that it is difficult for VI people to pass through. The IRAGS algorithm can also control robots to shop autonomously, saving the physical strength of VI people and meeting their needs for remote shopping.
The schematic diagram of controlling the robot to shop autonomously with the IRAGS algorithm is shown in Figure 9. Firstly, IRAGS runs a pedestrian detection model based on Faster R-CNN [37] to count the pedestrians within 3 m in front of the robot. If the number exceeds the threshold T, IRAGS runs the A*-FRN method to control the robot to navigate autonomously. Subsequently, when the robot arrives at the destination, IRAGS employs the above monocular visual localization method to detect the coordinates of each person near the target stall. If someone is standing behind the target stall and their coordinates are the closest to the target stall coordinates, IRAGS determines that this person is the target merchant. Furthermore, it employs the PID method [26,27] to control the robot to approach the target merchant, controls the end effector of the robotic arm to indicate the target product, and reminds the merchant by voice to place the product on the robot’s built-in electronic scale. Then, IRAGS sends the calculated total price and the merchant’s payment code to the VI person, notifying them to check out. Finally, it drives the robot to bring the products back to the VI person.
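Two pieces of this flow can be expressed compactly, as sketched below: the pedestrian-count check against the threshold T (set to eight in Section 4.3.2) and the choice of the target merchant as the detected person whose coordinates are closest to the target stall. The function names and data structures are illustrative assumptions, not the paper’s implementation.

```python
def decide_shopping_mode(pedestrian_count, threshold_t=8):
    """If more than T pedestrians are detected within 3 m ahead, recommend that
    the robot shops autonomously instead of guiding the VI person through the aisle."""
    return "autonomous_shopping" if pedestrian_count > threshold_t else "guided_walking"

def pick_target_merchant(person_coords, stall_coord):
    """Among the people detected near the target stall, choose the one whose
    coordinates are closest to the stall as the target merchant."""
    return min(
        person_coords,
        key=lambda p: (p[0] - stall_coord[0]) ** 2 + (p[1] - stall_coord[1]) ** 2,
    )
```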

4. Experiment and Discussion

Figure 10a shows the farmers’ market selected for the experiment; its detailed address is given in the figure caption. The area of the market is about 1600 m2, and the aisle width is 2 m. Figure 10b is the navigation map constructed by the RFTPAD algorithm, where the colored polygons correspond to stalls and the pink areas correspond to shopping aisles. In particular, two stalls facing each other across the same aisle are 2 m apart, and the signal range of the RFID tag selected in this study is 8 m. Therefore, installing an RFID tag on only one of each pair of facing stalls meets the requirements of the RFTPAD algorithm. A total of 56 tags were installed in this farmers’ market.

4.1. Comparative Analysis of Mapping Accuracy and Efficiency with RFTPAD

4.1.1. Accuracy Analysis Between RFTPAD and Cartographer Algorithm

In this trial, eight coordinate points Pm (xm, ym) were randomly selected and input as the robot’s navigation targets one by one. The maps constructed by RFTPAD and by the classical Cartographer algorithm [18,19] were used for navigation, respectively, and the navigation data were recorded to compare their accuracy. The absolute navigation error σ1 is defined as the absolute value of the difference between the robot’s theoretical navigation distance Sm and its actual driving distance Sn, i.e., σ1 = |Sm − Sn|, and the average error σ2 is the arithmetic mean of the eight absolute errors.
The navigation error data obtained from experimental statistics are shown in Table 4. The third and fourth columns represent the actual driving distance of the robot with the Cartographer algorithm and RFTPAD, respectively, while the fifth and sixth columns represent the absolute navigation error of the robot with the Cartographer algorithm and RFTPAD, respectively. Comparing the fifth and sixth columns, it can be observed that as the navigation distance increases, the error of the map constructed with RFTPAD is consistently smaller than that of those constructed with the Cartographer algorithm. Furthermore, by calculating the data in the table, the average error values of the robot with RFTPAD and the Cartographer algorithm are σ2R = 0.051 m and σ2C = 0.067 m, respectively. (σ2Cσ2R)/σ2C × 100% = 23.9%, which means that compared to the Cartographer algorithm, RFTPAD reduces the map error by 23.9%.
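For reference, the two error metrics can be computed directly from the recorded distances; the short sketch below uses illustrative names and reproduces the reported relative improvement.

```python
def navigation_errors(theoretical, actual):
    """Per-trial absolute error sigma1 = |Sm - Sn| and their arithmetic mean sigma2."""
    sigma1 = [abs(sm - sn) for sm, sn in zip(theoretical, actual)]
    sigma2 = sum(sigma1) / len(sigma1)
    return sigma1, sigma2

# Relative improvement reported above: (0.067 - 0.051) / 0.067 ~= 23.9%.
print(round((0.067 - 0.051) / 0.067 * 100, 1))
```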

4.1.2. Synchronize Detection of Product Information

The experiment first inputs the image dataset into the product identification model for training. The dataset contains 394 category samples of common products in farmers’ markets, and each sample contains 5000 pictures. Figure 11 shows the accuracy change during model training. Observing the right side of the figure, it can be found that the recognition accuracy of the new model finally stabilized at 95% when the model was trained to convergence.
Figure 12 shows some products detected by RFTPAD while the robot was building the map. The percentages represent the identification accuracy, and the numbers in brackets represent the product coordinates (unit: m). By comparing the products’ actual categories in the figure with the results identified by RFTPAD, it can be seen that the identification results are completely correct and that the accuracy exceeds 95%.
Subsequently, the experiment set up the robot to record product information (category and coordinates) with RFTPAD’s automatic detection method and manual counting method, respectively. A total of six trials were conducted, and the area of the trial site was gradually increased. The steps of the manual counting method are as follows: the mapping personnel manually control the guide robot to go to each type of product, manually check its coordinates, and record them as the product coordinates.
Figure 13 is a statistical chart showing the time taken to detect and record all the product information in the farmers’ markets with the above two methods, respectively. The horizontal axis represents the trial site area; the red and blue columns represent RFTPAD and the manual counting method, respectively; and the cyan column and red line represent their difference. By comparing the red and blue columns as a whole, it is obvious that no matter how large the trial site area is, the time spent with the RFTPAD algorithm is less than that with the manual counting method. By calculating the data in the figure, it can be concluded that the average time of RFTPAD is 36.3% less than that of the manual counting method. By observing the red trend line in the figure, it can be found that the difference in the time taken by the two methods gradually increases as the area of the trial site increases, which indicates that the RFTPAD algorithm has an obvious advantage in large farmers’ markets with a wide variety of products.

4.2. Fixed-Route Navigation Trial with A*-FRN Algorithm

The colored lines in Figure 14 are the fixed routes planned by A*-FRN on a map. Among them, the green lines represent the fixed routes from the gate to the first large stall in each major area, totaling six routes; the other colored lines represent the fixed routes from the first large stall in each major area to the other large stalls in this area, totaling 22 routes; and there are a total of 28 routes.

4.2.1. Analysis of Robot’s Detour Behavior When Aisle Is Crowded

The trial set up the guide robot to navigate in crowded aisles with the A*-FRN algorithm and the A*-DWA method [23,28,30] (A* and DWA algorithm, a classic navigation method combination), respectively. Figure 15 shows the robot’s driving trajectory with the above different methods. It can be seen from Figure 15a that the robot’s initial trajectory (see yellow line) was about to reach the destination, but then it turned away from the destination and took a long detour (see red line). The detour behavior of the robot with the A*-DWA method shows that it did not follow the optimal route. Compared with Figure 15a, it can be found that the trajectory line of the robot with A*-FRN in Figure 15b approaches the destination very directly, and the overall trajectory length is much shorter. By measuring the trajectory lines, it can be concluded that the driving trajectory of the robot with A*-FRN is about 8 m shorter than that of the robot with A*-DWA.

4.2.2. Analysis of Robot’s Driving Trajectory Length When Aisle Is Crowded

Then, the trial set the robot to navigate back and forth between the starting point and the destination and recorded its driving trajectory data: it ran the A*-DWA method when navigating from the starting point to the destination and ran A*-FRN when returning from the destination to the starting point. Each algorithm was tested 20 times, and the navigation distance was set to increase gradually.
The statistical results of the robot’s driving trajectory length are shown in Figure 16. The blue curve and red curve represent the robot’s driving trajectory length with the A*-DWA method and A*-FRN algorithm, respectively. It can be clearly observed that the blue curve is always above the red curve, which means that when the navigation target is the same, the robot’s overall driving length is shorter when it runs A*-FRN than when it runs A*-DWA. Moreover, it can also be observed that in the 2nd, 5th, 9th, 12th, and 16th trials, the ordinate values corresponding to the blue curve are significantly higher than those of the red curve. Based on the analysis above, we can conclude that the reason for the obvious gap between the two is as follows: when the guide robot runs A*-FRN to navigate, it can intelligently judge the crowded conditions and adopt the strategy of navigating along a fixed global route without giving up the originally planned shortest path. By calculating the data in the figure, it can be concluded that the average driving trajectory length of the robot with the A*-FRN algorithm is 23.3% less than that with the A*-DWA method.

4.3. Intelligent Guided and Autonomous Shopping Based on Robotic Arm

4.3.1. Intelligent Selection of Fresh Products with the Assistance of a Robotic Arm

Calibration is a prerequisite for the precise control of the robotic arm. The experiment firstly calibrated the robotic arm and camera. This trial utilized the calibration program of mycobot_280_M5 to set the joint zero position and initialize the potential value of the motor. Then, the trial employed Zhang’s calibration method [38,39] to calibrate the camera mounted on the robotic arm. The camera intrinsic parameters and distortion parameters were calculated as follows:
\mathrm{Intrinsic\ parameters} = \begin{bmatrix} 1193.35 & 0 & 634.72 \\ 0 & 132.16 & 531.59 \\ 0 & 0 & 1 \end{bmatrix}
\mathrm{Distortion\ parameters} = \begin{bmatrix} 0.026 & 0.623 & 0.018 & 0.025 & 1.968 \end{bmatrix}
Furthermore, the experiment performed hand–eye calibration on the robotic arm and camera and solved the transformation matrix between the camera coordinate system and the base coordinate system. The trial firstly utilized a camera mounted on a robotic arm to take multiple images of the same calibration plate in different postures. Subsequently, it calculated the geometric transformation of the robotic arm and camera between different image frames to obtain the transformation matrix. Figure 17 shows the process of the camera collecting calibration plate data at different positions.
The calculated hand–eye transformation matrix is as follows:
\begin{bmatrix} 0.853 & 0.033 & 0.006 & 0.042 \\ 0.051 & 0.836 & 0.031 & 0.096 \\ 0.006 & 0.063 & 0.981 & 0.043 \\ 0 & 0 & 0 & 1 \end{bmatrix}
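The paper does not state which hand–eye solver was used. As one common choice, the sketch below uses OpenCV’s calibrateHandEye (Tsai’s method) to produce a 4 × 4 transformation of the same form as the matrix above; the input pose lists are assumed to come from the arm controller and from estimating the calibration-plate pose in each image (e.g., with cv2.solvePnP and the intrinsic and distortion parameters given earlier).

```python
import cv2
import numpy as np

def hand_eye_calibration(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Eye-in-hand calibration from paired arm poses and calibration-plate poses.

    Each argument is a list of rotations (3x3) or translations (3x1), one per image."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    # Assemble the 4x4 homogeneous hand-eye transformation matrix.
    T = np.eye(4)
    T[:3, :3] = R_cam2gripper
    T[:3, 3] = t_cam2gripper.ravel()
    return T
```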
Subsequently, the experiment tested the IRAGS algorithm’s ability to accurately detect un-fresh products. Figure 18 shows several target products identified by the IRAGS algorithm after the guide robot navigated to its destination. As can be seen from the figure, IRAGS correctly selected the target products from various fruits and marked them with large yellow boxes, and it marked the less fresh target products with small red boxes. A careful observation of the products in the small boxes shows that they have obvious black spots, an abnormal appearance color, and even damaged outer skin, while the target products not marked with small boxes generally have a smooth appearance, a bright color, and a standard outline. In particular, the data in the figure show that IRAGS can identify non-fresh products with an accuracy of over 96%.
Next, the trial arranged for the VI volunteers to select products with the general method (searching and touching with their fingers) and with the assistance of the robotic arm, respectively, and it recorded the time taken to select the fresher products. Although the selected volunteers are completely blind, they have rich experience in selecting commodities. A total of eight trials were conducted, and 2 kg of a certain type of relatively fresh product was selected in each trial. The selected products were apples, bananas, small mangos, dates, potatoes, carrots, green peppers, and cherry tomatoes.
Figure 19 shows the process of the IRAGS-controlled robotic arm pulling a VI person’s fingers to pick an apple: Figure 19a shows IRAGS accurately locating the fingers and controlling the retractable push rod to touch them; Figure 19b shows the robotic arm successfully pulling the VI person’s finger to touch the target product; and Figure 19c shows the robotic arm guiding the VI person’s fingers to place and weigh the selected product on the robot’s built-in electronic scale.
Figure 20 shows the time statistics of the VI people who selected products with the general method and the assistance of the robotic arm, respectively. It can be clearly observed that the time spent by VI people in selecting products with the assistance of the robotic arm is always much less than that of the general method. Specifically, there is a significant difference in the time it took for VI people to select smaller items (mangos, cherry tomatoes, and jujubes) with the two different methods. Additionally, by calculating the data in the figure, it can be concluded that the average time spent by VI people to select various products with the assistance of the robotic arm is 50% of the time spent with the general method.

4.3.2. Autonomous Shopping Based on Robotic Arm

The experiment first conducted multiple robot obstacle avoidance tests in aisles with different crowd densities to determine the threshold T. During the experiment, it was found that when the number of pedestrians within 3 m in front of the robot was greater than eight, the robot frequently turned to avoid pedestrians and even frequently changed the global navigation route. Therefore, the experiment set threshold T to eight.
Subsequently, the trial arranged for the guide robot to automatically detect the number of pedestrians within 3 m ahead. The two sub-images in Figure 21 show the congestion status in the aisles as detected by the IRAGS algorithm. The number of pedestrians detected in Figure 21a is five, which is less than the threshold T, so IRAGS determined that this aisle is not crowded; on the contrary, the number of pedestrians detected in Figure 21b is nine, so IRAGS determined that this aisle is crowded. Furthermore, as can be observed from Figure 21, IRAGS correctly detected every pedestrian in the aisles and counted their number. This is consistent with the actual situation, which proves that IRAGS can indeed accurately determine the congestion status of an aisle.
When IRAGS determined that the aisle was crowded, it recommended that the VI people give an instruction to the robot to shop autonomously. Figure 22 shows the target merchants identified by IRAGS, with an identification accuracy of 100%. Furthermore, observing Figure 22b, it can be found that IRAGS accurately selected pedestrians and merchants, indicating that IRAGS can indeed accurately distinguish merchants from the crowd without being disturbed by pedestrians.
Then, the trial set the guide robot, under the control of IRAGS, to autonomously approach the target merchant and remind them to select and weigh the product indicated by the robotic arm. Figure 23a shows the IRAGS-controlled robotic arm pointing out the relatively fresh target product for the merchant; Figure 23b shows the merchant grabbing the product according to the indication of the robotic arm.

5. Conclusions

This paper systematically designed a novel guide robot for a farmers’ market, which has the capabilities of quickly building high-precision maps, fixed-path navigation, intelligent robotic arm guidance, and autonomous shopping.
Aimed at the complex dynamic environment of farmers’ markets, this study proposes the RFTPAD algorithm, which innovatively employs RFID-based artificial visual tags to calibrate a robot’s posture and applies deep learning to automatically detect and record a large amount of product information. Multiple trials conducted in a 1600 m2 market demonstrate that the accuracy of the map built with RFTPAD is 23.9% higher than that built with the classical Cartographer algorithm. What is more, the average time spent to detect products with RFTPAD is reduced by 36.3% compared with the manual counting method. Additionally, in terms of navigation, this study proposes the A*-FRN algorithm to control the robot to navigate along a fixed route, successfully preventing the robot from frequently taking long detours in a crowded farmers’ market. Compared with the general navigation method, the average driving trajectory length of the robot with A*-FRN is reduced by 23.3%.
Furthermore, in order to improve the robot’s intelligence, this paper innovatively equips the robot with a flexible robotic arm with the characteristics of light weight and low cost, and it specially designs the IRAGS algorithm to control it. In the robot shopping guide trials, the robotic arm successfully assisted VI people in selecting fresh products. Particularly, its efficiency was doubled compared with selecting products using fingers to search and touch. Additionally, in robot autonomous shopping trials, IRAGS successfully controlled the robotic arm to guide merchants to select and weigh products and automatically brought the remotely paid products back to VI people.

Author Contributions

Conceptualization, Y.C. and M.L.; methodology, Y.C. and M.L.; software, Y.C., J.R. and J.C.; validation, Y.C.; formal analysis, Y.C., J.R. and J.C.; investigation, Y.C., Z.W. and M.L.; resources, Y.C. and M.L.; data curation, Y.C. and W.G.; writing—original draft preparation, Y.C.; writing—review and editing, Y.C., W.G. and M.L.; Y.C., J.R. and J.C.; supervision, Y.C. and M.L.; project administration, Y.C. and M.L.; funding acquisition, Y.C. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Open Foundation of Belt and Road Joint Laboratory on Measurement and Control Technology, Huazhong University of Science and Technology (MCT202306).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the author Yunhua Chen.

Acknowledgments

We thank Xiaozhen Chen (A staff member of the Baishou Town government, Yongfu County, Guilin, China) for her help at all stages of the project and her support in donating some core experimental equipment.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zou, W.; Hua, G.; Zhuang, Y.; Tian, S. Real-time passable area segmentation with consumer RGB-D cameras for the Visually Impaired. IEEE Trans. Instrum. Meas. 2023, 72, 2513011. [Google Scholar] [CrossRef]
  2. Plikynas, D.; Žvironas, A.; Gudauskis, M.; Budrionis, A.; Daniušis, P.; Sliesoraitytė, I. Research advances of indoor navigation for blind people: A brief review of technological instrumentation. IEEE Instrum. Meas. Mag. 2020, 23, 22–32. [Google Scholar] [CrossRef]
  3. Wang, J.; Liu, E.; Geng, Y.; Qu, X.; Wang, R. A survey of 17 indoor travel assistance systems for blind and visually impaired people. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 134–148. [Google Scholar] [CrossRef]
  4. Berka, J.; Balata, J.; Mikovec, Z. Optimizing the number of bluetooth beacons with proximity approach at decision points for intermodal navigation of blind pedestrians. In Proceedings of the 2018 Federated Conference on Computer Science and Information Systems (FedCSIS), Poznan, Poland, 9–12 September 2018. [Google Scholar]
  5. Kunhoth, J.; Karkar, A.G.; Al-Maadeed, S.; Al-Attiyah, A. Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments. Int. J. Health Geogr. 2019, 18, 29. [Google Scholar] [CrossRef]
  6. AL-Madani, B.; Orujov, F.; Maskeliūnas, R.; Damaševičius, R.; Venčkauskas, A. Fuzzy logic type-2 based wireless indoor localization system for navigation of visually impaired people in buildings. Sensors 2019, 19, 2114. [Google Scholar] [CrossRef]
  7. Ahmetovic, D.; Gleason, C.; Ruan, C.; Kitani, K.; Asakawa, C. NavCog: A navigational cognitive assistant for the blind. In Proceedings of the 18th International Conference on Human Computer Interaction with Mobile Devices and Services, New York, NY, USA, 6 September 2016. [Google Scholar]
  8. Ivanov, R. Indoor navigation system for visually impaired. In Proceedings of the 11th International Conference on Computer Systems and Technologies and Workshop for PhD Students in Computing on International Conference on Computer Systems and Technologies, Sofia, Bulgaria, 17–18 June 2010. [Google Scholar]
  9. Gomes, J.P.; Sousa, J.P.; Cunha, C.R.; Morais, E.P. An indoor navigation architecture using variable data sources for blind and visually impaired persons. In Proceedings of the 2018 13th Iberian Conference on Information Systems and Technologies (CISTI), Caceres, Spain, 13–16 June 2018. [Google Scholar]
  10. Zhang, J.; Yang, X.; Wang, W.; Guan, J.; Ding, L.; Lee, V. Automated guided vehicles and autonomous mobile robots for recognition and tracking in civil engineering. Autom. Constr. 2023, 146, 104699. [Google Scholar] [CrossRef]
  11. Škrabánek, P.; Vodička, P. Magnetic strips as landmarks for mobile robot navigation. In Proceedings of the 2016 International Conference on Applied Electronics (AE), Pilsen, Czech Republic, 6–7 September 2016. [Google Scholar]
  12. Cruz, J.D.; Domingo, C.B.; Garcia, R.G. Automated service robot for catering businesses using arduino mega 2560. In Proceedings of the 2023 15th International Conference on Computer and Automation Engineering (ICCAE), Sydney, Australia, 3–5 March 2023. [Google Scholar]
  13. Pauly, L.; Baiju, M.V.; Viswanathan, P.; Jose, P.; Paul, D.; Sankar, D. CAMbot: Customer assistance mobile manipulator robot. In Proceedings of the 2015 IEEE Bombay Section Symposium (IBSS), Mumbai, India, 10–11 September 2015. [Google Scholar]
  14. Nechyporenko, N.; Morales, A.; Cervera, E.; Pobil, A.P.D. A practical approach for picking items in an online shopping warehouse. Appl. Sci. 2021, 11, 5805. [Google Scholar] [CrossRef]
  15. Liu, J. Application of RFID technology in SLAM. South-Cent. Univ. Natl. 2008, 27, 84–87. [Google Scholar]
  16. DiGiampaolo, E.; Martinelli, F.; Romanelli, F. Exploiting the Orientation of Trilateration UHF RFID Tags in Robot Localization and Mapping. In Proceedings of the 2022 IEEE 12th International Conference on RFID Technology and Applications (RFID-TA), Cagliari, Italy, 12–14 September 2022. [Google Scholar]
  17. Kroumov, V.; Okuyama, K. Localisation and Position Correction for Mobile Robot using Artificial Visual Landmarks. Int. J. Adv. Mechatron. Syst. 2012, 4, 212–217. [Google Scholar] [CrossRef]
  18. Dwijotomo, A.; Rahman, M.A.A.; Ariff, M.H.M.; Zamzuri, H.; Azree, W.M.H.W. Cartographer SLAM method for optimization with an adaptive multi-distance scan scheduler. Appl. Sci. 2020, 10, 347. [Google Scholar] [CrossRef]
  19. Gao, Q.; Jia, H.; Liu, Y.; Tian, X. Design of mobile robot based on cartographer slam algorithm. In Proceedings of the 2019 2nd International Conference on Informatic, Hangzhou, China, 6 May 2019. [Google Scholar]
  20. Zhang, F.; Zheng, S.; He, Y.; Shao, X. The research on attitude correction method of robot monocular vision positioning system. In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, 3–7 December 2016. [Google Scholar]
  21. Venkateswara, H.; Chakraborty, S.; Panchanathan, S. Deep learning system for domain adaptation in computer vision: Learning transferable feature representations. IEEE Signal Process. Mag. 2017, 34, 117–129. [Google Scholar] [CrossRef]
  22. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
Figure 1. Hardware composition of guide robot.
Figure 2. Three coordinate systems based on the principles of perspective geometry.
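To make the relationship illustrated in Figure 2 concrete, the sketch below projects a world point through the world, camera, and pixel coordinate systems under the pinhole perspective model. The intrinsic and extrinsic values are purely illustrative and are not the calibrated parameters of the camera used in this work.

```python
import numpy as np

# Illustrative intrinsics: focal lengths and principal point in pixels.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

# Illustrative extrinsics: rotation R and translation t (world frame -> camera frame).
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.5]])  # metres

def world_to_pixel(P_w):
    """Project a 3-D world point onto the image plane, returning (u, v) in pixels."""
    P_w = np.asarray(P_w, dtype=float).reshape(3, 1)
    P_c = R @ P_w + t               # world coordinates -> camera coordinates
    uv1 = K @ (P_c / P_c[2, 0])     # perspective division, then apply intrinsics
    return uv1[0, 0], uv1[1, 0]

print(world_to_pixel([0.2, -0.1, 1.0]))  # e.g. (424.5, 204.5)
```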
Figure 3. The process of building the product model.
Figure 4. The fixed route planned by A*-FRN from the gate to the major fruit area. The star represents the market gate. Each rectangle represents a stall, and rectangles of the same color belong to the same product area. The numbers are the serial numbers of the stalls in each area. The lines represent the routes planned by A*-FRN, and the dot marks the end point of the route.
Figure 6. D-H coordinate system of mycobot_280_M5 robot.
Figure 7. The process of IRAGS planning a route and controlling the robot arm in ROS. The serial numbers ➀–➇ mark the eight moments at which the robot moves along the planned route.
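The exact ROS interface used by IRAGS is not reproduced here; as a point of reference only, the following is a minimal moveit_commander sketch for sending a pose goal to a MoveIt planning group. The group name "arm_group" and the target pose are placeholders, not values from this work.

```python
#!/usr/bin/env python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

# Initialise the MoveIt commander and a ROS node.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("arm_pose_sketch")

# "arm_group" is a placeholder planning-group name.
group = moveit_commander.MoveGroupCommander("arm_group")

# Illustrative end-effector pose goal (position in metres, identity orientation).
target = Pose()
target.position.x = 0.15
target.position.y = 0.00
target.position.z = 0.20
target.orientation.w = 1.0

group.set_pose_target(target)
success = group.go(wait=True)   # plan and execute in one call
group.stop()                    # ensure there is no residual motion
group.clear_pose_targets()

rospy.loginfo("Arm motion %s", "succeeded" if success else "failed")
moveit_commander.roscpp_shutdown()
```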
Figure 8. The schematic diagram of the triangulation method.
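As a reminder of the geometry sketched in Figure 8, the classical similar-triangles relation recovers depth as z = f·b/d. The focal length, baseline, and disparity below are illustrative values, not the parameters of the sensors used in this study.

```python
def triangulate_depth(f_px, baseline_m, disparity_px):
    """Depth from the similar-triangles relation z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite depth.")
    return f_px * baseline_m / disparity_px

# Example: f = 580 px, baseline = 7.5 cm, disparity = 29 px  ->  z = 1.5 m
print(triangulate_depth(580.0, 0.075, 29.0))
```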
Figure 9. The process of controlling the robot to shop autonomously with IRAGS.
Figure 10. Trial site and navigation map: (a) Qixiang Farmers’ Market, address: No. 3399, Qilianshan Road, Baoshan District, Shanghai, China; (b) navigation map. The colored polygons correspond to stalls, the pink areas correspond to aisles, and polygons of the same color belong to the same product area.
Figure 11. Training and validation accuracy of the product identification model. The horizontal axis represents the training epoch, and the vertical axis represents the recognition accuracy.
Figure 12. Products detected by RFTPAD.
Figure 13. The time taken to record product information with RFTPAD and the manual counting method, respectively.
Figure 14. The fixed routes planned by A*-FRN. The red five-pointed star in the upper right corner represents the starting point of the routes, the colored lines represent the planned fixed routes, and the numbers are the serial numbers of the stalls in each product area.
Figure 15. The detour behavior of the robot: (a) the robot’s trajectory under A*-DWA control; (b) the robot’s trajectory under A*-FRN control. The black dot represents the robot, the red arrow represents the destination, the blue circles represent the pedestrians detected by the robot, and the colored lines represent the robot’s trajectory.
Figure 16. The robot’s driving trajectory length under the control of each algorithm.
Figure 17. The process of the camera collecting calibration plate data at different positions: (a) camera is on the right of the calibration plate; (b) camera is on the lower right of the calibration plate; (c) camera is directly above the calibration plate.
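Captures such as those in Figure 17 are the typical input to Zhang’s calibration method. A minimal OpenCV sketch is given below, assuming a 9 × 6 chessboard with 25 mm squares and images stored under calib/; both assumptions are illustrative and do not describe the actual board or file layout used in this work.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)     # inner corners of the chessboard (assumed)
square = 0.025       # square size in metres (assumed)

# 3-D corner coordinates of the flat board in its own frame (z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):                # images like those in Figure 17
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the intrinsic matrix and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("Re-projection RMS error:", rms)
print("Intrinsic matrix:\n", K)
```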
Figure 18. Less fresh fruits detected by IRAGS: (a) detected oranges; (b) detected bananas; (c) detected Momordica charantia (bitter gourd); (d) detected squash. The red boxes mark the less fresh fruits identified by the IRAGS algorithm, and the numbers above them represent the recognition accuracy.
Figure 19. The process of the robotic arm pulling a VI person’s fingers to grab an apple: (a) the robotic arm located and touched the person’s fingers; (b) the robotic arm pulled the fingers to grab the product; (c) the robotic arm guided the fingers to place and weigh the product.
Figure 20. Time statistics of the VI participants selecting products with the two methods.
Figure 21. The congestion status in aisles detected by IRAGS. The yellow boxes represent the pedestrians identified by the IRAGS algorithm, and the numbers above them represent the recognition accuracy. Num at the top shows the number of identified pedestrians: (a) IRAGS identifies pedestrians in a sparsely occupied aisle; (b) IRAGS identifies pedestrians in a crowded aisle.
Figure 22. Merchants detected by IRAGS: (a) a merchant identified without crowd interference; (b) merchants identified within a crowd.
Figure 23. IRAGS guided the merchant to select and weigh the product: (a) the robotic arm pointed out the fresher target product; (b) the merchant grabbed the product according to the instruction of the robotic arm.
Table 1. The model number, manufacturer, city, and country of the equipment on the guide robot.

| Equipment Name | Model Number | Manufacturer | City and Country |
|---|---|---|---|
| Mobile chassis | Kobuki | Yujin Robot Co., Ltd. | Seoul, Republic of Korea |
| Depth camera | Kinect V1 | Microsoft Corporation | Redmond, WA, USA |
| Computer | MN50 | EVOC Intelligent Co., Ltd. | Shenzhen, China |
| RFID reader | NRD909W | YUA Co., Ltd. | Guangzhou, China |
| Robotic arm | mycobot_280_M5 | Elephant Robotics Co., Ltd. | Shenzhen, China |
| Visual camera | cameraHolder_J6 | Elephant Robotics Co., Ltd. | Shenzhen, China |
| Electric push rod | MNTL | LONGXIANG Co., Ltd. | Changzhou, China |
Table 2. Network structure configuration of the product identification model. t represents the expansion ratio of the 1 × 1 convolution in the inverted residual structure, c the number of channels of the output feature map, n the number of times the bottleneck is repeated, and s the stride of the DW convolution in the first bottleneck of each stage.

| Input | Operator | t | c | n | s |
|---|---|---|---|---|---|
| 224² × 3 | conv2d | - | 32 | 1 | 2 |
| 112² × 32 | bottleneck | 1 | 16 | 1 | 1 |
| 112² × 16 | bottleneck | 6 | 24 | 2 | 2 |
| 56² × 24 | bottleneck | 6 | 32 | 3 | 2 |
| 28² × 32 | bottleneck | 6 | 64 | 4 | 2 |
| 14² × 64 | bottleneck | 6 | 96 | 3 | 1 |
| 14² × 96 | bottleneck | 6 | 160 | 3 | 2 |
| 7² × 160 | bottleneck | 6 | 320 | 1 | 1 |
| 7² × 320 | conv2d 1 × 1 | - | 1280 | 1 | 1 |
| 7² × 1280 | avgpool 7 × 7 | - | - | 1 | - |
| 1 × 1 × 1280 | conv2d 1 × 1 | - | k | - | - |
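The (t, c, n, s) rows in Table 2 follow the standard MobileNetV2 layout. As a reference, the sketch below instantiates a network with this configuration via torchvision; the number of product classes k is a placeholder, not the class count used in this study.

```python
import torch
from torchvision.models import MobileNetV2

# (t, c, n, s) rows of Table 2 for the inverted-residual (bottleneck) stages.
inverted_residual_setting = [
    # t, c,   n, s
    [1, 16,  1, 1],
    [6, 24,  2, 2],
    [6, 32,  3, 2],
    [6, 64,  4, 2],
    [6, 96,  3, 1],
    [6, 160, 3, 2],
    [6, 320, 1, 1],
]

k = 10  # placeholder: number of product categories
model = MobileNetV2(num_classes=k,
                    inverted_residual_setting=inverted_residual_setting)

# A 224 x 224 RGB image (first row of Table 2) maps to k class scores.
scores = model(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 10]) for k = 10
```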
Table 3. The D–H parameters of mycobot_280_M5.

| Joint i | Joint Angle θi (°) | Link Offset di (mm) | Link Length ai (mm) | Link Twist αi (°) |
|---|---|---|---|---|
| 1 | θ1 | 131.56 | 0 | 90 |
| 2 | θ2 | 0 | −110.4 | 0 |
| 3 | θ3 | 0 | −96 | 0 |
| 4 | θ4 | 64.62 | 0 | 90 |
| 5 | θ5 | 73.18 | 0 | −90 |
| 6 | θ6 | 48.6 | 0 | 0 |
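For reference, the sketch below assembles forward kinematics from the Table 3 parameters, assuming the standard D–H convention Tᵢ = Rot_z(θᵢ)·Trans_z(dᵢ)·Trans_x(aᵢ)·Rot_x(αᵢ); the joint angles passed in at the end are illustrative values only.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform of one joint (angles in radians)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct, -st * ca,  st * sa, a * ct],
        [ st,  ct * ca, -ct * sa, a * st],
        [0.0,       sa,       ca,      d],
        [0.0,      0.0,      0.0,    1.0],
    ])

# (d in mm, a in mm, alpha in deg) taken from Table 3; theta_i are the joint variables.
DH = [
    (131.56,    0.0,  90.0),
    (  0.00, -110.4,   0.0),
    (  0.00,  -96.0,   0.0),
    ( 64.62,    0.0,  90.0),
    ( 73.18,    0.0, -90.0),
    ( 48.60,    0.0,   0.0),
]

def forward_kinematics(joint_angles_deg):
    """End-effector pose (4x4 homogeneous matrix) in the base frame."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(np.radians(joint_angles_deg), DH):
        T = T @ dh_transform(theta, d, a, np.radians(alpha))
    return T

print(forward_kinematics([0, -30, 60, 0, 90, 0]).round(2))
```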
Table 4. Comparison of navigation errors with RFTPAD and the Cartographer algorithm.

| Group | Sm (m) | SnC (m) | SnR (m) | σ1C (m) | σ1R (m) |
|---|---|---|---|---|---|
| 1 | 15.121 | 15.152 | 15.097 | 0.031 | 0.024 |
| 2 | 22.353 | 22.396 | 22.389 | 0.043 | 0.036 |
| 3 | 29.468 | 29.419 | 29.429 | 0.049 | 0.039 |
| 4 | 34.644 | 34.583 | 34.597 | 0.061 | 0.047 |
| 5 | 38.517 | 38.595 | 38.456 | 0.078 | 0.061 |
| 6 | 41.376 | 41.460 | 41.439 | 0.084 | 0.063 |
| 7 | 46.243 | 46.154 | 46.178 | 0.089 | 0.065 |
| 8 | 55.689 | 55.587 | 55.723 | 0.102 | 0.074 |
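As a quick consistency check on Table 4, and assuming the C and R subscripts denote the Cartographer- and RFTPAD-based runs, averaging the two error columns reproduces (up to rounding and averaging convention) the roughly 23.9% accuracy improvement reported in the Abstract:

```python
# Error columns of Table 4 (metres).
sigma_cartographer = [0.031, 0.043, 0.049, 0.061, 0.078, 0.084, 0.089, 0.102]
sigma_rftpad       = [0.024, 0.036, 0.039, 0.047, 0.061, 0.063, 0.065, 0.074]

mean_c = sum(sigma_cartographer) / len(sigma_cartographer)
mean_r = sum(sigma_rftpad) / len(sigma_rftpad)

improvement = (mean_c - mean_r) / mean_c * 100.0
print(f"Mean error: Cartographer {mean_c:.4f} m, RFTPAD {mean_r:.4f} m")
print(f"Relative improvement: {improvement:.1f}%")  # prints ~23.8%, in line with the ~23.9% in the Abstract
```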