Abstract
Shopping in narrow, crowded farmers’ markets is a great challenge for visually impaired (VI) people, yet no existing research addresses guiding them in farmers’ markets. This paper proposes the Radio-Frequency–Visual Tag Positioning and Automatic Detection (RFTPAD) algorithm to quickly build a high-precision navigation map. It combines the advantages of visual beacons and radio-frequency signal beacons to accurately calculate the guide robot’s coordinates, correct its positioning error, and simultaneously perform the tasks of mapping and detecting product information. Furthermore, this paper proposes the A*-Fixed-Route Navigation (A*-FRN) algorithm, which controls the robot to navigate along fixed routes and prevents it from making frequent detours in crowded aisles. Finally, this study equips the guide robot with a flexible robotic arm and proposes the Intelligent-Robotic-Arm-Guided Shopping (IRAGS) algorithm to guide VI people to quickly select fresh products or guide merchants to pack and weigh products. Multiple experiments conducted in a 1600 m² market demonstrate that, compared with the classic mapping method, the accuracy of RFTPAD is improved by 23.9%. Moreover, compared with the general navigation method, the driving trajectory of A*-FRN is 23.3% shorter. Finally, guiding VI people to select products with the robotic arm is 100% more efficient than searching and touching with the fingers.
1. Introduction
According to WHO statistics, there are currently about 253 million people with visual impairment worldwide, including 35.2 million who are blind [1]. Markets are often places where the visually impaired (VI) shop for essentials, yet most urban farmers’ markets, especially Chinese wet markets, are characterized by crowds of pedestrians, large spaces, and narrow aisles. In such a complex environment with no tactile paving, walking and shopping are considerable challenges for VI people. Currently, most existing indoor guidance studies focus on homes, shopping malls, and subway stations [2,3], and there is no research related to farmers’ markets.
Researchers generally employ Simultaneous Localization and Mapping (SLAM) to control indoor guide robots to build navigation maps. In large-scale spaces, SLAM must be combined with fixed-tag technologies, such as Bluetooth beacons, Wi-Fi transmitters, and Radio-Frequency Identification (RFID) tags, to supplement environmental information and improve mapping accuracy [4,5,6]. Ahmetovic et al. [7] developed a shopping mall guide system based on Bluetooth beacon positioning. They employed a multivariate regression method to analyze the impact of shopping mall environments and positioning errors on the visually impaired walking along a planned route and proposed a series of techniques to improve the accuracy of the probabilistic positioning algorithm. Ivanov et al. [8] pasted RFID tags on the doors of hospital wards so that a guide system equipped with a tag reader could quickly read the room location information stored in them and correct its own posture. Gomes et al. [9] installed a series of visual tags, Bluetooth beacons, and Near-Field Communication (NFC) tags at different locations in a supermarket and designed a large-supermarket guide system with a Wi-Fi positioning method to assist VI individuals in independently searching for products. However, these common tags are not suitable for farmers’ markets: the positioning accuracy of RFID and NFC tags is low; deploying and wiring Bluetooth and Wi-Fi beacons is cumbersome; and visual tags are easily damaged by water stains and blocked by pedestrians.
The path planning algorithms in existing guide methods are essentially direct copies of autonomous driving methods. However, unlike sighted shoppers, who browse freely for products, VI people prefer to shop along fixed routes to a few familiar stalls in farmers’ markets. Common fixed-route navigation methods include electromagnetic tracking, magnetic tape tracking, infrared tracking, visual tracking, etc. [10]. Škrabánek et al. [11] designed a magnetic-signal-based positioning system that applies low-cost magnetic tapes as cruising routes, detects the strength of the magnetic signal through a three-axis magnetometer, and controls the robot’s forward movement and turning. Cruz et al. [12] designed a restaurant service robot based on infrared tracking, using a black line as the robot’s cruising route in the restaurant and controlling the robot’s forward movement and turning through a line-tracking algorithm. Nevertheless, these methods are not suitable for farmers’ markets with complex environments: the cost of laying out and maintaining electromagnetic tracks is high, infrared tracking lines are easily contaminated by pedestrians’ soles, and none of these tracking methods can avoid pedestrians.
An autonomous shopping robot is a highly intelligent guide device consisting of a navigation car and a robotic arm. It can independently search for and pick up target products for VI people and bring the products back to them. Pauly et al. [13] designed a mobile robot for supermarkets and shopping malls. After a customer places an order for a desired product through a shopping interface, the robot immediately navigates to the target shelf, identifies the target product by radio-frequency tags, and finally controls a robotic arm to pick up the product and put it into the shopping basket. Nechyporenko et al. [14] designed a product-picking robotic arm for online shopping. They integrated the centroid normal method into a dual-arm robot system with two grippers, providing a practical solution to the robotic picking problem in unstructured environments. Experimental tests showed that this method can control the robotic arm to accurately pick up products of various shapes. Although there are some research results on autonomous shopping robots, most of them target supermarkets rather than farmers’ markets. Additionally, these robots usually need a high-precision, high-load, and high-cost robotic arm, which is difficult for VI people on low incomes to afford.
According to the above analysis, it can be concluded that the mapping and navigation methods for guide robots in farmers’ markets are different from general guide methods. Fully considering the characteristics of the geometric layout and product features of farmers’ markets, this paper proposes a Radio-Frequency–Visual Tag Positioning and Automatic Detection (RFTPAD) algorithm to quickly build a high-precision navigation map. It improves mapping accuracy by combining RFID tag positioning and visual positioning methods to automatically calibrate the robot’s posture and improves efficiency by employing a classification model to automatically detect and record a large amount of product information. Furthermore, this paper proposes an A*-Fixed-Route Navigation (A*-FRN) algorithm to prevent guide robots from frequently taking long detours in crowded aisles. It plans a unique fixed navigation route for each large stall and controls the guide robot to walk along this route every time it navigates. Finally, this paper equips a guide robot with a robotic arm and proposes an Intelligent-Robotic-Arm-Guided Shopping (IRAGS) algorithm that combines deep learning, kinematic solving, and rapidly exploring random tree methods to intelligently control the robotic arm to guide VI people to quickly select fresh products.
2. Experimental Equipment
Figure 1 shows the hardware of the market guide robot designed in this paper, which consists of two parts: a navigation car and a robotic arm guidance system. Its parameters are as follows: length × width × height is 35.4 cm × 35.4 cm × 160 cm, the weight is 10.5 kg, and the speed is 0.65 m/s.
Figure 1.
Hardware composition of guide robot.
The navigation car is mainly composed of the following components: a mobile chassis, a depth camera, an industrial computer, an RFID reader, and a wooden handle. The mobile chassis can move flexibly in narrow shopping aisles and measures the robot’s posture with encoders and gyroscopes; the depth camera scans the depth information of the geometric environment for functions such as map building and obstacle avoidance; the RFID reader reads the information stored in the RFID tags; and the wooden handle guides VI people to move forward and turn.
The guidance system is mainly composed of the following components: a robotic arm, a visual camera, an electric push rod, a varistor, and an electronic scale. The robotic arm guides VI people or merchants to select products; the retractable electric push rod points out products; the visual camera locates and identifies products, pedestrians, merchants, and VI people; and the varistor detects whether a VI person’s finger is holding the robotic arm.
The model number, manufacturer, city, and country of the above equipment are shown in Table 1.
Table 1.
The model number, manufacturer, city, and country of the equipment on the guide robot.
The cost of the guide robot is about 21,600 RMB: the robotic arm is 6000 RMB, the mobile chassis is 7000 RMB, the depth camera is 1000 RMB, the industrial computer is 5000 RMB, the RFID reader and tags are 1200 RMB, the electric push rod is 1000 RMB, and the other parts are 400 RMB.
4. Experiment and Discussion
Figure 10a shows the farmers’ market selected for the experiment, with its detailed address given in the caption. The area of the market is about 1600 m², and the aisle width is 2 m. Figure 10b is the navigation map constructed by the RFTPAD algorithm, where the colored polygons correspond to stalls and the pink areas correspond to shopping aisles. Notably, two stalls facing each other across the same aisle are 2 m apart, and the signal range of the RFID tag selected in this study is 8 m. Therefore, installing an RFID tag on only one of the two facing stalls meets the requirements of the RFTPAD algorithm. A total of 56 tags were installed in this farmers’ market.
Figure 10.
Trial site and navigation map: (a) Qixiang Farmers’ Market, address: No. 3399, Qilianshan Road, Baoshan District, Shanghai, China; (b) navigation map. The colored polygons correspond to stalls, and the pink areas correspond to aisles. Polygons of the same color belong to the same area.
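The sufficiency of tagging only one of two facing stalls follows from the 8 m read range and the 2 m aisle width. The snippet below is a minimal, illustrative coverage check, assuming hypothetical stall coordinates and a greedy placement rule; it is not the tag-placement procedure actually used in this work.

```python
import math

READ_RANGE_M = 8.0   # RFID tag signal range reported in the text
AISLE_WIDTH_M = 2.0  # facing stalls are 2 m apart

def greedy_tag_placement(stalls, read_range=READ_RANGE_M):
    """Greedily choose stalls to host RFID tags so that every stall
    is within read_range of at least one tag. Stall positions are
    hypothetical (x, y) coordinates in metres."""
    uncovered = set(range(len(stalls)))
    tags = []
    while uncovered:
        # Pick the stall whose tag would cover the most uncovered stalls.
        best, best_cover = None, set()
        for i in uncovered:
            cover = {j for j in uncovered
                     if math.dist(stalls[i], stalls[j]) <= read_range}
            if len(cover) > len(best_cover):
                best, best_cover = i, cover
        tags.append(best)
        uncovered -= best_cover
    return tags

# Two facing stalls separated by the 2 m aisle: a single tag covers both.
facing_pair = [(0.0, 0.0), (0.0, AISLE_WIDTH_M)]
print(greedy_tag_placement(facing_pair))  # -> one tag index
```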
4.1. Comparative Analysis of Mapping Accuracy and Efficiency with RFTPAD
4.1.1. Accuracy Analysis Between RFTPAD and Cartographer Algorithm
In this trial, eight coordinate points Pm (xm, ym) were randomly selected and input one by one as the robot’s navigation targets. The maps constructed by RFTPAD and the classical Cartographer algorithm [18,19] were each used for navigation, and the navigation data were recorded to compare their accuracy. The absolute navigation error σ1 is defined as the absolute value of the difference between the robot’s theoretical navigation distance Sm and its actual driving distance Sn, that is, σ1 = |Sm − Sn|, and the average error σ2 is the arithmetic mean of the eight absolute errors.
The navigation error data obtained from the experiments are shown in Table 4. The third and fourth columns give the actual driving distance of the robot with the Cartographer algorithm and with RFTPAD, respectively, while the fifth and sixth columns give the corresponding absolute navigation errors. Comparing the fifth and sixth columns shows that, as the navigation distance increases, the error of the map constructed with RFTPAD remains consistently smaller than that of the map constructed with the Cartographer algorithm. Furthermore, from the data in the table, the average error values of the robot with RFTPAD and with the Cartographer algorithm are σ2R = 0.051 m and σ2C = 0.067 m, respectively. Since (σ2C − σ2R)/σ2C × 100% = 23.9%, RFTPAD reduces the map error by 23.9% compared to the Cartographer algorithm.
Table 4.
Comparison of navigation errors with RFTPAD and the Cartographer algorithm.
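For reference, the error metrics used here reduce to a few lines of arithmetic; the snippet below restates the σ1/σ2 definitions and the 23.9% improvement computed from the reported averages (the per-trial distances themselves are listed in Table 4 and are not reproduced).

```python
def absolute_error(s_theoretical, s_actual):
    """sigma_1 = |S_m - S_n| for one navigation trial."""
    return abs(s_theoretical - s_actual)

def average_error(errors):
    """sigma_2: arithmetic mean of the eight absolute errors."""
    return sum(errors) / len(errors)

# Relative improvement of RFTPAD over the Cartographer map,
# using the reported average errors sigma_2R and sigma_2C.
sigma2_rftpad, sigma2_cartographer = 0.051, 0.067  # metres
improvement = (sigma2_cartographer - sigma2_rftpad) / sigma2_cartographer * 100
print(f"improvement = {improvement:.1f}%")  # -> 23.9%
```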
4.1.2. Synchronous Detection of Product Information
The experiment first input the image dataset into the product identification model for training. The dataset contains 394 categories of common products in farmers’ markets, and each category contains 5000 pictures. Figure 11 shows the accuracy change during model training. As shown on the right side of the figure, the recognition accuracy of the new model stabilized at 95% once training converged.
Figure 11.
Training and validation accuracy of the product identification model. The horizontal axis represents the training epoch, and the vertical axis represents the recognition accuracy.
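As a rough, hedged illustration of how such a 394-class product classifier could be trained, the sketch below fine-tunes a MobileNetV2 backbone (cited in the references) on an image-folder dataset; the dataset path, preprocessing, and hyperparameters are assumptions rather than the exact training setup used in this paper.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 394  # product categories reported in the text

# Standard ImageNet-style preprocessing; the exact augmentation is an assumption.
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical dataset layout: one sub-folder per product category.
train_set = datasets.ImageFolder("market_products/train", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# Transfer learning: reuse MobileNetV2 features, replace the classifier head.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")  # torchvision >= 0.13
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):          # epoch count is illustrative
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```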
Figure 12 shows some products detected by RFTPAD while the robot was building the map. The percentages represent the identification accuracy, and the numbers in brackets represent the product coordinates (in m). Comparing the actual product categories in the figure with the results identified by RFTPAD shows that the identified results are completely correct and that the accuracy exceeds 95%.
Figure 12.
Products detected by RFTPAD.
Subsequently, the experiment set up the robot to record product information (category and coordinates) with RFTPAD’s automatic detection method and with a manual counting method, respectively. A total of six trials were conducted, with the trial site area increased gradually. The manual counting method proceeds as follows: the mapping personnel manually drive the guide robot to each type of product, manually read its coordinates, and record them as the product coordinates.
Figure 13 is a statistical chart of the time taken to detect and record all the product information in the farmers’ market with the two methods. The horizontal axis represents the trial site area; the red and blue columns represent RFTPAD and the manual counting method, respectively; and the cyan columns and red line represent their difference. Comparing the red and blue columns shows that, regardless of the trial site area, the time spent with the RFTPAD algorithm is less than that with the manual counting method. From the data in the figure, the average time of RFTPAD is 36.3% less than that of the manual counting method. The red trend line shows that the time difference between the two methods gradually increases with the area of the trial site, which indicates that the RFTPAD algorithm has an obvious advantage in large farmers’ markets with a wide variety of products.
Figure 13.
The time taken to record product information with RFTPAD and the manual counting method, respectively.
4.2. Fixed-Route Navigation Trial with A*-FRN Algorithm
The colored lines in Figure 14 are the fixed routes planned by A*-FRN on the map. The green lines represent the fixed routes from the gate to the first large stall in each major area, totaling six routes; the other colored lines represent the fixed routes from the first large stall in each major area to the other large stalls in that area, totaling 22 routes; together, there are 28 routes.
Figure 14.
The fixed routes planned by A*-FRN. The numbers represent the serial numbers of the stalls in each product area. The red five-pointed star in the upper right corner represents the starting point of the routes, and the colored lines represent the planned fixed routes.
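The fixed routes are planned once on the static navigation map; the sketch below shows a minimal grid-based A* planner of the kind that could generate such routes. The occupancy grid, unit step cost, and Manhattan heuristic are generic assumptions, not the exact implementation of A*-FRN.

```python
import heapq

def astar(grid, start, goal):
    """Minimal 4-connected A* on an occupancy grid.
    grid[r][c] == 0 means free, 1 means occupied (stall/obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None  # no route found

# Toy map: plan from the gate cell (0, 0) to a stall cell (2, 3).
demo_grid = [[0, 0, 0, 0],
             [1, 1, 0, 1],
             [0, 0, 0, 0]]
print(astar(demo_grid, (0, 0), (2, 3)))
```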
4.2.1. Analysis of Robot’s Detour Behavior When Aisle Is Crowded
The trial set the guide robot to navigate in crowded aisles with the A*-FRN algorithm and with the A*-DWA method [23,28,30] (the combination of the A* and DWA algorithms, a classic navigation approach), respectively. Figure 15 shows the robot’s driving trajectory with each method. As seen in Figure 15a, the robot’s initial trajectory (yellow line) was about to reach the destination, but the robot then turned away from the destination and took a long detour (red line). This detour behavior shows that the robot did not follow the optimal route with the A*-DWA method. In contrast, the trajectory of the robot with A*-FRN in Figure 15b heads toward the destination directly, and the overall trajectory is much shorter. Measuring the trajectory lines shows that the driving trajectory of the robot with A*-FRN is about 8 m shorter than that with A*-DWA.
Figure 15.
The detour behavior of the robot: (a) the robot’s trajectory under A*-DWA control; (b) the robot’s trajectory under A*-FRN control. The black dot represents the robot, the red arrow represents the destination, the blue circles represent the pedestrians detected by the robot, and the colored lines represent the robot’s trajectory.
4.2.2. Analysis of Robot’s Driving Trajectory Length When Aisle Is Crowded
The trial then set the robot to navigate back and forth between a starting point and a destination, with the A*-FRN algorithm and the A*-DWA method each controlling the robot in turn, and recorded the driving trajectory data. Each algorithm was tested 20 times, and the navigation distance was increased gradually. Specifically, the robot ran the A*-DWA method when navigating from the starting point to the destination and ran A*-FRN when returning from the destination to the starting point.
The statistical results of the robot’s driving trajectory length are shown in Figure 16. The blue and red curves represent the robot’s driving trajectory length with the A*-DWA method and the A*-FRN algorithm, respectively. The blue curve is always above the red curve, which means that for the same navigation target, the robot’s overall driving length is shorter when running A*-FRN than when running A*-DWA. Moreover, in the 2nd, 5th, 9th, 12th, and 16th trials, the ordinate values of the blue curve are significantly higher than those of the red curve. Based on the above analysis, the reason for this obvious gap is as follows: when the guide robot runs A*-FRN, it can intelligently judge the crowding conditions and keep navigating along the fixed global route instead of abandoning the originally planned shortest path. From the data in the figure, the average driving trajectory length of the robot with the A*-FRN algorithm is 23.3% less than that with the A*-DWA method.
Figure 16.
Robot’s driving trajectory length under the control of the two algorithms.
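A minimal sketch of the behavior described above follows: instead of asking the global planner for a detour when the aisle ahead is blocked, a fixed-route follower slows down or waits and keeps its pre-planned route. The function and field names are placeholders, not the published A*-FRN code.

```python
def next_command(fixed_route, waypoint_idx, blocked_ahead):
    """One planning cycle of a fixed-route follower (illustrative only).

    fixed_route    -- list of waypoints pre-planned for the target stall
    waypoint_idx   -- index of the waypoint the robot is currently at
    blocked_ahead  -- True if the perception module reports pedestrians
                      blocking the segment toward the next waypoint
    """
    if blocked_ahead:
        # Wait (or creep forward slowly) rather than replanning a detour,
        # which is the behaviour that shortens the trajectories in Figure 16.
        return {"action": "wait", "target": fixed_route[waypoint_idx]}
    if waypoint_idx + 1 < len(fixed_route):
        return {"action": "drive", "target": fixed_route[waypoint_idx + 1]}
    return {"action": "stop", "target": fixed_route[-1]}
```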
4.3. Intelligent Guidance and Autonomous Shopping Based on the Robotic Arm
4.3.1. Intelligent Selection of Fresh Products with the Assistance of a Robotic Arm
Calibration is a prerequisite for precise control of the robotic arm, so the experiment first calibrated the robotic arm and camera. The trial used the calibration program of the mycobot_280_M5 to set the joint zero positions and initialize the potentiometer values of the motors. The trial then employed Zhang’s calibration method [38,39] to calibrate the camera mounted on the robotic arm. The camera intrinsic parameters and distortion parameters were calculated as follows:
Furthermore, the experiment performed hand–eye calibration on the robotic arm and camera and solved the transformation matrix between the camera coordinate system and the base coordinate system. The trial first used the camera mounted on the robotic arm to take multiple images of the same calibration plate in different postures. It then calculated the geometric transformations of the robotic arm and camera between the image frames to obtain the transformation matrix. Figure 17 shows the process of the camera collecting calibration plate data at different positions.
Figure 17.
The process of the camera collecting calibration plate data at different positions: (a) camera is on the right of the calibration plate; (b) camera is on the lower right of the calibration plate; (c) camera is directly above the calibration plate.
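As a hedged sketch of this calibration pipeline, the code below combines Zhang’s chessboard calibration (cv2.calibrateCamera) with OpenCV’s hand–eye solver (cv2.calibrateHandEye); the board geometry, image files, and robot poses are placeholders rather than the data actually collected in Figure 17.

```python
import cv2
import numpy as np

# --- Intrinsic calibration (Zhang's method) --------------------------------
BOARD = (9, 6)      # inner-corner pattern of the calibration plate (assumed)
SQUARE_M = 0.025    # square size in metres (assumed)

objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_M

obj_points, img_points = [], []
images = ["calib_00.png", "calib_01.png", "calib_02.png"]  # hypothetical files
for fname in images:
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, BOARD)
    if ok:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# --- Hand-eye calibration ---------------------------------------------------
# R_g2b/t_g2b: gripper poses in the arm base frame, read from the arm controller
# for each image; rvecs/tvecs give the board pose in the camera frame.
R_g2b = [np.eye(3) for _ in rvecs]       # placeholder robot poses
t_g2b = [np.zeros((3, 1)) for _ in rvecs]
R_t2c = [cv2.Rodrigues(r)[0] for r in rvecs]
t_t2c = list(tvecs)

R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_g2b, t_g2b, R_t2c, t_t2c, method=cv2.CALIB_HAND_EYE_TSAI)
print(K, dist, R_cam2gripper, t_cam2gripper)
```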
The calculated hand–eye transformation matrix is as follows:
Subsequently, the experiment tested the IRAGS algorithm’s ability to accurately detect non-fresh products. Figure 18 shows several target products identified by the IRAGS algorithm after the guide robot navigated to its destination. As can be seen from the figure, IRAGS correctly selected the target products from various fruits with large yellow boxes and marked the non-fresh target products with small red boxes. Careful observation of the products in the small boxes shows that they have obvious black spots, abnormal color, and even damaged skin, while the unmarked target products generally have a smooth appearance, bright color, and standard outline. In particular, the data in the figure show that IRAGS can identify non-fresh products with an accuracy of over 96%.

Figure 18.
Non-fresh fruits detected by IRAGS: (a) detected oranges; (b) detected bananas; (c) detected Momordica charantia; (d) detected squash. The red boxes represent the non-fresh fruits identified by the IRAGS algorithm, and the numbers above them represent the recognition accuracy.
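A hedged sketch of how such fresh/non-fresh detections could be post-processed is shown below, assuming a torchvision detector fine-tuned with separate classes for non-fresh items; the class list, score threshold, and model choice are illustrative assumptions, not the detector actually used by IRAGS.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Hypothetical class list for a detector fine-tuned on market products,
# with a separate "non_fresh_*" class for spoiled items.
CLASSES = ["background", "banana", "non_fresh_banana", "orange", "non_fresh_orange"]
SCORE_THRESHOLD = 0.8

model = fasterrcnn_resnet50_fpn(num_classes=len(CLASSES))  # weights would be fine-tuned
model.eval()

def split_detections(image_tensor):
    """Return (target_boxes, non_fresh_boxes) for one RGB image tensor in [0, 1]."""
    with torch.no_grad():
        out = model([image_tensor])[0]
    targets, non_fresh = [], []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score < SCORE_THRESHOLD:
            continue
        name = CLASSES[int(label)]
        (non_fresh if name.startswith("non_fresh_") else targets).append(
            (name, float(score), box.tolist()))
    return targets, non_fresh

# Example call on a dummy frame; a real camera image would be used in practice.
targets, non_fresh = split_detections(torch.rand(3, 480, 640))
```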
Next, the trial arranged for the VI volunteers to select products with the general method (searching and touching with the fingers) and with the assistance of the robotic arm, respectively, and recorded the time taken to select the fresher products. Although the selected volunteers are completely blind, they have rich experience in selecting products. A total of eight trials were conducted, and 2 kg of a relatively fresh product was selected in each trial. The selected products were apples, bananas, small mangos, dates, potatoes, carrots, green peppers, and cherry tomatoes.
Figure 19 shows the process of the IRAGS-controlled robotic arm pulling a VI person’s fingers to pick an apple: Figure 19a shows that IRAGS accurately located the VI person’s fingers and controlled the retractable push rod to touch them; Figure 19b shows that the robotic arm successfully pulled the fingers to touch the target product; and Figure 19c shows that the robotic arm guided the fingers to place and weigh the selected product on the robot’s built-in electronic scale.
Figure 19.
The process of the robotic arm pulling a VI person’s fingers to grab an apple: (a) the robotic arm located and touched the fingers; (b) the robotic arm pulled the fingers to grab the product; (c) the robotic arm guided the fingers to place and weigh the product.
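For illustration only, the sketch below drives the arm through the three stages of Figure 19 as slow Cartesian waypoints, assuming the pymycobot driver for the myCobot 280 M5 mentioned above; the serial port, poses, and timing are placeholders, and the actual IRAGS pipeline (visual localization plus RRT motion planning) is not reproduced here.

```python
import time
from pymycobot.mycobot import MyCobot  # driver for the myCobot 280 used in this study

mc = MyCobot("/dev/ttyUSB0", 115200)   # serial port and baud rate are assumptions

# Placeholder Cartesian waypoints [x, y, z, rx, ry, rz] (mm / degrees) standing in
# for the three stages in Figure 19: reach the finger, reach the product, reach the scale.
FINGER_POSE  = [150.0, -60.0, 120.0, -90.0, 0.0, -90.0]
PRODUCT_POSE = [210.0,  40.0,  90.0, -90.0, 0.0, -90.0]
SCALE_POSE   = [120.0, 110.0, 100.0, -90.0, 0.0, -90.0]

def guide_finger_to_product():
    """Drive the arm through the guide sequence at low speed so the VI user's
    finger can follow; the pauses give the user time to touch and grip."""
    for pose in (FINGER_POSE, PRODUCT_POSE, SCALE_POSE):
        mc.send_coords(pose, 20, 1)  # speed 20, linear interpolation mode
        time.sleep(3.0)              # wait for the motion and for the user to react

guide_finger_to_product()
```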
Figure 20 shows the time statistics of the VI people selecting products with the general method and with the assistance of the robotic arm, respectively. The time spent by VI people selecting products with the assistance of the robotic arm is always much less than that with the general method. In particular, there is a significant difference between the two methods in the time taken to select smaller items (mangos, cherry tomatoes, and dates). From the data in the figure, the average time spent by VI people selecting products with the assistance of the robotic arm is 50% of the time spent with the general method.
Figure 20.
The time statistics of the VI people who selected products with the two methods.
4.3.2. Autonomous Shopping Based on Robotic Arm
The experiment first conducted multiple robot obstacle avoidance tests in aisles with different crowd densities to determine the threshold T. During the experiment, it was found that when the number of pedestrians within 3 m in front of the robot was greater than eight, the robot frequently turned to avoid pedestrians and even frequently changed the global navigation route. Therefore, the experiment set threshold T to eight.
Subsequently, the trial set the guide robot to automatically detect the number of pedestrians within 3 m ahead. The two sub-images in Figure 21 show the congestion status of the aisles detected by the IRAGS algorithm. The number of pedestrians detected in Figure 21a is five, which is less than the threshold T, so IRAGS determined that the aisle is uncrowded; by contrast, the number of pedestrians detected in Figure 21b is nine, so IRAGS determined that the aisle is crowded. Furthermore, as can be observed in Figure 21, IRAGS correctly detected every pedestrian in the aisles and counted their number, which is consistent with the actual situation and shows that IRAGS can accurately determine the congestion status of an aisle.
Figure 21.
The congestion status of aisles detected by IRAGS. The yellow boxes represent the pedestrians identified by the IRAGS algorithm, and the numbers above them represent the recognition accuracy. Num at the top represents the number of identified pedestrians: (a) IRAGS identifies pedestrians in an uncrowded aisle; (b) IRAGS identifies pedestrians in a crowded aisle.
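The congestion check itself amounts to counting detected pedestrians whose depth is within 3 m and comparing the count with T; a minimal sketch follows, in which the detector output format and the depth lookup are assumptions.

```python
import numpy as np

RANGE_M = 3.0   # look-ahead distance used in the experiments
T = 8           # crowd threshold determined above

def aisle_is_crowded(person_boxes, depth_image):
    """person_boxes: [(x1, y1, x2, y2), ...] pedestrian boxes from the detector.
    depth_image: per-pixel depth in metres from the depth camera."""
    count = 0
    for x1, y1, x2, y2 in person_boxes:
        # Use the depth at the box centre as the pedestrian's distance.
        cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
        if depth_image[cy][cx] <= RANGE_M:
            count += 1
    return count > T, count

# Example: nine nearby pedestrians -> crowded, matching Figure 21b.
depth = np.full((480, 640), 2.5)                       # dummy depth frame, everyone at 2.5 m
boxes = [(10 * i, 50, 10 * i + 40, 200) for i in range(9)]
print(aisle_is_crowded(boxes, depth))                  # -> (True, 9)
```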
When IRAGS determined that the aisle was crowded, it recommended that the VI person instruct the robot to shop autonomously. Figure 22 shows the target merchants identified by IRAGS, with an identification accuracy of 100%. Furthermore, Figure 22b shows that IRAGS correctly separated merchants from pedestrians, indicating that IRAGS can accurately distinguish merchants in a crowd without being disturbed by pedestrians.
Figure 22.
Merchants detected by IRAGS: (a) identified merchant without crowd interference; (b) identified merchants from the crowd.
The trial then set the guide robot, under the control of IRAGS, to autonomously approach the target merchant and prompt them to select and weigh the product specified by the robotic arm. Figure 23a shows the IRAGS-controlled robotic arm pointing out the relatively fresh target product for the merchant; Figure 23b shows the merchant grabbing the product according to the instruction of the robotic arm.
Figure 23.
IRAGS guided the merchant to select and weigh the product: (a) the robotic arm pointed out the fresher target product; (b) the merchant grabbed the product according to the instruction of the robotic arm.
5. Conclusions
This paper systematically designed a novel guide robot for farmers’ markets, which can quickly build high-precision maps, navigate along fixed routes, provide intelligent robotic arm guidance, and shop autonomously.
To address the complex dynamic environment of farmers’ markets, this study proposes the RFTPAD algorithm, which innovatively employs RFID-based artificial visual tags to calibrate the robot’s posture and applies deep learning to automatically detect and record a large amount of product information. Multiple trials conducted in a 1600 m² market demonstrate that the accuracy of the map built with RFTPAD is 23.9% higher than that of the map built with the classical Cartographer algorithm. Moreover, the average time spent detecting products with RFTPAD is 36.3% less than with the manual counting method. In terms of navigation, this study proposes the A*-FRN algorithm to control the robot to navigate along fixed routes, successfully preventing the robot from taking frequent long detours in a crowded farmers’ market. Compared with the general navigation method, the average driving trajectory length of the robot with A*-FRN is reduced by 23.3%.
Furthermore, to improve the robot’s intelligence, this paper innovatively equips the robot with a lightweight, low-cost, flexible robotic arm and specially designs the IRAGS algorithm to control it. In the shopping-guide trials, the robotic arm successfully assisted VI people in selecting fresh products, and its efficiency was double that of searching and touching with the fingers. In the autonomous shopping trials, IRAGS successfully controlled the robotic arm to guide merchants to select and weigh products and automatically brought the remotely paid products back to the VI people.
Author Contributions
Conceptualization, Y.C. and M.L.; methodology, Y.C. and M.L.; software, Y.C., J.R. and J.C.; validation, Y.C.; formal analysis, Y.C., J.R. and J.C.; investigation, Y.C., Z.W. and M.L.; resources, Y.C. and M.L.; data curation, Y.C. and W.G.; writing—original draft preparation, Y.C.; writing—review and editing, Y.C., W.G. and M.L.; Y.C., J.R. and J.C.; supervision, Y.C. and M.L.; project administration, Y.C. and M.L.; funding acquisition, Y.C. and M.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Open Foundation of Belt and Road Joint Laboratory on Measurement and Control Technology, Huazhong University of Science and Technology (MCT202306).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data presented in this study are available on request from the author Yunhua Chen.
Acknowledgments
We thank Xiaozhen Chen (a staff member of the Baishou Town government, Yongfu County, Guilin, China) for her help at all stages of the project and her support in donating some core experimental equipment.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Zou, W.; Hua, G.; Zhuang, Y.; Tian, S. Real-time passable area segmentation with consumer RGB-D cameras for the Visually Impaired. IEEE Trans. Instrum. Meas. 2023, 72, 2513011.
- Plikynas, D.; Žvironas, A.; Gudauskis, M.; Budrionis, A.; Daniušis, P.; Sliesoraitytė, I. Research advances of indoor navigation for blind people: A brief review of technological instrumentation. IEEE Instrum. Meas. Mag. 2020, 23, 22–32.
- Wang, J.; Liu, E.; Geng, Y.; Qu, X.; Wang, R. A survey of 17 indoor travel assistance systems for blind and visually impaired people. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 134–148.
- Berka, J.; Balata, J.; Mikovec, Z. Optimizing the number of bluetooth beacons with proximity approach at decision points for intermodal navigation of blind pedestrians. In Proceedings of the 2018 Federated Conference on Computer Science and Information Systems (FedCSIS), Poznan, Poland, 9–12 September 2018.
- Kunhoth, J.; Karkar, A.G.; Al-Maadeed, S.; Al-Attiyah, A. Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments. Int. J. Health Geogr. 2019, 18, 29.
- AL-Madani, B.; Orujov, F.; Maskeliūnas, R.; Damaševičius, R.; Venčkauskas, A. Fuzzy logic type-2 based wireless indoor localization system for navigation of visually impaired people in buildings. Sensors 2019, 19, 2114.
- Ahmetovic, D.; Gleason, C.; Ruan, C.; Kitani, K.; Asakawa, C. NavCog: A navigational cognitive assistant for the blind. In Proceedings of the 18th International Conference on Human Computer Interaction with Mobile Devices and Services, New York, NY, USA, 6 September 2016.
- Ivanov, R. Indoor navigation system for visually impaired. In Proceedings of the 11th International Conference on Computer Systems and Technologies and Workshop for PhD Students in Computing on International Conference on Computer Systems and Technologies, Sofia, Bulgaria, 17–18 June 2010.
- Gomes, J.P.; Sousa, J.P.; Cunha, C.R.; Morais, E.P. An indoor navigation architecture using variable data sources for blind and visually impaired persons. In Proceedings of the 2018 13th Iberian Conference on Information Systems and Technologies (CISTI), Caceres, Spain, 13–16 June 2018.
- Zhang, J.; Yang, X.; Wang, W.; Guan, J.; Ding, L.; Lee, V. Automated guided vehicles and autonomous mobile robots for recognition and tracking in civil engineering. Autom. Constr. 2023, 146, 104699.
- Škrabánek, P.; Vodička, P. Magnetic strips as landmarks for mobile robot navigation. In Proceedings of the 2016 International Conference on Applied Electronics (AE), Pilsen, Czech Republic, 6–7 September 2016.
- Cruz, J.D.; Domingo, C.B.; Garcia, R.G. Automated service robot for catering businesses using arduino mega 2560. In Proceedings of the 2023 15th International Conference on Computer and Automation Engineering (ICCAE), Sydney, Australia, 3–5 March 2023.
- Pauly, L.; Baiju, M.V.; Viswanathan, P.; Jose, P.; Paul, D.; Sankar, D. CAMbot: Customer assistance mobile manipulator robot. In Proceedings of the 2015 IEEE Bombay Section Symposium (IBSS), Mumbai, India, 10–11 September 2015.
- Nechyporenko, N.; Morales, A.; Cervera, E.; Pobil, A.P.D. A practical approach for picking items in an online shopping warehouse. Appl. Sci. 2021, 11, 5805.
- Liu, J. Application of RFID technology in SLAM. South-Cent. Univ. Natl. 2008, 27, 84–87.
- DiGiampaolo, E.; Martinelli, F.; Romanelli, F. Exploiting the Orientation of Trilateration UHF RFID Tags in Robot Localization and Mapping. In Proceedings of the 2022 IEEE 12th International Conference on RFID Technology and Applications (RFID-TA), Cagliari, Italy, 12–14 September 2022.
- Kroumov, V.; Okuyama, K. Localisation and Position Correction for Mobile Robot using Artificial Visual Landmarks. Int. J. Adv. Mechatron. Syst. 2012, 4, 212–217.
- Dwijotomo, A.; Rahman, M.A.A.; Ariff, M.H.M.; Zamzuri, H.; Azree, W.M.H.W. Cartographer SLAM method for optimization with an adaptive multi-distance scan scheduler. Appl. Sci. 2020, 10, 347.
- Gao, Q.; Jia, H.; Liu, Y.; Tian, X. Design of mobile robot based on cartographer slam algorithm. In Proceedings of the 2019 2nd International Conference on Informatic, Hangzhou, China, 6 May 2019.
- Zhang, F.; Zheng, S.; He, Y.; Shao, X. The research on attitude correction method of robot monocular vision positioning system. In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, 3–7 December 2016.
- Venkateswara, H.; Chakraborty, S.; Panchanathan, S. Deep learning system for domain adaptation in computer vision: Learning transferable feature representations. IEEE Signal Process. Mag. 2017, 34, 117–129.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018.
- Gul, F.; Rahiman, W.; Alhady, S.S.N. A comprehensive study for robot navigation techniques. Cogent Eng. 2019, 6, 1632046.
- Peng, J.; Huang, Y.; Luo, G. Robot path planning based on improved A* algorithm. Cybern. Inf. Technol. 2015, 15, 171–180.
- Ayawli, B.B.K.; Chellali, R.; Appiah, A.Y.; Kyeremeh, F. An overview of nature-inspired, conventional, and hybrid methods of autonomous vehicle path planning. J. Adv. Transp. 2018, 2018, 8269698.
- Canadas-Aranega, F.; Moreno, J.C.; Blanco-Claraco, J.L. A PID-based control architecture for mobile robot path planning in greenhouses. IFAC Pap. 2024, 58, 503–508.
- Lin, J.; Zheng, R.; Zhang, Y.; Feng, J.; Li, W.; Luo, K. CFHBA-PID algorithm: Dual-loop PID balancing robot attitude control algorithm based on complementary factor and honey badger algorithm. Sensors 2022, 22, 4492.
- Molinos, E.J.; Llamazares, A.; Ocaña, M. Dynamic window based approaches for avoiding obstacles in moving. Robot. Auton. Syst. 2019, 118, 112–130.
- Guruji, A.K.; Agarwal, H.; Parsediya, D.K. Time-efficient A* algorithm for robot path planning. Procedia Technol. 2016, 23, 144–149.
- Prabhu, S.G.R.; Kyberd, P.; Wetherall, J. Investigating an A-star Algorithm-based Fitness Function for Mobile Robot Evolution. In Proceedings of the 2018 22nd International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 10–12 October 2018.
- Singh, A.; Singla, A.; Soni, S. Extension of DH parameter method to hybrid manipulators used in robot-assisted surgery. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2015, 229, 703–712.
- Mohammed, A.; Abdul Ameer, H.R.; Abdul-Zahra, D.S. Design of a Linear Mathematical Model to Control the Manipulator of a Robotic Arm with a Hexagonal Degree of Freedom. In Proceedings of the 2022 3rd Information Technology to Enhance e-Learning and Other Application (IT-ELA), Baghdad, Iraq, 27–28 December 2022.
- Khan, A.; Xiangming, C.; Xingxing, Z.; Quan, W.L. Closed form inverse kinematics solution for 6-DOF underwater manipulator. In Proceedings of the 2015 International Conference on Fluid Power and Mechatronics (FPM), Harbin, China, 5–7 August 2015.
- Huang, Y.; Jin, C. Path Planning Based on Improved RRT Algorithm. In Proceedings of the 2023 2nd International Symposium on Control Engineering and Robotics (ISCER), Hangzhou, China, 17–19 February 2023.
- Konyashov, V.V.; Sergeev, A.S.; Kolganov, O.A.; Fedorov, A.V.; Konyashova, K.A.; Bychenok, V.V. Study of the Applicability of the Method of Optical Triangulation for Evaluation of the Geometric Parameters and Cleanliness of the Surface of Products of Complex Shapes. In Proceedings of the 2023 7th International Conference on Information, Control, and Communication Technologies (ICCT), Astrakhan, Russian Federation, 2–6 October 2023.
- Fang, S.; Huang, X.; Chen, H.; Xi, N. Dual-arm robot assembly system for 3C product based on vision guidance. In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, 3–7 December 2016.
- Min, F.; Wang, Y.; Zhu, S. People counting based on multi-scale region adaptive segmentation and depth neural network. In Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition, Xiamen, China, 26–28 June 2020.
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
- Yuan, F.; Xia, Z.; Tang, B.; Yin, Z.; Shao, X.; He, X. Calibration accuracy evaluation method for multi-camera measurement systems. Measurement 2025, 242, 116311.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).