Path Planning for Autonomous Mobile Robot Using Intelligent Algorithms
Abstract
1. Introduction
2. Related Work
2.1. ROS Autonomous Mobile Robots
2.2. Learning Algorithms
2.3. Investigation Tendencies
2.4. Hardware Comparison
- Only a few articles mention an actual implementation of the trained algorithms on embedded systems [41].
- There is a lack of research on training CNN models directly on embedded systems, even though doing so simplifies integration, reduces costs, and adds portability. The NVIDIA Jetson Nano is an option for centralizing the AMR's processing.
3. Materials and Methods
3.1. Materials
- Jetson Nano: This embedded system from NVIDIA works as the central computer that performs all the important calculations, deep learning training, and remote communication through the Robot Operating System (ROS). The Jetson Nano has a GPU with 128 CUDA cores, enabling the robot to run real-time object recognition through a deep learning model. This cognitive independence is fundamental to reaching an adequate level of autonomy for the robot.
- ESP32: This development kit from Espressif Systems has a dual-core, 32-bit microprocessor and performs better than other low-cost microcontrollers. The embedded system establishes serial communication with this device, which continuously reads the two encoders and other sensors while generating the PWM output control for the two direct current (DC) motors. Another important characteristic of the ESP32 is its connectivity: it offers both Bluetooth and Wi-Fi, which is particularly useful for remote control of the robot. Communication with the Jetson Nano, however, is serial, through a USB port of the embedded system.
- Motors: The motors in the JetBot development kit, also from Waveshare, are regular 12 VDC brushed motors coupled to a 30:1 gearbox that provides enough torque for the robot to move smoothly in most indoor environments. The robot's locomotion is differential, requiring two driven wheels and two passive caster wheels. This design gives the robot better stability than a single caster wheel, although the added friction can hinder movement in certain situations.
- Sensors: Each DC motor has a two-channel encoder mounted directly on the shaft. Given the gearbox properties described earlier, the encoder readings are used to infer the robot's position and velocity from the local movement of the left and right wheels. Each encoder provides 330 pulses per revolution, and because it has two separate channels, the direction of rotation of each wheel can also be tracked (a minimal tick-to-velocity sketch follows this list). The sensor best suited for estimating the robot's position, however, is the RPLidar from Slamtec: a rotating device that emits an IR light beam and reads its reflection to estimate the distance between the sensor and its surroundings.
- Camera: The camera in the prototype kit is an IMX219-160 CSI camera manufactured by Waveshare for the Jetson Nano or Raspberry Pi. It has an 8 MP resolution and a 160° angle of view.
- Batteries: As briefly explained, the battery module was modified to increase the robot's level of autonomy. The original battery module of the JetBot ROS AI kit is equipped with three rechargeable 18650 batteries (3.7 V, 9900 mAh, from the manufacturer GTL Everfire) in series that power the whole hardware, including the embedded system, the microcontroller, and the motion system. Working with a single battery could cause difficulties, mainly because it limits the robot's autonomy and its ability to stay powered for long periods. According to the manufacturer, mishandling the battery can cause a fault in the embedded system, and because this work focuses on the robot's autonomy, it is important to ensure the safety of the power modules, as shown in Figure 3. The uninterruptible power supply (UPS) module provides energy to the Jetson Nano. It includes four 18650 batteries with their respective protection and monitoring circuits; the UPS module communicates with the Jetson Nano through the I2C protocol. The second battery is an 11.1 V, 2200 mAh LiPo battery produced by Floureon, suitable for autonomous robots such as drones [47]. This battery isolates the control system from the rest of the hardware: it improves safety, offers a reliable workaround for possible communication problems, and extends the working time of the embedded system, which communicates remotely on a constant basis.
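To make the encoder arithmetic above concrete, the following is a minimal Jetson-side sketch that converts the cumulative tick counts streamed by the ESP32 into wheel angular velocities. It is illustrative only: the serial message format (`left_ticks,right_ticks` per line), the port name and baud rate, and the assumption that the 330 pulses per revolution are measured at the wheel's output shaft are ours, not details taken from the paper.

```python
import math
import time

import serial  # pyserial: pip install pyserial

PULSES_PER_REV = 330  # encoder pulses per wheel revolution (Section 3.1)

# Hypothetical link: the ESP32 streams cumulative counts as
# "left_ticks,right_ticks\n" over the USB serial port described above.
port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0)

prev_left = prev_right = 0
prev_t = time.monotonic()

while True:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    left_ticks, right_ticks = (int(v) for v in line.split(","))
    now = time.monotonic()
    dt = now - prev_t
    if dt <= 0:
        continue
    # Tick deltas -> wheel angular velocities in rad/s.
    omega_l = (left_ticks - prev_left) / PULSES_PER_REV * math.tau / dt
    omega_r = (right_ticks - prev_right) / PULSES_PER_REV * math.tau / dt
    prev_left, prev_right, prev_t = left_ticks, right_ticks, now
    print(f"omega_l = {omega_l:.3f} rad/s, omega_r = {omega_r:.3f} rad/s")
```

These wheel velocities correspond to the angular velocities that the kinematic model in Section 3.2.1 consumes.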
3.2. Methods
3.2.1. Robot’s Kinematic Model
- $\dot{\xi}_I$ is the robot's velocity in the global coordinate system ($X_I$, $Y_I$);
- $\dot{\xi}_R$ is the robot's velocity in the local coordinate system ($X_R$, $Y_R$);
- $v$ is the linear velocity along the robot's local $X$-axis;
- $\omega$ is the robot's angular velocity around the $Z$-axis;
- $r$ is the radius of the left and right wheels;
- $l$ is the distance between the left and right wheels;
- $\omega_r$ is the angular velocity of the right wheel;
- $\omega_l$ is the angular velocity of the left wheel;
- $\dot{x}$ is the linear velocity along the global $X$-axis;
- $\dot{y}$ is the linear velocity along the global $Y$-axis;
- $\dot{\theta}$ is the angular velocity around the global $Z$-axis.
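Taken together, these symbols define the standard differential-drive kinematic model (cf. Siegwart et al. [48]). The equations below are a reconstruction that assumes the conventional formulation; read them as a sketch of the model rather than a verbatim reproduction of the paper's own equations:

```latex
% Local-frame velocities from the wheel angular velocities:
v = \frac{r\,(\omega_r + \omega_l)}{2}, \qquad
\omega = \frac{r\,(\omega_r - \omega_l)}{l}

% Global-frame velocities, i.e., \dot{\xi}_I = R(\theta)^{-1}\,\dot{\xi}_R,
% where \theta is the robot's heading:
\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega
```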
3.2.2. Path Planning Algorithm
3.2.3. Velocity Controller
3.2.4. Differential AMR’s Base Control
3.2.5. Environment Mapping
3.2.6. Artificial Vision
4. Results
4.1. Hardware Implementation
4.2. Software
4.3. Obstacle Detection
5. Discussion
6. Conclusions
Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
AMR | Autonomous Mobile Robot
ANN | Artificial Neural Network
CNN | Convolutional Neural Network
CPU | Central Processing Unit
DC | Direct Current
DL | Deep Learning
GPU | Graphics Processing Unit
IMU | Inertial Measurement Unit
LiDAR | Light Detection and Ranging
ML | Machine Learning
PID | Proportional Integral Derivative
RL | Reinforcement Learning
ROS | Robot Operating System
SLAM | Simultaneous Localization and Mapping
UPS | Uninterruptible Power Supply
YOLOv3 | You Only Look Once version 3
I2C | Inter-Integrated Circuit
NASA | National Aeronautics and Space Administration
References
- Das, S.; Mishra, S.K. A Machine Learning approach for collision avoidance and path planning of mobile robot under dense and cluttered environments. Comput. Electr. Eng. 2022, 103, 108376.
- Cui, J.; Nie, G. Motion Route Planning and Obstacle Avoidance Method for Mobile Robot Based on Deep Learning. J. Electr. Comput. Eng. 2022, 2022, 5739765.
- Kheirandish, M.; Yazdi, E.A.; Mohammadi, H.; Mohammadi, M. A fault-tolerant sensor fusion in mobile robots using multiple model Kalman filters. Robot. Auton. Syst. 2023, 161, 104343.
- Ishihara, Y.; Takahashi, M. Empirical study of future image prediction for image-based mobile robot navigation. Robot. Auton. Syst. 2022, 150, 104018.
- Injarapu, A.S.H.H.; Gawre, S.K. A Survey of Autonomous Mobile Robot Path Planning Approaches. In Proceedings of the International Conference on Recent Innovations in Signal Processing and Embedded Systems (RISE), Bhopal, India, 27–29 October 2017; pp. 624–628.
- Zafar, M.N.; Mohanta, J.C. Methodology for Path Planning and Optimization of Mobile Robots: A Review. Procedia Comput. Sci. 2018, 133, 141–152.
- Keirsey, D.; Koch, E.; McKisson, J.; Meystel, A.; Mitchell, J. Algorithm of navigation for a mobile robot. In Proceedings of the 1984 IEEE International Conference on Robotics and Automation, Atlanta, GA, USA, 13–15 March 1984; Volume 1, pp. 574–583.
- Nilsson, N.J. A mobile automation: An application of artificial intelligence techniques. In Proceedings of the 1st International Joint Conference on Artificial Intelligence (IJCAI-69), Washington, DC, USA, 7–9 May 1969; pp. 509–520.
- Miller, J.A. Autonomous Guidance and Control of a Roving Robot; Guidance and Control Section; Jet Propulsion Laboratory: Pasadena, CA, USA, 1977.
- Auh, E.; Kim, J.; Joo, Y.; Park, J.; Lee, G.; Oh, I.; Pico, N.; Moon, H. Unloading sequence planning for autonomous robotic container-unloading system using A-star search algorithm. Eng. Sci. Technol. Int. J. 2024, 50, 101610.
- Yang, L.; Bi, J.; Yuan, H. Dynamic Path Planning for Mobile Robots with Deep Reinforcement Learning. IFAC-PapersOnLine 2022, 55, 19–24.
- Zhang, L.; Cai, Z.; Yan, Y.; Yang, C.; Hu, Y. Multi-agent policy learning-based path planning for autonomous mobile robots. Eng. Appl. Artif. Intell. 2024, 129, 107631.
- Kiran, B.R.; Sobh, I.; Talpaert, V.; Mannion, P.; Sallab, A.A.A.; Yogamani, S.; Pérez, P. Deep Reinforcement Learning for Autonomous Driving: A Survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 4909–4926.
- Wang, Y.; Lu, C.; Wu, P.; Zhang, X. Path planning for unmanned surface vehicle based on improved Q-Learning algorithm. Ocean Eng. 2024, 292, 116510.
- Zhou, Q.; Lian, Y.; Wu, J.; Zhu, M.; Wang, H.; Cao, J. An optimized Q-Learning algorithm for mobile robot local path planning. Knowl.-Based Syst. 2024, 286, 111400.
- Qin, H.; Shao, S.; Wang, T.; Yu, X.; Jiang, Y.; Cao, Z. Review of Autonomous Path Planning Algorithms for Mobile Robots. Drones 2023, 7, 211.
- Singh, R.; Ren, J.; Lin, X. A Review of Deep Reinforcement Learning Algorithms for Mobile Robot Path Planning. Vehicles 2023, 5, 1423–1451.
- Kou, X.; Liu, S.; Cheng, K.; Qian, Y. Development of a YOLO-V3-based model for detecting defects on steel strip surface. Measurement 2021, 182, 109454.
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- de Carvalho, K.B.; Batista, H.B.; Oliveira, I.L.D.; Brandao, A.S. A 3D Q-Learning Algorithm for Offline UAV Path Planning with Priority Shifting Rewards. In Proceedings of the 2022 19th Latin American Robotics Symposium, 2022 14th Brazilian Symposium on Robotics and 2022 13th Workshop on Robotics in Education (LARS-SBR-WRE 2022), São Bernardo do Campo, Brazil, 18–19 October 2022; pp. 169–174.
- Zheng, X.; Wu, Y.; Zhang, L.; Tang, M.; Zhu, F. Priority-aware path planning and user scheduling for UAV-mounted MEC networks: A deep reinforcement learning approach. Phys. Commun. 2024, 62, 102234.
- Albonico, M.; Dordevic, M.; Hamer, E.; Malavolta, I. Software engineering research on the Robot Operating System: A systematic mapping study. J. Syst. Softw. 2023, 197, 111574.
- Macenski, S.; Foote, T.; Gerkey, B.; Lalancette, C.; Woodall, W. Robot Operating System 2: Design, architecture, and uses in the wild. Sci. Robot. 2022, 7, eabm6074.
- Piyapunsutti, S.; Guzman, E.L.D.; Chaichaowarat, R. Navigating Mobile Manipulator Robot for Restaurant Application Using Open-Source Software. In Proceedings of the 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), Koh Samui, Thailand, 4–9 December 2023.
- Huang, B.; Xie, J.; Yan, J. Inspection Robot Navigation Based on Improved TD3 Algorithm. Sensors 2024, 24, 2525.
- Estefo, P.; Simmonds, J.; Robbes, R.; Fabry, J. The Robot Operating System: Package reuse and community dynamics. J. Syst. Softw. 2019, 151, 226–242.
- Lamini, C.; Fathi, Y.; Benhlima, S. H-MAS architecture and reinforcement learning method for autonomous robot path planning. In Proceedings of the 2017 Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 17–19 April 2017; pp. 1–7.
- Ruan, X.; Lin, C.; Huang, J.; Li, Y. Obstacle avoidance navigation method for robot based on deep reinforcement learning. In Proceedings of the 2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 4–6 March 2022; Volume 6, pp. 1633–1637.
- Han, H.; Wang, J.; Kuang, L.; Han, X.; Xue, H. Improved Robot Path Planning Method Based on Deep Reinforcement Learning. Sensors 2023, 23, 5622.
- Chen, Y.; Liang, L. SLP-Improved DDPG Path-Planning Algorithm for Mobile Robot in Large-Scale Dynamic Environment. Sensors 2023, 23, 3521.
- del R. Millán, J. Reinforcement learning of goal-directed obstacle-avoiding reaction strategies in an autonomous mobile robot. Robot. Auton. Syst. 1995, 15, 275–299.
- VOSviewer. Available online: https://www.vosviewer.com/ (accessed on 1 June 2023).
- Kastner, L.; Bhuiyan, T.; Le, T.A.; Treis, E.; Cox, J.; Meinardus, B.; Kmiecik, J.; Carstens, R.; Pichel, D.; Fatloun, B.; et al. Arena-Bench: A Benchmarking Suite for Obstacle Avoidance Approaches in Highly Dynamic Environments. IEEE Robot. Autom. Lett. 2022, 7, 9477–9484.
- Wang, B.; Liu, Z.; Li, Q.; Prorok, A. Mobile robot path planning in dynamic environments through globally guided reinforcement learning. IEEE Robot. Autom. Lett. 2020, 5, 6932–6939.
- Park, M.; Ladosz, P.; Oh, H. Source Term Estimation Using Deep Reinforcement Learning with Gaussian Mixture Model Feature Extraction for Mobile Sensors. IEEE Robot. Autom. Lett. 2022, 7, 8323–8330.
- Zheng, Z.; Cao, C.; Pan, J. A Hierarchical Approach for Mobile Robot Exploration in Pedestrian Crowd. IEEE Robot. Autom. Lett. 2022, 7, 175–182.
- Chen, Y.; Rosolia, U.; Ubellacker, W.; Csomay-Shanklin, N.; Ames, A. Interactive Multi-Modal Motion Planning with Branch Model Predictive Control. IEEE Robot. Autom. Lett. 2022, 7, 5365–5372.
- Yin, Y.; Chen, Z.; Liu, G.; Guo, J. A Mapless Local Path Planning Approach Using Deep Reinforcement Learning Framework. Sensors 2023, 23, 2036.
- Park, M.; Lee, S.; Hong, J.; Kwon, N. Deep Deterministic Policy Gradient-Based Autonomous Driving for Mobile Robots in Sparse Reward Environments. Sensors 2022, 22, 9574.
- Kozjek, D.; Malus, A.; Vrabič, R. Reinforcement-learning-based route generation for heavy-traffic autonomous mobile robot systems. Sensors 2021, 21, 4809.
- Pei, M.; An, H.; Liu, B.; Wang, C. An Improved Dyna-Q Algorithm for Mobile Robot Path Planning in Unknown Dynamic Environment. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 4415–4425.
- Sivaranjani, A.; Vinod, B. Artificial Potential Field Incorporated Deep-Q-Network Algorithm for Mobile Robot Path Prediction. Intell. Autom. Soft Comput. 2023, 35, 1135–1150.
- Wang, X.; Liu, J.; Nugent, C.; Cleland, I.; Xu, Y. Mobile agent path planning under uncertain environment using reinforcement learning and probabilistic model checking. Knowl.-Based Syst. 2023, 264, 110355.
- Yeom, K. Collision avoidance for a car-like mobile robots using deep reinforcement learning. Int. J. Emerg. Technol. Adv. Eng. 2021, 11, 22–30.
- Hu, J.; Niu, H.; Carrasco, J.; Lennox, B.; Arvin, F. Voronoi-Based Multi-Robot Autonomous Exploration in Unknown Environments via Deep Reinforcement Learning. IEEE Trans. Veh. Technol. 2020, 69, 14413–14423.
- Xiang, J.; Li, Q.; Dong, X.; Ren, Z. Continuous Control with Deep Reinforcement Learning for Mobile Robot Navigation. In Proceedings of the 2019 Chinese Automation Congress (CAC), Hangzhou, China, 22–24 November 2019; pp. 1501–1506.
- Vohra, D.; Garg, P.; Ghosh, S. Power Management of Drones. Lect. Notes Civ. Eng. 2023, 304, 555–569.
- Scaramuzza, D.; Siegwart, R.; Nourbakhsh, I.R. Introduction to Autonomous Mobile Robots, 2nd ed.; MIT Press: Cambridge, MA, USA, 2011.
- He, S.; Song, T.; Wang, P.; Ding, C.; Wu, X. An Enhanced Adaptive Monte Carlo Localization for Service Robots in Dynamic and Featureless Environments. J. Intell. Robot. Syst. 2023, 108, 6.
- Automatic Obstacle Avoiding—Waveshare Wiki. Available online: https://www.waveshare.com/wiki/Automatic_Obstacle_Avoiding (accessed on 15 January 2023).
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), Las Vegas, NV, USA, 27–30 June 2016.
- Gao, H.; Zhou, R.; Tomizuka, M.; Xu, Z. Online Learning Based Mobile Robot Controller Adaptation for Slip Reduction. IFAC-PapersOnLine 2023, 56, 1301–1306.
Reference | Algorithm/Model | Hardware | Results | Year |
---|---|---|---|---|
This work | ResNet18 and YOLOv3 running on ROS Melodic | NVIDIA Jetson Nano, quad-core MPCore processor, 128-core Maxwell GPU at 1098 MHz, 4 GB DDR4 RAM | The vision system prevents collisions with dynamic obstacles, reaching an accuracy of 98.5%, a recall of 97%, and an F1-score of 98.5% | 2024
Wang et al. [43] | QEA-Learning | Intel(R) Core(TM) i5-7200 CPU at 2.50 GHz and 2.71 GHz | The QEA-Learning algorithm reduces the probability of failure by assigning less weight to the reward function | 2023
Yang et al. [11] | DRL, Soft Actor-Critic (SAC), Proximal Policy Optimization (PPO) | Intel Xeon(R) CPU E5-2650 v4 at 2.20 GHz | SAC performed better than PPO by finding a higher maximum reward and a better exploration–exploitation balance | 2022
Ruan et al. [28] | DRL, Double Q-Network (D3QN) | NVIDIA RTX 3080 GPU, Robot Operating System (ROS) | The loss function stabilizes after 1000 training trials | 2022
De Carvalho et al. [20] | Q-Learning | 16 GB of RAM, Intel Core i7-7700 | Processing time under 2 min for 20 × 20 × 5 cell maps | 2022 |
Yeom [44] | DRL | Raspberry Pi 3, Quad Core 1.2 GHz, Intel 4-core i5 7500 CPU, 3.80 GHz, 32 GB RAM | DRL reduced path length by approximately 20% compared to the traditional Dynamic Window Approach (DWA) | 2021 |
Hu et al. [45] | Deep Deterministic Policy Gradient (DDPG), Prioritized Experience Replay (PER) | Raspberry Pi 3, NVIDIA GTX 1080 GPU, Intel Core i9 CPU with 2.9 GHz | The algorithm performed 20 successful missions and required 40 min of training time | 2020 |
Xiang et al. [46] | DRL, Soft Actor-Critic (SAC) | NVIDIA GeForce RTX 2070 GPU, 3.70 GHz eight-core AMD Ryzen 7 2700X, 16 GB RAM, ROS | The model was compared against traditional gmapping navigation, reaching competitive results after hours of training | 2019 |
Parameter | Value |
---|---|
Optimizer | SGD |
Learning rate | 0.001 |
Momentum | 0.9 |
Epochs | 10 |
Batch size | 8 |
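These hyperparameters map onto a standard PyTorch fine-tuning setup for the ResNet18 classifier. The sketch below is a reconstruction under stated assumptions, not the authors' training script: the two-class head (blocked/free), the `dataset/` folder layout, and the input transform are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 2  # blocked / free, as in the obstacle-detection results

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: dataset/{blocked,free}/*.jpg
train_set = datasets.ImageFolder("dataset", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Values from the table above: SGD, lr = 0.001, momentum = 0.9,
# 10 epochs, batch size 8.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    running_loss = 0.0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch + 1}: loss = {running_loss / len(loader):.4f}")
```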
Class Name | Precision | 1-Precision | Recall | 1-Recall | F1-Score
---|---|---|---|---|---
Blocked | 0.9691 | 0.0309 | 1.0000 | 0.0000 | 0.9843
Free | 1.0000 | 0.0000 | 0.9717 | 0.0283 | 0.9856
Accuracy | 0.9850 | | | |
Misclassification Rate | 0.0150 | | | |
Macro-F1 | 0.9850 | | | |
Weighted-F1 | 0.9850 | | | |
Class Name | Precision | 1-Precision | Recall | 1-Recall | F1-Score
---|---|---|---|---|---
Blocked | 1.0000 | 0.0000 | 0.9200 | 0.0799 | 0.9583
Free | 0.9259 | 0.0740 | 1.0000 | 0.0000 | 0.9615
Accuracy | 0.9600 | | | |
Misclassification Rate | 0.0400 | | | |
Macro-F1 | 0.9599 | | | |
Weighted-F1 | 0.9599 | | | |
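Both tables follow from simple confusion-matrix arithmetic. The sketch below reproduces the formulas for the binary blocked/free case; the example counts are hypothetical values chosen so that, with Blocked as the positive class and an assumed 50-image evaluation set, they reproduce the second table (precision 1.0, recall 0.92, F1 0.9583, accuracy 0.96).

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Precision/recall/F1 for the positive class, plus overall accuracy."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {
        "precision": round(precision, 4),
        "1-precision": round(1 - precision, 4),
        "recall": round(recall, 4),
        "1-recall": round(1 - recall, 4),
        "f1": round(f1, 4),
        "accuracy": round(accuracy, 4),
        "misclassification": round(1 - accuracy, 4),
    }

# Hypothetical counts: 23 blocked images correctly flagged, 2 missed,
# 25 free images correctly passed, 0 false alarms.
print(binary_metrics(tp=23, fp=0, fn=2, tn=25))
```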