JUNO Project: Deployment and Validation of a Low-Cost Cloud-Based Robotic Platform for Reliable Smart Navigation and Natural Interaction with Humans in an Elderly Institution
Abstract
1. Introduction
2. Materials and Methods
2.1. Hardware and Software Design
2.1.1. The Robot JUNO
- The height of the robot must place the tactile screen at a position easily reachable by users, who are often in wheelchairs. The screen is therefore mounted on a typical TV stand that can be manually adjusted along the x, y and z axes.
- The tactile screen includes hidden speakers, and the RGB-D sensor provides microphones, so playing and recording sounds is possible.
- The robot must be heavy (around 30 kg), since vulnerable users may lean on it and keeping balance is critical; stability is an essential requirement in this context.
- The robot has no sharp corners or parts that protrude beyond the imaginary cylinder used to ensure collision avoidance. This is very important because the elderly residents can easily suffer injuries, even from a light impact caused by an unexpected collision with the robot.
2.1.2. Functionality of JUNO and Software Architecture Overview
- Being teleoperated through simple and complex voice commands, with no need for Internet access or smart speakers.
- Changing between teleoperation and autonomous navigation modes by using voice commands, the tactile screen, and a web application capable of interacting with JUNO through the private Intranet.
- Navigating to a specific location on a semantic map superposed over a metric map obtained with the gmapping ROS package. The semantic map contains a collection of topologically relevant positions, defined during the installation of the robotic platform. Such locations can be specified in natural language through an NLP system that directly transforms complex navigation orders into ROS-based code, which publishes velocity commands on a topic as standard Twist messages from the geometry_msgs package.
- Finding a specific elderly user through face detection and recognition, and presenting the cognitive stimulation exercises programmed by the therapist. Once the user finishes the exercise, JUNO returns to its base.
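As an illustration of the voice-command pipeline above, a minimal mapping from recognized Spanish command words to velocity pairs can be sketched as follows. This is not the real system (which uses Vosk and spaCy, as described in Section 2.3.2); the command vocabulary and speed values here are assumptions chosen for the example, and the pairs stand in for the linear.x and angular.z fields of a geometry_msgs/Twist message.

```python
# Hypothetical mapping from recognized Spanish voice commands to
# (linear m/s, angular rad/s) pairs, analogous to the linear.x and
# angular.z fields of a geometry_msgs/Twist message.
COMMANDS = {
    "avanza": (0.3, 0.0),       # move forward
    "retrocede": (-0.2, 0.0),   # move backward
    "gira izquierda": (0.0, 0.5),
    "gira derecha": (0.0, -0.5),
    "para": (0.0, 0.0),         # stop
}

def command_to_velocity(text):
    """Return the velocity pair for a recognized command, or None."""
    return COMMANDS.get(text.strip().lower())
```

In the real robot, the resulting pair would be published on the velocity topic; unrecognized sentences (returning None) would simply be ignored.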
2.2. Mode of Operation
2.3. Description of the Software Components
2.3.1. Autonomous Navigation Modules Existing in the ROS Framework
- ROS gmapping package: It allows the metric map to be generated during the initial installation stage by using the FastSLAM method for performing SLAM (Simultaneous Localization and Mapping) [55]. The obtained map is then saved by using the node map_saver found in the ROS map_server package.
- ROS amcl package: It implements the adaptive Monte Carlo localization approach, which uses a particle filter to track the pose of a robot against a known metric map published on a topic that stores OccupancyGrid messages defined in the ROS nav_msgs package. The amcl package’s authors used several algorithms described in [24] to define, among others, the odometry sample motion model and the beam range finder model. This requires adjusting a large number of configuration values, such as the maximum and minimum number of particles, or the parameters that define the laser and odometry models [56]. These parameters were tuned empirically after testing the robot in different situations, both in the laboratory and in the final real environment.
- ROS move_base package: Once the robot can be localized on the global map, it must reach target locations safely when requested. The move_base package provides a ROS action that, given a target location in the world, drives the robot to it along a safe path, calculated autonomously from the corrected odometry (through amcl), the sensor data and the map of the environment defined as a 2D occupancy grid [56]. The move_base package supports planners adhering to the nav_core::BaseGlobalPlanner and nav_core::BaseLocalPlanner interfaces defined in the ROS nav_core package. It maintains two cost maps superposed over the global map, one for the global planner and another for the local planner; these take in data from the 2D laser device and are built from the information provided by the global metric map and an inflation radius exposed as a configurable parameter.
- ROS map_server package: After the installation procedure, the metric map calculated by gmapping is stored. Another map is obtained offline as an occupancy grid, where each cell represents a number assigned to a specific area (defined as a semantic concept, such as a room, a corridor, or another specific region of the free space). Each defined area is additionally represented by a location. The locations are connected by using a graph that defines a topological map superposed over the semantic and metric maps. The metric map is published by using the node map_server implemented in the map_server package.
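The relation between the metric and semantic maps can be illustrated with a small sketch. The grid values, area names and resolution below are assumptions for the example (the real maps come from gmapping and the offline labelling step described above): each cell of the semantic grid stores the identifier of the area it belongs to, so any metric position can be resolved to a semantic concept.

```python
# Toy semantic occupancy grid: each cell stores an area identifier.
AREAS = {0: "free space", 1: "room A", 2: "corridor"}
GRID = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [0, 0, 2, 2],
]
RESOLUTION = 0.5  # metres per cell (assumed value)

def area_at(x, y):
    """Map a metric position (metres) to its semantic area name."""
    col = int(x / RESOLUTION)
    row = int(y / RESOLUTION)
    if 0 <= row < len(GRID) and 0 <= col < len(GRID[0]):
        return AREAS[GRID[row][col]]
    return None  # outside the mapped environment
```

This lookup is the primitive the local planner relies on when it checks whether a point ahead of the robot lies in a different area (e.g., crossing from a room into a corridor).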
2.3.2. Techniques Used in Natural Interaction Software Modules
- Face recognition: The FaceNet system is used for this purpose since it “directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity” [59]. FaceNet has been trained on hundreds of millions of images; consequently, its face recognition accuracy exceeds 99%.
- Speech to Text (STT): The Vosk speech recognition toolkit [60,61] has been selected for STT, since it supports a wide number of languages and dialects; it works offline even on lightweight devices; it allows the vocabulary to be reconfigured to improve accuracy; and it supports speaker identification, although this feature is not used in this work yet. The chosen language is Spanish, and the selected model is a lightweight wideband one appropriate for Android devices and even Raspberry Pi computers.
- Text to Speech (TTS): A combination of the pyttsx3 library [62] and talkey [63] (an interface library with multi-language and multi-engine support) gives JUNO the capacity to speak simple Spanish phrases. In addition, the web application that allows elderly users to complete the cognitive stimulation program uses its own TTS system. JUNO’s tactile screen includes a pair of speakers to reproduce any kind of sound.
- Natural Language Processing (NLP): The spaCy library (a free solution written in Python) [64] is used to extract relevant information from the output generated by the software module that uses the Vosk STT. Such information is a collection of commands specifically related to the navigation process and to changing the operation mode of the robot using natural language.
- Attention degree analysis: Estimating the degree of attention while a user carries out a specific cognitive exercise is implemented with the MediaPipe FaceMesh solution, which is capable of estimating 468 3D face landmarks in real time, even on lightweight devices [65,66]. It uses an ML-based method to infer the 3D facial surface from RGB images. It consists of a pipeline that combines two real-time deep neural network models: a detector that operates on the full image and computes face locations, and a 3D face landmark model that operates on those locations to predict the 3D surface through regression. Once the face mesh is obtained, relevant landmarks are selected to obtain a measurement of the attention degree.
2.3.3. Integration of the Software Modules
- Local planner: A specific ROS node has been implemented to explicitly track the path generated by the global planner. The Pure Pursuit technique [67] is used to follow the path, adapting the behavior of the robot to different situations. If the robot is not aligned with the position to be tracked (located at a given lookahead L), it rotates on itself until it is aligned with the target position (a threshold of 15 degrees is used to decide that JUNO is aligned). The value of L and the velocity are adaptive: the linear velocity increases while the robot is aligned with the target point and decreases while it rotates. A secondary lookahead L2, defined as an empirically configured distance (0.75 m in this work), is used to inspect positions in front of the robot when it is aligned with the path. If the position at L2 belongs to a different area of the semantic map than the current position of the robot, the lookahead L decreases together with the linear velocity. This improves the behavior of the robot when traversing narrow spaces, such as doors.
- Supervisor node: This ROS node has been specifically implemented to centralize all the data sent by the rest of the nodes. According to such data, it is possible to change the mode of operation and, thus, send the appropriate velocity command to the robot.
- Low-level JUNO controller: A ROS node has been implemented for sending velocity commands and estimating the odometry from encoder readings. This node exchanges data frames (packets of bytes) with the firmware executed on a Raspberry Pi Pico, where the low-level PID controller runs. The firmware has also been fully designed by the authors, with the purpose of reducing the economic cost of the traction system while keeping a performance suitable for odometry calculation using the dead reckoning technique.
- Face detection and recognition node: This ROS node integrates the techniques for face detection and recognition described in Section 2.3.2. When the robot reaches a goal position considered a location where an elderly person is waiting to carry out his/her programmed cognitive stimulation exercises, this node is activated, and the robot rotates on itself until the face of the elderly user is detected and recognized. If the process fails, the robot sends a message to the smartphone application mentioned above.
- STT and NLP nodes: A ROS node uses the STT system described in Section 2.3.2 to acquire sound from a microphone and transcribe it into text. The text is then analyzed by the NLP system and, only if a meaningful sentence has been recognized, it is published on a specific topic that stores String messages defined in the std_msgs ROS package.
- Attention degree analyzer: It is implemented as a ROS node that publishes, through the corresponding topic, the direction of the user’s face. This direction is calculated from the three-dimensional coordinates of the set of representative points generated by the MediaPipe FaceMesh solution. Specifically, once such coordinates are obtained, the algorithm analyzes the variation of the depth value (z coordinate) of the eyes and mouth and determines where the user is looking.
- Smartphone application: It is implemented as a web application that runs in any browser supporting the JavaScript ROS bridge suite. It connects to the ROS ecosystem through a WebSocket; consequently, the smartphone must be connected to the same Intranet as the robot. This application allows the person in charge of the robot to perform an emergency stop, to teleoperate the robot if needed, and to receive messages from it in special situations, such as when an elderly person is not detected at the target location or when the robot cannot reach a navigation goal.
- User interface application: As the robot is equipped with a tactile screen, a typical user interface is presented when the robot is working. Such an interface provides options for the residence staff, making the use of JUNO easy. In particular, professionals can select elderly users and send the robot to different locations by simply selecting a place and pushing a button.
- Google Cloud backend and frontend: Although most data are stored locally by the robot itself, which needs no external resources for navigation or natural interaction, a backend and a frontend have been developed using the following Google Cloud services: Google App Engine, Datastore and the Endpoints Framework. The frontend is a website that allows installers to set up the robot in a residence for the first time. The backend simplifies storing data in the Datastore. The information acquired from the users and the environment, named the JUNO context, includes personal data, DL models and metrics, and semantic and topological maps, among others. This makes it easy to deploy a robot unit in another institution by simply downloading a new JUNO context and deleting all the local information previously stored for other contexts.
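The alignment behavior described for the local planner can be condensed into a short sketch. This is a minimal illustration rather than the actual implementation: the 15-degree alignment threshold comes from the text, while the 0.5 rad/s turn rate and the 0.4 m/s maximum linear speed are assumed values, and the adaptive lookahead reduction near area boundaries is omitted.

```python
import math

def pure_pursuit_step(pose, target, v_max=0.4,
                      aligned_threshold=math.radians(15)):
    """One control step of a simplified Pure Pursuit follower.

    pose   -- (x, y, heading in rad) of the robot
    target -- (x, y) path point located at the lookahead distance
    Returns (linear_velocity, angular_velocity).
    """
    x, y, theta = pose
    # Bearing from the robot to the target point.
    bearing = math.atan2(target[1] - y, target[0] - x)
    # Smallest signed angular error, wrapped to [-pi, pi].
    error = math.atan2(math.sin(bearing - theta), math.cos(bearing - theta))
    if abs(error) > aligned_threshold:
        # Not aligned: rotate in place towards the target (assumed rate).
        return 0.0, math.copysign(0.5, error)
    # Aligned: advance at full speed with a small heading correction.
    return v_max, error
```

Called at a fixed rate with the point at lookahead L as target, this reproduces the rotate-then-advance behavior described above; the full planner additionally shrinks L and the linear velocity when the point at L2 falls in a different semantic area.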
3. Results
3.1. Validation of the Navigation System
3.2. Validation of the Components Aimed at Natural Human–Robot Interaction
3.2.1. STT System Validation
3.2.2. Face Recognition System Validation
- Results for DNN.
- Results for KNN.
- Results for SVM.
- Results for XGBoost.
3.2.3. Validation of the NLP Module
3.3. Validation of the Whole System
4. Discussion
- Although the ROS framework helps developers implement and combine different robotic software packages in a decoupled and distributed manner, it requires an Intranet where the ROS master runs under a specific IP. In this work, this is not a problem because the system was deployed in the residence by the developers. However, if the installation procedure is to be automated, all the hardware components must connect to the same Intranet automatically. The authors are currently working on this issue; a possible solution is to share such an IP through the Google Cloud application, which allows the context of the robot to be saved.
- The semantic and topological maps are currently defined manually, but it would be desirable for installers to define the set of topological points through spoken commands while teleoperating the system during the installation step. The authors are also working on tuning an exploration algorithm that allows the robot to build the first map without the need to be teleoperated.
- In an elderly institution, many users are physically impaired, often rely on sticks, walkers or wheelchairs, and are susceptible to falls and serious health problems if injured; the local planner should be redesigned with these issues in mind. In fact, the authors are currently developing a new local planner that allows the robot to navigate while respecting the social protocol and evaluating the dynamic objects that appear in its path. To do this, the RGB-D sensor will be used together with the implemented face detector and, from the face, the system will detect the rest of the body of humans moving near the robot’s path. The local map will be updated according to the degree of frailty, inferred by analyzing the objects surrounding the person (sticks, walkers or similar). The robot will only approach a person if there is enough distance to ensure that no injury can be caused.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Aging and Health. World Health Organization Website. Available online: https://www.who.int/news-room/fact-sheets/detail/ageing-and-health (accessed on 30 November 2022).
- Sayin Kasar, K.; Karaman, E. Life in lockdown: Social isolation, loneliness and quality of life in the elderly during the COVID-19 pandemic: A scoping review. Geriatr. Nurs. 2021, 42, 1222–1229.
- Roth, D.L.; Fredman, L.; Haley, W.E. Informal Caregiving and Its Impact on Health: A Reappraisal From Population-Based Studies. Gerontologist 2015, 55, 309–319.
- AAL Programme. Ageing Well in the Digital World Homepage. Available online: http://www.aal-europe.eu/about/ (accessed on 4 December 2022).
- Engage. Enabling Social Robots Homepage. Available online: https://engage-aal-project.eu/ (accessed on 4 December 2022).
- ReMember-Me Homepage. Available online: http://www.aal-europe.eu/projects/remember-me/ (accessed on 4 December 2022).
- AgeWell Homepage. Available online: http://www.aal-europe.eu/projects/agewell/ (accessed on 4 December 2022).
- eWare Homepage. Available online: http://www.aal-europe.eu/projects/eware/ (accessed on 4 December 2022).
- CAMI Homepage. Available online: http://www.aal-europe.eu/projects/cami/ (accessed on 4 December 2022).
- ASSAM Homepage. Available online: http://www.aal-europe.eu/projects/assam/ (accessed on 4 December 2022).
- ExCITE Homepage. Available online: http://www.aal-europe.eu/projects/excite/ (accessed on 4 December 2022).
- ALIAS Homepage. Available online: http://www.aal-europe.eu/projects/alias/ (accessed on 4 December 2022).
- DOMEO Homepage. Available online: http://www.aal-europe.eu/projects/domeo/ (accessed on 4 December 2022).
- Chifu, V.R.; Pop, C.B.; Demjen, D.; Socaci, R.; Todea, D.; Antal, M.; Cioara, T.; Anghel, I.; Antal, C. Identifying and Monitoring the Daily Routine of Seniors Living at Home. Sensors 2022, 22, 992.
- Chifu, V.R.; Pop, C.B.; Rancea, A.M.; Morar, A.; Cioara, T.; Antal, M.; Anghel, I. Deep Learning, Mining, and Collaborative Clustering to Identify Flexible Daily Activities Patterns. Sensors 2022, 22, 4803.
- Anghel, I.; Cioara, T.; Moldovan, D.; Antal, M.; Pop, C.D.; Salomie, I.; Pop, C.B.; Chifu, V.R. Smart Environments and Social Robots for Age-Friendly Integrated Care Services. Int. J. Environ. Res. Public Health 2020, 17, 3801.
- Stara, V.; Santini, S.; Kropf, J.; D’Amen, B. Digital Health Coaching Programs Among Older Employees in Transition to Retirement: Systematic Literature Review. J. Med. Internet Res. 2020, 22, e17809.
- Miller, D.P. Assistive robotics: An overview. In Assistive Technology and Artificial Intelligence, LNCS; Mittal, V.O., Yanco, H.A., Aronis, J., Simpson, R., Eds.; Springer: Heidelberg, Germany, 2006; Volume 1458, pp. 126–136.
- Brose, S.W.; Weber, D.J.; Salatin, B.A.; Grindle, G.G.; Wang, H.; Vazquez, J.J.; Cooper, R.A. The Role of Assistive Robotics in the Lives of Persons with Disability. Am. J. Phys. Med. Rehabil. 2010, 89, 509–521.
- Shishehgar, M.; Kerr, D.; Blake, J. A systematic review of research into how robotic technology can help older people. Smart Health 2018, 7, 1–18.
- Gil, H. The elderly and the digital inclusion: A brief reference to the initiatives of the European Union and Portugal. MOJ Gerontol. Geriatr. 2019, 4, 213–221.
- Beaunoyer, E.; Dupéré, S.; Guitton, M.J. COVID-19 and digital inequalities: Reciprocal impacts and mitigation strategies. Comput. Hum. Behav. 2020, 111, 106424.
- Këpuska, V.; Bohouta, G. Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home). In Proceedings of the IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–10 January 2018; pp. 99–103.
- Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; Intelligent Robotics and Autonomous Agents Series; MIT Press: Cambridge, MA, USA, 2006.
- Tzafestas, S.G. Mobile Robot Control and Navigation: A Global Overview. J. Intell. Robot. Syst. 2018, 91, 35–58.
- Ravankar, A.; Ravankar, A.A.; Kobayashi, Y.; Hoshino, Y.; Peng, C.-C. Path Smoothing Techniques in Robot Navigation: State-of-the-Art, Current and Future Challenges. Sensors 2018, 18, 3170.
- Chandra, R. Precise localization for achieving next-generation autonomous navigation: State-of-the-art, taxonomy and future prospects. Comput. Commun. 2020, 160, 351–374.
- Crespo, J.; Castillo, J.C.; Mozos, O.M.; Barber, R. Semantic Information for Robot Navigation: A Survey. Appl. Sci. 2020, 10, 497.
- Yasuda, Y.D.V.; Martins, L.E.G.; Cappabianco, F.A.M. Autonomous Visual Navigation for Mobile Robots: A Systematic Literature Review. ACM Comput. Surv. 2020, 53, 1–34.
- Zhu, K.; Zhang, T. Deep reinforcement learning based mobile robot navigation: A review. Tsinghua Sci. Technol. 2021, 26, 674–691.
- Cheng, C.; Duan, S.; He, H.; Li, X.; Chen, Y. A Generalized Robot Navigation Analysis Platform (RoNAP) with Visual Results Using Multiple Navigation Algorithms. Sensors 2022, 22, 9036.
- Ji, J.; Khajepour, A.; Melek, W.W.; Huang, Y. Path Planning and Tracking for Vehicle Collision Avoidance Based on Model Predictive Control With Multiconstraints. IEEE Trans. Veh. Technol. 2017, 66, 952–964.
- Cui, S.; Chen, Y.; Li, X. A Robust and Efficient UAV Path Planning Approach for Tracking Agile Targets in Complex Environments. Machines 2022, 10, 931.
- Chen, Y.; Cheng, C.; Zhang, Y.; Li, X.; Sun, L. A Neural Network-Based Navigation Approach for Autonomous Mobile Robot Systems. Appl. Sci. 2022, 12, 7796.
- Xiao, W.; Yuan, L.; He, L.; Ran, T.; Zhang, J.; Cui, J. Multigoal Visual Navigation With Collision Avoidance via Deep Reinforcement Learning. IEEE Trans. Instrum. Meas. 2022, 71, 1–9.
- Sharifi, M.; Chen, X.; Pretty, C.; Clucas, D.; Cabon-Lunel, E. Modelling and simulation of a non-holonomic omnidirectional mobile robot for offline programming and system performance analysis. Simul. Model. Pract. Theory 2018, 87, 155–169.
- Aricò, P.; Sciaraffa, N.; Babiloni, F. Brain-Computer Interfaces: Toward a Daily Life Employment. Brain Sci. 2020, 10, 157.
- Lledó, L.D.; Badesa, F.J.; Almonacid, M.; Cano-Izquierdo, J.M.; Sabater-Navarro, J.M.; Fernández, E.; Garcia-Aracil, N. Supervised and Dynamic Neuro-Fuzzy Systems to Classify Physiological Responses in Robot-Assisted Neurorehabilitation. PLoS ONE 2015, 10, e0127777.
- Mane, R.; Chouhan, T.; Guan, C. BCI for stroke rehabilitation: Motor and beyond. J. Neural Eng. 2020, 17, 041001.
- Bamdad, M.; Zarshenas, H.; Auais, M.A. Application of BCI systems in neurorehabilitation: A scoping review. Disabil. Rehabil. Assist. Technol. 2015, 10, 355–364.
- Bockbrader, M.A.; Francisco, G.; Lee, R.; Olson, J.; Solinsky, R.; Boninger, M.L. Brain Computer Interfaces in Rehabilitation Medicine. PM&R 2018, 10, S233–S243.
- Robinson, N.; Mane, R.; Chouhan, T.; Guan, C. Emerging trends in BCI-robotics for motor control and rehabilitation. Curr. Opin. Biomed. Eng. 2021, 20, 100354.
- Vozzi, A.; Ronca, V.; Aricò, P.; Borghini, G.; Sciaraffa, N.; Cherubino, P.; Trettel, A.; Babiloni, F.; Di Flumeri, G. The Sample Size Matters: To What Extent the Participant Reduction Affects the Outcomes of a Neuroscientific Research. A Case-Study in Neuromarketing Field. Sensors 2021, 21, 6088.
- Della Mea, V. What is e-health (2): The death of telemedicine? J. Med. Internet Res. 2001, 3, E22.
- Kruse, C.S.; Karem, P.; Shifflett, K.; Vegi, L.; Ravi, K.; Brooks, M. Evaluating barriers to adopting telemedicine worldwide: A systematic review. J. Telemed. Telecare 2018, 24, 4–12.
- From Alzhup to Zebra: Telemedicine Is Everywhere in 2016. Available online: https://www.fiware.org/2016/01/15/from-alzhup-to-zebra-telemedicine-is-everywhere-in-2016/ (accessed on 4 December 2022).
- Sequeira, H.; Hot, P.; Silvert, L.; Delplanque, S. Electrical autonomic correlates of emotion. Int. J. Psychophysiol. 2009, 71, 50–56.
- Borghini, G.; Aricò, P.; Di Flumeri, G.; Sciaraffa, N.; Herrero, M.T.; Bezerianos, A.; Colosimo, A.; Babiloni, F. A new perspective for the training assessment: Machine-learning-based neurometric for augmented user’s evaluation. Front. Neurosci. 2017, 11, 325.
- Long, P.; Liu, W.; Pan, J. Deep-Learned Collision Avoidance Policy for Distributed Multi-Agent Navigation. IEEE Robot. Autom. Lett. 2017, 2, 656–663.
- Kha, Q.H.; Tran, T.O.; Nguyen, T.T.; Nguyen, V.N.; Than, K.; Le, N.Q.K. An interpretable deep learning model for classifying adaptor protein complexes from sequence information. Methods 2022, 207, 90–96.
- Kha, Q.H.; Ho, Q.T.; Le, N.Q.K. Identifying SNARE Proteins Using an Alignment-Free Method Based on Multiscan Convolutional Neural Network and PSSM Profiles. J. Chem. Inf. Model. 2022, 62, 4820–4826.
- Pavón-Pulido, N.; López-Riquelme, J.A.; Ferruz-Melero, J.; Vega-Rodríguez, M.Á.; Barrios-León, A.J. A service robot for monitoring elderly people in the context of Ambient Assisted Living. J. Ambient. Intell. Smart Environ. 2014, 6, 595–621.
- Pavón-Pulido, N.; López-Riquelme, J.A.; Pinuaga-Cascales, J.J.; Ferruz-Melero, J.; Dos Santos, R.M. Cybi: A Smart Companion Robot for Elderly People: Improving Teleoperation and Telepresence Skills by Combining Cloud Computing Technologies and Fuzzy Logic. In Proceedings of the 2015 IEEE International Conference on Autonomous Robot Systems and Competitions, Vila Real, Portugal, 8–10 April 2015.
- Bautista-Salinas, D.; González, J.R.; Méndez, I.; Mozos, O.M. Monitoring and Prediction of Mood in Elderly People during Daily Life Activities. In Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019.
- Abdelrasoul, Y.; Saman, A.B.S.H.; Sebastian, P. A quantitative study of tuning ROS gmapping parameters and their effect on performing indoor 2D SLAM. In Proceedings of the 2016 2nd IEEE International Symposium on Robotics and Manufacturing Automation (ROMA), Ipoh, Malaysia, 25–27 September 2016; pp. 1–6.
- Zheng, K. ROS Navigation Tuning Guide. In Robot Operating System (ROS): The Complete Reference; Springer: Cham, Switzerland, 2017; pp. 197–226.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Computer Vision—ECCV 2016; Lecture Notes in Computer Science; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer: Cham, Switzerland, 2016; Volume 9905.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 815–823.
- Trabelsi, A.; Warichet, S.; Aajaoun, Y.; Soussilane, S. Evaluation of the efficiency of state-of-the-art Speech Recognition engines. Procedia Comput. Sci. 2022, 207, 2242–2252.
- Vosk Official Website. Available online: https://alphacephei.com/vosk/ (accessed on 27 November 2022).
- pyttsx3 Website. Available online: https://pypi.org/project/pyttsx3/ (accessed on 27 November 2022).
- Talkey Website Documentation. Available online: https://pythonhosted.org/talkey/ (accessed on 27 November 2022).
- spaCy Website Documentation. Available online: https://spacy.io/usage/processing-pipelines (accessed on 27 November 2022).
- Kartynnik, Y.; Ablavatski, A.; Grishchenko, I.; Grundman, M. Real-time Facial Surface Geometry from Monocular Video on Mobile GPUs. arXiv 2019, arXiv:1907.06724.
- Face Mesh, MediaPipe Solution Website. Available online: https://google.github.io/mediapipe/solutions/face_mesh.html (accessed on 27 November 2022).
- Samuel, M.; Hussein, M.; Binti, M. A Review of some Pure-Pursuit based Path Tracking Techniques for Control of Autonomous Vehicle. Int. J. Comput. Appl. 2016, 135, 35–38.
- Pavón, N.; Ferruz, J.; Ollero, A. Describing the environment using semantic labelled polylines from 2D laser scanned raw data: Application to autonomous navigation. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 3257–3262.
- Speech Recognition on MediaSpeech Website. Available online: https://paperswithcode.com/sota/speech-recognition-on-mediaspeech (accessed on 30 November 2022).
- Sokolova, M.; Japkowicz, N.; Szpakowicz, S. Beyond Accuracy, F-Score and ROC: A Family of Discriminant Measures for Performance Evaluation. In AI 2006: Advances in Artificial Intelligence; Lecture Notes in Computer Science; Sattar, A., Kang, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4304.
| Hardware Component | Cost (€) |
|---|---|
| Touch Screen 13.3″ | 178.00 |
| Orbbec Astra RGB-D Camera | 216.59 |
| RPLIDAR S2 360° Laser Scanner | 399.00 |
| MSI Cubi N JSL-033BEU Intel Celeron N4500 computer | 210.00 |
| USB 3.0 4-port HUB | 25.00 |
| Two Raspberry Pi Pico microcontrollers | 19.00 |
| Lithium-ion 24 V 12 Ah battery | 135.00 |
| Small consumable electronic equipment | 50.00 |
| JUNO differential traction system | 266.00 |
| Small consumable mechanical equipment | 50.00 |
| JUNO robot structure | 1000.00 |
| Total | 2548.59 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Pavón-Pulido, N.; Blasco-García, J.D.; López-Riquelme, J.A.; Feliu-Batlle, J.; Oterino-Bono, R.; Herrero, M.T. JUNO Project: Deployment and Validation of a Low-Cost Cloud-Based Robotic Platform for Reliable Smart Navigation and Natural Interaction with Humans in an Elderly Institution. Sensors 2023, 23, 483. https://doi.org/10.3390/s23010483