The Future of Intelligent Healthcare: A Systematic Analysis and Discussion on the Integration and Impact of Robots Using Large Language Models for Healthcare
Abstract
1. Introduction
2. Large Language Models (LLMs) for Healthcare
Model Name | Parameter Size | Context Length | Type of Training Data | Foundational Model | Papers That Use the Model |
---|---|---|---|---|---|
MED-PaLM-2 [42] | 540 B | 8196 | MedQA, MedMCQA, HealthSearchQA, LiveQA, and MedicationQA | PaLM 2 [53] | |
DRG-LLaMA [43] | 7 B, 13 B, 70 B | 4096 | 236,192 MIMIC-IV discharge summaries | Llama 2 [54] | |
GPT2-BioPT [44] | 124 M, 770 M | 1024 | PorTuguese-2 with biomedical literature [44] | GPT-2 [55] | |
Clinical-T5 [45] | 220 M, 770 M, 3 B, 11 B | Variable, memory-constrained | Approximately 2 million textual notes from MIMIC-III | T5 [48] | |
BiomedBERT [41] | 110 M, 340 M | 512 | BREATHE containing research articles and abstracts from different sources (BMJ, arXiv, medRxiv, bioRxiv, CORD-19, Springer Nature, NCBI, JAMA, and BioASQ) | BERT [52] | |
BERT [52] | 110 M, 340 M | 512 | BookCorpus (800 M words) and English Wikipedia (2500 M words) | NA | |
T5 [48] | 60 M, 220 M, 770 M, 3 B, 11 B | Variable Length | 750 GB of Colossal Clean Crawled Corpus (C4) | NA | [31,74,75] |
GPT-3 [46] | 175 B | 4096 | Not provided | NA | [76,77,78,79,80] |
GPT-3.5 [46] | 175 B | 4096 | Not provided | NA | [31,32,81,82,83,84] |
GPT-4 [47] | 1.8 T | 128,000 | Not provided | NA | [74,85,86] |
PaLM [50] | 8 B, 62 B, 540 B | 2048 | Social media conversations (multilingual): 50%; filtered webpages (multilingual): 27%; books (English): 13%; GitHub (code): 5%; Wikipedia (multilingual): 4%; news (English): 1% | NA | [87] |
PaLM 2 [53] | Not available | Not available | Not available | NA | |
Stanford Alpaca [51] | 7 B | 4096 | A mix of publicly available online data and synthetic data generated by GPT-3 | NA | [88] |
DialogLED [49] | 41 M | 4096 | Books, English Wikipedia, real news, and stories | NA | [31] |
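The context lengths in the table above bound how much clinical text a model can process in a single call; longer documents, such as discharge summaries, must be split. A minimal sketch of overlap-based chunking, using whitespace tokenization as a stand-in for a model-specific tokenizer (the function name and parameters are illustrative, not from the surveyed papers):

```python
def chunk_text(text: str, max_tokens: int = 512, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks that each fit a fixed token budget.

    Whitespace tokenization stands in for a model-specific tokenizer;
    assumes max_tokens > overlap so the window always advances.
    """
    tokens = text.split()
    if not tokens:
        return []
    step = max_tokens - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        # Each chunk shares `overlap` tokens with its predecessor to preserve context.
        chunks.append(" ".join(tokens[start:start + max_tokens]))
        if start + max_tokens >= len(tokens):
            break
    return chunks
```

For a model with a 512-token window (e.g., the BERT-family entries above), each chunk would then be processed independently and the results aggregated downstream.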
Prompting Method | Prompt Description | Example Prompt | Advantages | Disadvantages | Papers That Use Method |
---|---|---|---|---|---|
Direct questioning | Direct questioning about a topic of interest. | ‘What are the primary symptoms of Type 2 diabetes?’ | Straightforward, clear, and easy to understand. Effective for factual inquiries. | May not elicit detailed or nuanced responses; limited by the user’s knowledge of which questions to ask. | |
Chain of thought | A problem is presented, followed by a step-by-step reasoning process to solve it. | ‘To determine the Body Mass Index (BMI), first divide the weight in kilograms by the height in meters squared.’ | Breaks down complex processes into understandable steps; useful for teaching and clarification. This approach can help the model in complex problem-solving tasks. | Can be time-consuming; requires accurate initial logic to be effective. | [75,86,87] |
Zero-shot | Providing little to no context (zero-shot) to guide the LLM on how to respond or what format to follow. | ‘Describe the process of cellular respiration in human cells.’ | Tests the model’s ability to respond based on its pre-trained knowledge. | Responses may lack context or specificity; dependent on the model’s existing knowledge. | [31,74,75,81,84,85] |
Few-shot learning | Giving a few examples (few-shot) to guide the LLM on how to respond or what format to follow. | ‘[Example 1: ‘An apple is a fruit that can help with digestion’.] [Example 2: ‘A treadmill is a device used for physical exercise’.] What is an ultrasound?’ | Provides context through examples; improves the accuracy and relevance of responses. | The quality of the response depends on the quality of the examples provided. | [31,32,76] |
EmotionPrompt [89] | Incorporating emotional cues into prompts and/or asking the LLM to emphasize emotional stimuli in its output. | ‘It’s crucial for my family’s well-being. Can you provide advice on maintaining a balanced diet for heart health?’ | Results in more engaging and less generic LLM outputs. | May overexaggerate emotional stimuli and prompt excessive gestures. | [82,83,86]
Multi-modal prompting | Incorporating more than just text in the prompts, like images or data, especially in models that can process multi-modal inputs. This is useful for tasks that require interpretation across different types of information. | ‘Here is an MRI image of a knee. Can you explain the common injuries indicated by this type of scan?’ | Incorporates different data types for a more holistic understanding; useful for diagnostics and treatment planning. | Requires LLM models capable of processing and interpreting multiple data modes effectively. | [77,85] |
Task-oriented prompting | Combines previous prompting methods with primitive robot actions, feedback from the robot and its operating environment, and the user request. | ‘Your role is to generate robotic plans for an X embodied robot capable of the primitive actions <moveTo(location), grasp(object), scan()>. The current state of the robot is ${state}; generate robotic plans as Pythonic code using the primitive action functions. The user request is ${userRequest}.’ | Enables LLMs to be integrated with robotic systems and to generate robotic plans that account for the robot’s abilities and the conditions in its environment. | Requires expert programming to integrate into an autonomous system. Limited by the context length of LLM models. | [32,76,78,79,80,84,86,87,88]
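Task-oriented prompting (last row above) amounts to template assembly: the robot’s primitives, its current state, and the user request are interpolated into a fixed instruction. A minimal sketch, where the primitive names and state fields are illustrative placeholders rather than an API from the surveyed papers:

```python
# Hypothetical primitive-action signatures exposed to the LLM.
PRIMITIVES = ["moveTo(location)", "grasp(object)", "scan()"]

def build_task_prompt(state: dict, user_request: str) -> str:
    """Assemble a task-oriented prompt embedding primitives, robot state, and the user request."""
    state_str = ", ".join(f"{k}={v}" for k, v in state.items())
    return (
        "Your role is to generate robotic plans for an embodied robot capable of "
        f"the primitive actions: {', '.join(PRIMITIVES)}. "
        f"The current state of the robot is: {state_str}. "
        "Generate a plan as Pythonic code using only these primitive action functions. "
        f"The user request is: {user_request}"
    )
```

The assembled string is what gets sent to the LLM; the table’s noted context-length limitation applies to the combined size of this template, the state description, and any feedback appended across turns.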
3. Human–Robot Interaction (HRI) and Communication
3.1. Single-Modal Communication
3.2. Multi-Modal Communication
3.3. Summary and Outlook
4. Semantic Reasoning
Summary and Outlook
5. Planning
Summary and Outlook
6. Ethical Considerations of Robotics Using LLMs in Healthcare
6.1. Accountability
6.2. Humanizing Care
6.3. Privacy
Summary and Ethical Outlook
7. Open Challenges and Future Research Directions in Healthcare Robots Using LLMs
7.1. Open Research Challenges
7.2. Future Research Directions
8. Design of Potential Healthcare Applications of LLM-Embedded Healthcare Robots
8.1. Design 1: Multi-Modal Communication for a Socially Assistive Robot
Prompt 1 | Determine the intent of the user request. Does the user seek entertainment? If yes, return “entertain(user_request)”; if the user is seeking historical events and conversation, return “converse(user_request)”. |
Prompt 2 | You are a part of the Entertain module within a socially assistive robot. Your role is to access and provide entertainment based on the preferences and requests of the user. Given the textual transcription of the user’s spoken request, use the following sequence of function calls to guide your response. Example 1: User Request: ‘I want to watch a documentary about space’. API Call: searchYouTube(‘documentary about space’) Function Calls: 1. video_id = fetchVideoID(‘documentary about space’) 2. video_path = saveVideo(video_id) 3. playMedia(video_path) Example 2: User Request: ‘Play some classical music’. API Call: searchYouTube(‘classical music playlist’) Function Calls: 1. video_id = fetchVideoID(‘classical music playlist’) 2. video_path = saveVideo(video_id) 3. playMedia(video_path) Based on the user’s current request, follow these steps to retrieve the video ID, save it, and then play the media. Use the appropriate API calls to search YouTube and handle the responses effectively. |
Prompt 3 | You are embedded within a socially assistive robot to provide Reminiscence/Rehabilitation Interactive Therapy to older adults with dementia. You will also be provided with images of the scene; if you detect that the user has negative emotions, start reminiscence therapy based on this information: [patient information] |
Prompt 4 | Our goal is to integrate non-verbal communication into the text-based script that the socially assistive robot will use to respond to older adults with dementia. The robot’s script should include atomic actions to perform specific gestures, body movements, and facial expressions, improving its interactions and providing a more comforting presence. These are the atomic actions: “Yes: nods head downwards; Explain: moves both hands in front of robot and then apart from each other; Confident: robot tilts hip backwards and stands with a wide stance”. You need to take this <script> presentation and rewrite it to include the appropriate gestures and body movements embedded within the text. Here is an example: “^start(animations/Stand/Gestures/Explain) Welcome to a fascinating journey into the realm of robotic learning! Just like humans, robots can learn and evolve.^stop(animations/Stand/Gestures/Confident)” |
8.2. Design 2: Semantic Reasoning and Planning
Prompt 1 | Are there any ambiguities in the retrieved data with respect to the [surgical operation]? Your response should address the ambiguities or any missing tools or procedural steps relevant to the ongoing surgical operation [surgical operation]. Only include the surgical procedure and the required tools; do not explain how the ambiguities were identified. |
Prompt 2 | Analyze the current surgical procedure details. For each step, identify the required surgical tool and its location as observed in the accompanying images. Generate a response formatted as a three-part entry for each step, delineated by colons. The format should be: procedure step number, name of the surgical tool, and the tool’s location. For example, ‘Step 1: Bone Saw: Tool Cart.’ |
Prompt 3 | Your role is to manage and facilitate the retrieval of surgical tools through generating behavior trees written in XML format. You are designed to interpret surgeon commands and feedback. Functional Capabilities: graspTool(tool): Grasps a specified surgical tool necessary for the procedure. releaseTool(tool): Releases the currently held tool back into the tool tray. navigateTo(location): Moves the robot’s arms to a specified location within the surgical field. reportFailure(): Logs an error and signals for human assistance if a task cannot be completed. Example: <BehaviorTree> <Sequence name = “Tool Retrieval for Surgery Preparation”> <Action function = “navigateTo(‘Tool Cart’)”/> <Action function = “graspTool(‘Scalpel’)” onFailure = “reportFailure”/> <Action function = “navigateTo(‘Surgical Table’)” onFailure = “reportFailure”/> <Action function = “releaseTool(‘Scalpel’)” onFailure = “reportFailure”/> <Action function = “navigateTo(‘Tool Cart’)” onFailure = “reportFailure”/> <Action function = “graspTool(‘Scissors’)” onFailure = “reportFailure”/> <Action function = “navigateTo(‘Surgical Table’)” onFailure = “reportFailure”/> <Action function = “releaseTool(‘Scissors’)” onFailure = “reportFailure”/> <Action function = “navigateTo(‘Tool Cart’)” onFailure = “reportFailure”/> <Action function = “graspTool(‘Suture Kit’)” onFailure = “reportFailure”/> <Action function = “navigateTo(‘Surgical Table’)” onFailure = “reportFailure”/> <Action function = “releaseTool(‘Suture Kit’)” onFailure = “reportFailure”/> </Sequence> </BehaviorTree> <SubTree> <Action name = “reportFailure”> <Log message = “STUCK: Assistance required.”/> <Signal function = “requestHelp”/> </Action> </SubTree> |
Prompt 4 | You are tasked with two functions: (1) documenting the surgical procedure as a list surrounded by *(documentation)*, for example: “*(1. Start of procedure, 2. Beginning on the rise of nose, …)*”; and (2) if a handover of surgical tools to a surgeon fails, identifying the issues in the tool handover process based on surgeon feedback and generating recommendations for PaLM-E to consider in generating the new plan. The suggestion to PaLM-E should be surrounded by &(suggestion)&, and should include the step(s) of the plan that resulted in errors and the associated target object(s) and location(s), based on the corresponding images of the OR and feedback from a surgeon. |
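The behavior tree generated in Prompt 3 above must be executed by some runtime on the robot. A minimal sketch of walking such a <Sequence> with Python’s standard `xml.etree`, halting at the first failed action and recording its `onFailure` handler; the runtime semantics here (boolean-returning executor, stop-on-failure) are an assumption, not specified in the source:

```python
import xml.etree.ElementTree as ET

def run_sequence(xml_text: str, execute) -> list[str]:
    """Execute each <Action> in document order.

    `execute` is a callable taking the action's function string and
    returning True on success. On the first failure, the action's
    onFailure handler (default: reportFailure) is logged and the
    sequence stops, mirroring standard behavior-tree semantics.
    """
    root = ET.fromstring(xml_text)
    log = []
    for action in root.iter("Action"):
        call = action.get("function")
        if execute(call):
            log.append(call)
        else:
            log.append(action.get("onFailure", "reportFailure"))
            break
    return log
```

Note that this requires the LLM to emit well-formed XML (straight quotes, matched tags); in practice the runtime should validate the generated tree and re-prompt on parse errors.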
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- World Health Organization. Ageing and Health. Available online: https://www.who.int/news-room/fact-sheets/detail/ageing-and-health (accessed on 3 January 2024).
- Hornstein, J. Chronic Diseases in America|CDC. Available online: https://www.cdc.gov/chronicdisease/resources/infographic/chronic-diseases.htm (accessed on 19 January 2024).
- Hacker, K.A. COVID-19 and Chronic Disease: The Impact Now and in the Future. Prev. Chronic. Dis. 2021, 18, E62. [Google Scholar] [CrossRef] [PubMed]
- Express Entry Targeted Occupations: How Many Healthcare Workers Does Canada Need?|CIC News. Available online: https://www.cicnews.com/2023/10/express-entry-targeted-occupations-how-many-healthcare-workers-does-canada-need-1040056.html (accessed on 19 January 2024).
- Fact Sheet: Strengthening the Health Care Workforce|AHA. Available online: https://www.aha.org/fact-sheets/2021-05-26-fact-sheet-strengthening-health-care-workforce (accessed on 25 June 2024).
- Tulane University. Big Data in Health Care and Patient Outcomes. Available online: https://publichealth.tulane.edu/blog/big-data-in-healthcare/ (accessed on 19 January 2024).
- Gibson, K. The Impact of Health Informatics on Patient Outcomes. Available online: https://graduate.northeastern.edu/resources/impact-of-healthcare-informatics-on-patient-outcomes/ (accessed on 19 January 2024).
- Northeastern University Graduate Programs. Using Data Analytics to Predict Outcomes in Healthcare. Available online: https://journal.ahima.org/page/using-data-analytics-to-predict-outcomes-in-healthcare (accessed on 19 January 2024).
- Yu, P.; Xu, H.; Hu, X.; Deng, C. Leveraging Generative AI and Large Language Models: A Comprehensive Roadmap for Healthcare Integration. Healthcare 2023, 11, 2776. [Google Scholar] [CrossRef] [PubMed]
- Shen, D.; Wu, G.; Suk, H.-I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [PubMed]
- Park, D.J.; Park, M.W.; Lee, H.; Kim, Y.-J.; Kim, Y.; Park, Y.H. Development of Machine Learning Model for Diagnostic Disease Prediction Based on Laboratory Tests. Sci. Rep. 2021, 11, 7567. [Google Scholar] [CrossRef] [PubMed]
- Webster, P. Six Ways Large Language Models Are Changing Healthcare. Nat. Med. 2023, 29, 2969–2971. [Google Scholar] [CrossRef] [PubMed]
- Benary, M.; Wang, X.D.; Schmidt, M.; Soll, D.; Hilfenhaus, G.; Nassir, M.; Sigler, C.; Knödler, M.; Keller, U.; Beule, D.; et al. Leveraging Large Language Models for Decision Support in Personalized Oncology. JAMA Netw. Open 2023, 6, e2343689. [Google Scholar] [CrossRef] [PubMed]
- UC Davis Health Minimally Invasive and Robotic Surgery|Comprehensive Surgical Services|UC Davis Health. Available online: https://health.ucdavis.edu/surgicalservices/minimally_invasive_surgery.html (accessed on 25 January 2024).
- Robotic Surgery: Robot-Assisted Surgery, Advantages, Disadvantages. Available online: https://my.clevelandclinic.org/health/treatments/22178-robotic-surgery (accessed on 19 January 2024).
- Sivakanthan, S.; Candiotti, J.L.; Sundaram, A.S.; Duvall, J.A.; Sergeant, J.J.G.; Cooper, R.; Satpute, S.; Turner, R.L.; Cooper, R.A. Mini-Review: Robotic Wheelchair Taxonomy and Readiness. Neurosci. Lett. 2022, 772, 136482. [Google Scholar] [CrossRef] [PubMed]
- Fanciullacci, C.; McKinney, Z.; Monaco, V.; Milandri, G.; Davalli, A.; Sacchetti, R.; Laffranchi, M.; De Michieli, L.; Baldoni, A.; Mazzoni, A.; et al. Survey of Transfemoral Amputee Experience and Priorities for the User-Centered Design of Powered Robotic Transfemoral Prostheses. J. Neuroeng. Rehabil. 2021, 18, 168. [Google Scholar] [CrossRef] [PubMed]
- MIT-Manus Robot Aids Physical Therapy of Stroke Victims. Available online: https://news.mit.edu/2000/manus-0607 (accessed on 20 January 2024).
- Maciejasz, P.; Eschweiler, J.; Gerlach-Hahn, K.; Jansen-Troy, A.; Leonhardt, S. A Survey on Robotic Devices for Upper Limb Rehabilitation. J. Neuroeng. Rehabil. 2014, 11, 3. [Google Scholar] [CrossRef] [PubMed]
- Teng, R.; Ding, Y.; See, K.C. Use of Robots in Critical Care: Systematic Review. J. Med. Internet Res. 2022, 24, e33380. [Google Scholar] [CrossRef] [PubMed]
- Abdullahi, U.; Muhammad, B.; Masari, A.; Bugaje, A. A Remote-Operated Humanoid Robot Based Patient Monitoring System. IRE J. 2023, 7, 17–22. [Google Scholar]
- Gonzalez, C. Service Robots Used for Medical Care and Deliveries—ASME. Available online: https://www.asme.org/topics-resources/content/are-service-bots-the-new-future-post-covid-19 (accessed on 3 January 2024).
- Sarker, S.; Jamal, L.; Ahmed, S.F.; Irtisam, N. Robotics and Artificial Intelligence in Healthcare during COVID-19 Pandemic: A Systematic Review. Robot. Auton. Syst. 2021, 146, 103902. [Google Scholar] [CrossRef] [PubMed]
- How Robots Became Essential Workers in the COVID-19 Response—IEEE Spectrum. Available online: https://spectrum.ieee.org/how-robots-became-essential-workers-in-the-covid19-response (accessed on 20 January 2024).
- The Clever Use of Robots during COVID-19—EHL Insights|Business. Available online: https://hospitalityinsights.ehl.edu/robots-during-covid-19 (accessed on 20 January 2024).
- Getson, C.; Nejat, G. The Adoption of Socially Assistive Robots for Long-Term Care: During COVID-19 and in a Post-Pandemic Society. Healthc. Manag. Forum 2022, 35, 301–309. [Google Scholar] [CrossRef] [PubMed]
- Henschel, A.; Laban, G.; Cross, E.S. What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You. Curr. Robot. Rep. 2021, 2, 9–19. [Google Scholar] [CrossRef] [PubMed]
- Kim, J.; Kim, S.; Kim, S.; Lee, E.; Heo, Y.; Hwang, C.-Y.; Choi, Y.-Y.; Kong, H.-J.; Ryu, H.; Lee, H. Companion Robots for Older Adults: Rodgers’ Evolutionary Concept Analysis Approach. Intell. Serv. Robot. 2021, 14, 729–739. [Google Scholar] [CrossRef] [PubMed]
- Denecke, K.; Baudoin, C.R. A Review of Artificial Intelligence and Robotics in Transformed Health Ecosystems. Front. Med. 2022, 9, 795957. [Google Scholar] [CrossRef]
- Sevilla-Salcedo, J.; Fernádez-Rodicio, E.; Martín-Galván, L.; Castro-González, Á.; Castillo, J.C.; Salichs, M.A. Using Large Language Models to Shape Social Robots’ Speech. Int. J. Interact. Multimed. Artif. Intell. 2023, 8, 6. [Google Scholar] [CrossRef]
- Addlesee, A.; Sieińska, W.; Gunson, N.; Garcia, D.H.; Dondrup, C.; Lemon, O. Multi-Party Goal Tracking with LLMs: Comparing Pre-Training, Fine-Tuning, and Prompt Engineering. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Prague, Czechia, 11–15 September 2023. [Google Scholar]
- Pandya, A. ChatGPT-Enabled daVinci Surgical Robot Prototype: Advancements and Limitations. Robotics 2023, 12, 97. [Google Scholar] [CrossRef]
- Elgedawy, R.; Srinivasan, S.; Danciu, I. Dynamic Q&A of Clinical Documents with Large Language Models. arXiv 2024, arXiv:2401.10733. [Google Scholar]
- Hu, M.; Pan, S.; Li, Y.; Yang, X. Advancing Medical Imaging with Language Models: A journey from n-grams to chatgpt. arXiv 2023, arXiv:2304.04920. [Google Scholar]
- A Comprehensive Overview of Large Language Models. Available online: https://ar5iv.labs.arxiv.org/html/2307.06435 (accessed on 8 March 2024).
- Lin, T.; Wang, Y.; Liu, X.; Qiu, X. A Survey of Transformers. AI Open 2022, 3, 111–132. [Google Scholar] [CrossRef]
- Decoder-Only or Encoder-Decoder? Interpreting Language Model as a Regularized Encoder-Decoder. Available online: https://ar5iv.labs.arxiv.org/html/2304.04052 (accessed on 8 March 2024).
- King, J.; Baffour, P.; Crossley, S.; Holbrook, R.; Demkin, M. LLM—Detect AI Generated Text. Available online: https://kaggle.com/competitions/llm-detect-ai-generated-text (accessed on 8 March 2024).
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010. [Google Scholar]
- Burns, K.; Jain, A.; Go, K.; Xia, F.; Stark, M.; Schaal, S.; Hausman, K. Generating Robot Policy Code for High-Precision and Contact-Rich Manipulation Tasks. arXiv 2023, arXiv:2404.06645. [Google Scholar]
- Gu, Y.; Tinn, R.; Cheng, H.; Lucas, M.; Usuyama, N.; Liu, X.; Naumann, T.; Gao, J.; Poon, H. Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing. Available online: https://arxiv.org/abs/2007.15779v6 (accessed on 8 March 2024).
- Gupta, A.; Waldron, A. Sharing Google’s Med-PaLM 2 Medical Large Language Model, or LLM. Available online: https://cloud.google.com/blog/topics/healthcare-life-sciences/sharing-google-med-palm-2-medical-large-language-model (accessed on 20 January 2024).
- Wang, H.; Gao, C.; Dantona, C.; Hull, B.; Sun, J. DRG-LLaMA: Tuning LLaMA Model to Predict Diagnosis-Related Group for Hospitalized Patients. Npj Digit. Med. 2024, 7, 1–9. [Google Scholar] [CrossRef] [PubMed]
- Schneider, E.T.R.; de Souza, J.V.A.; Gumiel, Y.B.; Moro, C.; Paraiso, E.C. A GPT-2 Language Model for Biomedical Texts in Portuguese. In Proceedings of the 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS), Aveiro, Portugal, 7–9 June 2021; pp. 474–479. [Google Scholar]
- Lehman, E.; Johnson, A. Clinical-T5: Large Language Models Built Using MIMIC Clinical Text. PhysioNet 2023. [Google Scholar] [CrossRef]
- Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models Are Few-Shot Learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–12 December 2020; p. 159. [Google Scholar]
- OpenAI; Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; et al. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774. [Google Scholar]
- Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res. 2020, 21, 1–67. [Google Scholar]
- Beltagy, I.; Peters, M.E.; Cohan, A. Longformer: The Long-Document Transformer. arXiv 2020, arXiv:2004.05150. [Google Scholar]
- Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H.W.; Sutton, C.; Gehrmann, S.; et al. PaLM: Scaling Language Modeling with Pathways. J. Mach. Learn. Res. 2023, 24, 1–113. [Google Scholar]
- Taori, R.; Gulrajani, I.; Zhang, T.; Dubois, Y.; Li, X.; Guestrin, C.; Liang, P.; Hashimoto, T. Alpaca: A Strong, Replicable Instruction-Following Mode; Stanford Center for Research on Foundation Models: Stanford, CA, USA, 2023; Available online: https://crfm.stanford.edu/2023/03/13/alpaca.html (accessed on 8 March 2024).
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186. [Google Scholar]
- Anil, R.; Dai, A.M.; Firat, O.; Johnson, M.; Lepikhin, D.; Passos, A.; Shakeri, S.; Taropa, E.; Bailey, P.; Chen, Z.; et al. PaLM 2 Technical Report. arXiv 2023, arXiv:2305.10403. [Google Scholar]
- Touvron, H.; Martin, L.; Stone, K.R.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv 2023, arXiv:2307.09288. [Google Scholar]
- Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language Models Are Unsupervised Multitask Learners. Available online: https://openai.com/index/better-language-models/ (accessed on 26 June 2024).
- Jin, D.; Pan, E.; Oufattole, N.; Weng, W.-H.; Fang, H.; Szolovits, P. What Disease Does This Patient Have? A Large-Scale Open Domain Question Answering Dataset from Medical Exams. Appl. Sci. 2020, 11, 6421. [Google Scholar] [CrossRef]
- Pal, A.; Umapathi, L.K.; Sankarasubbu, M. MedMCQA: A Large-Scale Multi-Subject Multi-Choice Dataset for Medical Domain Question Answering. In Proceedings of the Machine Learning Research (PMLR), ACM Conference on Health, Inference, and Learning (CHIL), Virtual, 7 April 2022; Volume 174, pp. 248–260. [Google Scholar]
- Jin, Q.; Dhingra, B.; Liu, Z.; Cohen, W.W.; Lu, X. PubMedQA: A Dataset for Biomedical Research Question Answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, 3–7 November 2019; pp. 2567–2577. [Google Scholar]
- Tozzi, C.; Zittrain, J. Introduction. In For Fun and Profit: A History of the Free and Open Source Software Revolution; MIT Press: Cambridge, MA, USA, 2017; Available online: https://ieeexplore.ieee.org/document/8047084 (accessed on 29 May 2024).
- Spirling, A. Why Open-Source Generative AI Models Are an Ethical Way Forward for Science. Nature 2023, 616, 413. [Google Scholar] [CrossRef] [PubMed]
- Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the Opportunities and Risks of Foundation Models. arXiv 2022, arXiv:2108.07258. [Google Scholar]
- Self-Influence Guided Data Reweighting for Language Model Pre-Training. Available online: https://ar5iv.labs.arxiv.org/html/2311.00913 (accessed on 8 March 2024).
- Solaiman, I.; Dennison, C. Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets. In Proceedings of the 35th International Conference on Neural Information Processing Systems, Sydney, Australia, 6–14 December 2021; p. 448. [Google Scholar]
- Prompt Design and Engineering: Introduction and Advanced Methods. Available online: https://ar5iv.labs.arxiv.org/html/2401.14423 (accessed on 8 March 2024).
- Ratner, N.; Levine, Y.; Belinkov, Y.; Ram, O.; Magar, I.; Abend, O.; Karpas, E.; Shashua, A.; Leyton-Brown, K.; Shoham, Y. Parallel Context Windows for Large Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Toronto, ON, Canada, 9–14 July 2023; pp. 6383–6402. [Google Scholar]
- Chen, H.; Pasunuru, R.; Weston, J.; Celikyilmaz, A. Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading. arXiv 2023, arXiv:2310.05029. [Google Scholar]
- Fairness-Guided Few-Shot Prompting for Large Language Models. Available online: https://ar5iv.labs.arxiv.org/html/2303.13217 (accessed on 8 March 2024).
- Skill-Based Few-Shot Selection for In-Context Learning. Available online: https://ar5iv.labs.arxiv.org/html/2305.14210 (accessed on 8 March 2024).
- Extending Context Window of Large Language Models via Position Interpolation. Available online: https://ar5iv.labs.arxiv.org/html/2306.15595 (accessed on 8 March 2024).
- Parallel Context Windows Improve In-Context Learning of Large Language Models. Available online: https://ar5iv.labs.arxiv.org/html/2212.10947 (accessed on 8 March 2024).
- MM-LLMs: Recent Advances in MultiModal Large Language Models. Available online: https://ar5iv.labs.arxiv.org/html/2401.13601 (accessed on 8 March 2024).
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models From Natural Language Supervision. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, Virtual Event, 18–24 July 2021; Volume 139, pp. 8748–8763. [Google Scholar]
- Chen, D.; Chang, A.; Nießner, M. ScanRefer: 3D Object Localization in RGB-D Scans Using Natural Language. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 12 November 2020; pp. 202–221. [Google Scholar]
- Liu, J.X.; Yang, Z.; Idrees, I.; Liang, S.; Schornstein, B.; Tellex, S.; Shah, A. Grounding Complex Natural Language Commands for Temporal Tasks in Unseen Environments. In Proceedings of the 7th Conference on Robot Learning, Atlanta, GA, USA, 6–9 November 2023. [Google Scholar]
- Liu, M.; Shen, Y.; Yao, B.M.; Wang, S.; Qi, J.; Xu, Z.; Huang, L. KnowledgeBot: Improving Assistive Robot for Task Completion and Live Interaction via Neuro-Symbolic Reasoning. In Proceedings of the Alexa Prize SimBot Challenge, Virtual Event, 6 April 2023. [Google Scholar]
- Salichs, M.A.; Castro-González, Á.; Salichs, E.; Fernández-Rodicio, E.; Maroto-Gómez, M.; Gamboa-Montero, J.J.; Marques-Villarroya, S.; Castillo, J.C.; Alonso-Martín, F.; Malfaz, M. Mini: A New Social Robot for the Elderly. Int. J. Soc. Robot. 2020, 12, 1231–1249. [Google Scholar] [CrossRef]
- Zhao, X.; Li, M.; Weber, C.; Hafez, M.B.; Wermter, S. Chat with the Environment: Interactive Multimodal Perception Using Large Language Models. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1 October 2023; pp. 3590–3596. [Google Scholar]
- Singh, I.; Blukis, V.; Mousavian, A.; Goyal, A.; Xu, D.; Tremblay, J.; Fox, D.; Thomason, J.; Garg, A. ProgPrompt: Program Generation for Situated Robot Task Planning Using Large Language Models. Auton. Robots 2023, 47, 999–1012. [Google Scholar] [CrossRef]
- Jin, Y.; Li, D.; A, Y.; Shi, J.; Hao, P.; Sun, F.; Zhang, J.; Fang, B. RobotGPT: Robot Manipulation Learning from ChatGPT. IEEE Robot. Autom. Lett. 2023, 9, 2543–2550. [Google Scholar] [CrossRef]
- Obinata, Y.; Kanazawa, N.; Kawaharazuka, K.; Yanokura, I.; Kim, S.; Okada, K.; Inaba, M. Foundation Model Based Open Vocabulary Task Planning and Executive System for General Purpose Service Robots. arXiv 2023, arXiv:2308.03357. [Google Scholar]
- Murali, P.; Steenstra, I.; Yun, H.S.; Shamekhi, A.; Bickmore, T. Improving Multiparty Interactions with a Robot Using Large Language Models. In Proceedings of the Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 19 April 2023; pp. 1–8. [Google Scholar]
- Paiva, A.; Leite, I.; Boukricha, H.; Wachsmuth, I. Empathy in Virtual Agents and Robots: A Survey. ACM Trans. Interact. Intell. Syst. 2017, 7, 11. [Google Scholar] [CrossRef]
- Cherakara, N.; Varghese, F.; Shabana, S.; Nelson, N.; Karukayil, A.; Kulothungan, R.; Farhan, M.A.; Nesset, B.; Moujahid, M.; Dinkar, T.; et al. FurChat: An Embodied Conversational Agent Using LLMs, Combining Open and Closed-Domain Dialogue with Facial Expressions. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Prague, Czechia, 11–15 September 2023; pp. 588–592. [Google Scholar]
- Zhang, B.; Soh, H. Large Language Models as Zero-Shot Human Models for Human-Robot Interaction. arXiv 2023, arXiv:2303.03548. [Google Scholar]
- Yang, J.; Chen, X.; Qian, S.; Madaan, N.; Iyengar, M.; Fouhey, D.F.; Chai, J. LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent. arXiv 2023, arXiv:2309.12311. [Google Scholar]
- Yoshida, T.; Masumori, A.; Ikegami, T. From Text to Motion: Grounding GPT-4 in a Humanoid Robot “Alter3”. arXiv 2023, arXiv:2312.06571. [Google Scholar]
- Ahn, M.; Brohan, A.; Brown, N.; Chebotar, Y.; Cortes, O.; David, B.; Finn, C.; Fu, C.; Gopalakrishnan, K.; Hausman, K.; et al. Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. arXiv 2022, arXiv:2204.01691. [Google Scholar]
- Lykov, A.; Tsetserukou, D. LLM-BRAIn: AI-Driven Fast Generation of Robot Behaviour Tree Based on Large Language Model. arXiv 2023, arXiv:2305.19352. [Google Scholar]
- Kubota, A.; Cruz-Sandoval, D.; Kim, S.; Twamley, E.W.; Riek, L.D. Cognitively Assistive Robots at Home: HRI Design Patterns for Translational Science. In Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Sapporo, Hokkaido, Japan, 7–10 March 2022; pp. 53–62. [Google Scholar]
- Elbeleidy, S.; Rosen, D.; Liu, D.; Shick, A.; Williams, T. Analyzing Teleoperation Interface Usage of Robots in Therapy for Children with Autism. In Proceedings of the ACM Interaction Design and Children Conference, Athens, Greece, 18 May 2021; pp. 112–118. [Google Scholar]
- Louie, W.-Y.G.; Nejat, G. A Social Robot Learning to Facilitate an Assistive Group-Based Activity from Non-Expert Caregivers. Int. J. Soc. Robot. 2020, 12, 1159–1176. [Google Scholar] [CrossRef]
- Mishra, D.; Romero, G.A.; Pande, A.; Nachenahalli Bhuthegowda, B.; Chaskopoulos, D.; Shrestha, B. An Exploration of the Pepper Robot’s Capabilities: Unveiling Its Potential. Appl. Sci. 2024, 14, 110. [Google Scholar] [CrossRef]
- Anderson, P.L.; Lathrop, R.A.; Herrell, S.D.; Webster, R.J. Comparing a Mechanical Analogue With the Da Vinci User Interface: Suturing at Challenging Angles. IEEE Robot. Autom. Lett. 2016, 1, 1060–1065. [Google Scholar] [CrossRef] [PubMed]
- Muradore, R.; Fiorini, P.; Akgun, G.; Barkana, D.E.; Bonfe, M.; Boriero, F.; Caprara, A.; De Rossi, G.; Dodi, R.; Elle, O.J.; et al. Development of a Cognitive Robotic System for Simple Surgical Tasks. Int. J. Adv. Robot. Syst. 2015, 12, 37. [Google Scholar] [CrossRef]
- Łukasik, S.; Tobis, S.; Suwalska, J.; Łojko, D.; Napierała, M.; Proch, M.; Neumann-Podczaska, A.; Suwalska, A. The Role of Socially Assistive Robots in the Care of Older People: To Assist in Cognitive Training, to Remind or to Accompany? Sustainability 2021, 13, 10394. [Google Scholar] [CrossRef]
- Natural Language Robot Programming: NLP Integrated with Autonomous Robotic Grasping. Available online: https://ar5iv.labs.arxiv.org/html/2304.02993 (accessed on 8 March 2024).
- Papadopoulos, I.; Koulouglioti, C.; Lazzarino, R.; Ali, S. Enablers and Barriers to the Implementation of Socially Assistive Humanoid Robots in Health and Social Care: A Systematic Review. BMJ Open 2020, 10, e033096. [Google Scholar] [CrossRef]
- Kim, C.Y.; Lee, C.P.; Mutlu, B. Understanding Large-Language Model (LLM)-Powered Human-Robot Interaction. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24), Boulder, CO, USA, 11–14 March 2024; pp. 371–380. [Google Scholar]
- Emotion Is All You Need? Boosting ChatGPT Performance with Emotional Stimulus. FlowGPT. Available online: https://flowgpt.com/blog/emoGPT (accessed on 30 January 2024).
- Mishra, C.; Verdonschot, R.; Hagoort, P.; Skantze, G. Real-Time Emotion Generation in Human-Robot Dialogue Using Large Language Models. Front. Robot. AI 2023, 10, 1271610. [Google Scholar] [CrossRef] [PubMed]
- Wang, C.; Hasler, S.; Tanneberg, D.; Ocker, F.; Joublin, F.; Ceravola, A.; Deigmoeller, J.; Gienger, M. LaMI: Large Language Models for Multi-Modal Human-Robot Interaction. In Proceedings of the Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; p. 218. [Google Scholar]
- Townsend, D.; MajidiRad, A. Trust in Human-Robot Interaction Within Healthcare Services: A Review Study. In Proceedings of the ASME 2022 International Design Engineering Technical Conferences, Volume 7: 46th Mechanisms and Robotics Conference (MR), St. Louis, MO, USA, 14 August 2022; p. V007T07A030. [Google Scholar]
- Abdi, J.; Al-Hindawi, A.; Ng, T.; Vizcaychipi, M.P. Scoping Review on the Use of Socially Assistive Robot Technology in Elderly Care. BMJ Open 2018, 8, e018815. [Google Scholar] [CrossRef] [PubMed]
- Abbott, R.; Orr, N.; McGill, P.; Whear, R.; Bethel, A.; Garside, R.; Stein, K.; Thompson-Coon, J. How Do “Robopets” Impact the Health and Well-being of Residents in Care Homes? A Systematic Review of Qualitative and Quantitative Evidence. Int. J. Older People Nurs. 2019, 14, e12239. [Google Scholar] [CrossRef] [PubMed]
- Xue, L.; Constant, N.; Roberts, A.; Kale, M.; Al-Rfou, R.; Siddhant, A.; Barua, A.; Raffel, C. mT5: A Massively Multilingual Pre-Trained Text-to-Text Transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 6–11 June 2021; pp. 483–498. [Google Scholar]
- Zhang, J.; Zhao, Y.; Saleh, M.; Liu, P.J. PEGASUS: Pre-Training with Extracted Gap-Sentences for Abstractive Summarization. arXiv 2020. [Google Scholar] [CrossRef]
- Cañete, J.; Chaperon, G.; Fuentes, R.; Ho, J.-H.; Kang, H.; Pérez, J. Spanish Pre-Trained BERT Model and Evaluation Data. arXiv 2023, arXiv:2308.02976. [Google Scholar]
- Da Vinci Robotic Surgical Systems. Intuitive. Available online: https://www.intuitive.com/en-us/products-and-services/da-vinci (accessed on 9 March 2024).
- ROS: Home. Available online: https://www.ros.org/ (accessed on 21 September 2023).
- ARI: The Social and Collaborative Robot. PAL Robotics. Available online: https://pal-robotics.com/robots/ari/ (accessed on 9 March 2024).
- The World’s Most Advanced Social Robot. Furhat Robotics. Available online: https://furhatrobotics.com/ (accessed on 9 March 2024).
- Radford, A.; Kim, J.W.; Xu, T.; Brockman, G.; McLeavey, C.; Sutskever, I. Robust Speech Recognition via Large-Scale Weak Supervision. In Proceedings of the 40th International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; p. 1182. [Google Scholar]
- Su, H.; Qi, W.; Chen, J.; Yang, C.; Sandoval, J.; Laribi, M.A. Recent Advancements in Multimodal Human–Robot Interaction. Front. Neurorobot. 2023, 17, 1084000. [Google Scholar] [CrossRef] [PubMed]
- Saunderson, S.; Nejat, G. How Robots Influence Humans: A Survey of Nonverbal Communication in Social Human–Robot Interaction. Int. J. Soc. Robot. 2019, 11, 575–608. [Google Scholar] [CrossRef]
- Maurtua, I.; Fernández, I.; Tellaeche, A.; Kildal, J.; Susperregi, L.; Ibarguren, A.; Sierra, B. Natural Multimodal Communication for Human–Robot Collaboration. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417716043. [Google Scholar] [CrossRef]
- Schreiter, T.; Morillo-Mendez, L.; Chadalavada, R.T.; Rudenko, A.; Billing, E.; Magnusson, M.; Arras, K.O.; Lilienthal, A.J. Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver. In Proceedings of the 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, Republic of Korea, 28–31 August 2023; pp. 293–300. [Google Scholar]
- RITA: Reminiscence Interactive Therapy and Activities. mPower. Available online: https://mpowerhealth.eu/impact/reducing-the-digital-divide-connecting-and-empowering/rita-reminiscence-interactive-therapy-and-activities/ (accessed on 27 May 2024).
- Carbonaro, A.; Tiwari, S.; Ortiz-Rodriguez, F.; Janev, V. (Eds.) Roles and Challenges of Semantic Intelligence in Healthcare Cognitive Computing; Studies on the Semantic Web; IOS Press: Amsterdam, The Netherlands, 2023; ISBN 978-1-64368-460-4. [Google Scholar]
- National Library of Medicine. The Semantic Network. Available online: https://www.nlm.nih.gov/research/umls/new_users/online_learning/OVR_003.html (accessed on 29 January 2024).
- Zhang, H.; Hu, H.; Diller, M.; Hogan, W.R.; Prosperi, M.; Guo, Y.; Bian, J. Semantic Standards of External Exposome Data. Environ. Res. 2021, 197, 111185. [Google Scholar] [CrossRef] [PubMed]
- Aldughayfiq, B.; Ashfaq, F.; Jhanjhi, N.Z.; Humayun, M. Capturing Semantic Relationships in Electronic Health Records Using Knowledge Graphs: An Implementation Using MIMIC III Dataset and GraphDB. Healthcare 2023, 11, 1762. [Google Scholar] [CrossRef] [PubMed]
- Busso, M.; Gonzalez, M.P.; Scartascini, C. On the Demand for Telemedicine: Evidence from the COVID-19 Pandemic. Health Econ. 2022, 31, 1491–1505. [Google Scholar] [CrossRef] [PubMed]
- Chatterjee, A.; Prinz, A.; Riegler, M.A.; Meena, Y.K. An Automatic and Personalized Recommendation Modelling in Activity eCoaching with Deep Learning and Ontology. Sci. Rep. 2023, 13, 10182. [Google Scholar] [CrossRef] [PubMed]
- Barisevičius, G.; Coste, M.; Geleta, D.; Juric, D.; Khodadadi, M.; Stoilos, G.; Zaihrayeu, I. Supporting Digital Healthcare Services Using Semantic Web Technologies. In Proceedings of the 17th International Semantic Web Conference, Monterey, CA, USA, 8–12 October 2018; pp. 291–306. [Google Scholar]
- Yu, W.D.; Jonnalagadda, S.R. Semantic Web and Mining in Healthcare. In Proceedings of the HEALTHCOM 2006 8th International Conference on e-Health Networking, Applications and Services, New Delhi, India, 17–19 August 2006; pp. 198–201. [Google Scholar]
- Kara, N.; Dragoi, O.A. Reasoning with Contextual Data in Telehealth Applications. In Proceedings of the Third IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2007), White Plains, NY, USA, 8–10 October 2007; p. 69. [Google Scholar]
- Liu, W.; Daruna, A.; Patel, M.; Ramachandruni, K.; Chernova, S. A Survey of Semantic Reasoning Frameworks for Robotic Systems. Robot. Auton. Syst. 2023, 159, 104294. [Google Scholar] [CrossRef]
- Tang, X.; Zheng, Z.; Li, J.; Meng, F.; Zhu, S.-C.; Liang, Y.; Zhang, M. Large Language Models Are In-Context Semantic Reasoners Rather than Symbolic Reasoners. arXiv 2023, arXiv:2305.14825. [Google Scholar]
- Wen, Y.; Zhang, Y.; Huang, L.; Zhou, C.; Xiao, C.; Zhang, F.; Peng, X.; Zhan, W.; Sui, Z. Semantic Modelling of Ship Behavior in Harbor Based on Ontology and Dynamic Bayesian Network. ISPRS Int. J. Geo-Inf. 2019, 8, 107. [Google Scholar] [CrossRef]
- Zheng, W.; Liu, X.; Ni, X.; Yin, L.; Yang, B. Improving Visual Reasoning Through Semantic Representation. IEEE Access 2021, 9, 91476–91486. [Google Scholar] [CrossRef]
- Pise, A.A.; Vadapalli, H.; Sanders, I. Relational Reasoning Using Neural Networks: A Survey. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2021, 29, 237–258. [Google Scholar] [CrossRef]
- Li, K.; Hopkins, A.K.; Bau, D.; Viégas, F.; Pfister, H.; Wattenberg, M. Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task. arXiv 2023, arXiv:2210.13382. [Google Scholar]
- Everyday Robots. Available online: https://everydayrobots.com (accessed on 23 February 2024).
- Peng, S.; Genova, K.; Jiang, C.; Tagliasacchi, A.; Pollefeys, M.; Funkhouser, T. OpenScene: 3D Scene Understanding with Open Vocabularies. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 815–824. [Google Scholar]
- Spot. Boston Dynamics. Available online: https://bostondynamics.com/products/spot/ (accessed on 24 February 2024).
- Avgousti, S.; Christoforou, E.G.; Panayides, A.S.; Voskarides, S.; Novales, C.; Nouaille, L.; Pattichis, C.S.; Vieyres, P. Medical Telerobotic Systems: Current Status and Future Trends. Biomed. Eng. OnLine 2016, 15, 96. [Google Scholar] [CrossRef] [PubMed]
- Yang, G.; Lv, H.; Zhang, Z.; Yang, L.; Deng, J.; You, S.; Du, J.; Yang, H. Keep Healthcare Workers Safe: Application of Teleoperated Robot in Isolation Ward for COVID-19 Prevention and Control. Chin. J. Mech. Eng. 2020, 33, 47. [Google Scholar] [CrossRef]
- Battaglia, E.; Boehm, J.; Zheng, Y.; Jamieson, A.R.; Gahan, J.; Fey, A.M. Rethinking Autonomous Surgery: Focusing on Enhancement Over Autonomy. Eur. Urol. Focus 2021, 7, 696–705. [Google Scholar] [CrossRef] [PubMed]
- Leonard, S.; Wu, K.L.; Kim, Y.; Krieger, A.; Kim, P.C.W. Smart Tissue Anastomosis Robot (STAR): A Vision-Guided Robotics System for Laparoscopic Suturing. IEEE Trans. Biomed. Eng. 2014, 61, 1305–1317. [Google Scholar] [CrossRef] [PubMed]
- Takada, C.; Suzuki, T.; Afifi, A.; Nakaguchi, T. Hybrid Tracking and Matching Algorithm for Mosaicking Multiple Surgical Views. In Computer-Assisted and Robotic Endoscopy; Peters, T., Yang, G.-Z., Navab, N., Mori, K., Luo, X., Reichl, T., McLeod, J., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 24–35. [Google Scholar]
- Afifi, A.; Takada, C.; Yoshimura, Y.; Nakaguchi, T. Real-Time Expanded Field-of-View for Minimally Invasive Surgery Using Multi-Camera Visual Simultaneous Localization and Mapping. Sensors 2021, 21, 2106. [Google Scholar] [CrossRef] [PubMed]
- Lamini, C.; Benhlima, S.; Elbekri, A. Genetic Algorithm Based Approach for Autonomous Mobile Robot Path Planning. Procedia Comput. Sci. 2018, 127, 180–189. [Google Scholar] [CrossRef]
- Xiang, D.; Lin, H.; Ouyang, J.; Huang, D. Combined Improved A* and Greedy Algorithm for Path Planning of Multi-Objective Mobile Robot. Sci. Rep. 2022, 12, 13273. [Google Scholar] [CrossRef] [PubMed]
- de Sales Guerra Tsuzuki, M.; de Castro Martins, T.; Takase, F.K. Robot Path Planning Using Simulated Annealing. IFAC Proc. Vol. 2006, 39, 175–180. [Google Scholar] [CrossRef]
- End-to-End Deep Learning-Based Framework for Path Planning and Collision Checking: Bin Picking Application. Available online: https://ar5iv.labs.arxiv.org/html/2304.00119 (accessed on 3 March 2024).
- Quinones-Ramirez, M.; Rios-Martinez, J.; Uc-Cetina, V. Robot Path Planning Using Deep Reinforcement Learning. arXiv 2023, arXiv:2302.09120. [Google Scholar]
- Nicola, F.; Fujimoto, Y.; Oboe, R. A LSTM Neural Network applied to Mobile Robots Path Planning. In Proceedings of the IEEE International Conference on Industrial Informatics (INDIN), Porto, Portugal, 18–20 July 2018; pp. 349–354. [Google Scholar]
- Hjeij, M.; Vilks, A. A Brief History of Heuristics: How Did Research on Heuristics Evolve? Humanit. Soc. Sci. Commun. 2023, 10, 64. [Google Scholar] [CrossRef]
- Kawaguchi, K.; Kaelbling, L.; Bengio, Y. Generalization in Deep Learning. In Mathematical Aspects of Deep Learning; Cambridge University Press: Cambridge, UK, 2022. [Google Scholar]
- On the Generalization Mystery in Deep Learning. Available online: https://ar5iv.labs.arxiv.org/html/2203.10036 (accessed on 10 March 2024).
- Understanding LLMs: A Comprehensive Overview from Training to Inference. Available online: https://ar5iv.labs.arxiv.org/html/2401.02038 (accessed on 3 May 2024).
- ADaPT: As-Needed Decomposition and Planning with Language Models. Available online: https://arxiv.org/abs/2311.05772 (accessed on 10 March 2024).
- Wang, J.; Wu, Z.; Li, Y.; Jiang, H.; Shu, P.; Shi, E.; Hu, H.; Ma, C.; Liu, Y.; Wang, X.; et al. Large Language Models for Robotics: Opportunities, Challenges, and Perspectives. Available online: https://arxiv.org/abs/2401.04334v1 (accessed on 3 March 2024).
- Tjomsland, J.; Kalkan, S.; Gunes, H. Mind Your Manners! A Dataset and A Continual Learning Approach for Assessing Social Appropriateness of Robot Actions. Front. Robot. AI 2022, 9, 669420. [Google Scholar] [CrossRef] [PubMed]
- Soh, H.; Pan, S.; Min, C.; Hsu, D. The Transfer of Human Trust in Robot Capabilities across Tasks. In Proceedings of Robotics: Science and Systems XIV, Pittsburgh, PA, USA, 26–30 June 2018. [Google Scholar]
- Soh, H.; Xie, Y.; Chen, M.; Hsu, D. Multi-Task Trust Transfer for Human-Robot Interaction. Int. J. Robot. Res. 2020, 39, 233–249. [Google Scholar] [CrossRef]
- Sap, M.; Rashkin, H.; Chen, D.; LeBras, R.; Choi, Y. SocialIQA: Commonsense Reasoning about Social Interactions. arXiv 2019, arXiv:1904.09728. [Google Scholar]
- FRANKA RESEARCH 3. Available online: https://franka.de/ (accessed on 10 March 2024).
- Puig, X.; Ra, K.; Boben, M.; Li, J.; Wang, T.; Fidler, S.; Torralba, A. VirtualHome: Simulating Household Activities Via Programs. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8494–8502. [Google Scholar]
- Renfrow, J. New Robot from Pillo Health, Black + Decker Offers in-Home Monitoring, Medication Dispensing. Fierce Healthcare. Available online: https://www.fiercehealthcare.com/tech/new-robot-offers-home-monitoring-and-medication-dispensing (accessed on 3 May 2024).
- Speech-to-Text AI: Speech Recognition and Transcription. Google Cloud. Available online: https://cloud.google.com/speech-to-text (accessed on 9 March 2024).
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Zhou, X.; Girdhar, R.; Joulin, A.; Krähenbühl, P.; Misra, I. Detecting Twenty-Thousand Classes Using Image-Level Supervision. In Proceedings of the Computer Vision–ECCV 2022, Tel Aviv, Israel, 23–27 October 2022; pp. 350–368. [Google Scholar]
- Tobeta, M.; Sawada, Y.; Zheng, Z.; Takamuku, S.; Natori, N. E2Pose: Fully Convolutional Networks for End-to-End Multi-Person Pose Estimation. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 532–537. [Google Scholar]
- Alternative Machine. Android Alter3. Available online: http://alternativemachine.co.jp/en/project/alter3/ (accessed on 4 March 2024).
- Shi, H.; Ball, L.; Thattai, G.; Zhang, D.; Hu, L.; Gao, Q.; Shakiah, S.; Gao, X.; Padmakumar, A.; Yang, B.; et al. Alexa, Play with Robot: Introducing the First Alexa Prize SimBot Challenge on Embodied AI. arXiv 2023, arXiv:2308.05221. [Google Scholar]
- Gao, Q.; Thattai, G.; Shakiah, S.; Gao, X.; Pansare, S.; Sharma, V.; Sukhatme, G.; Shi, H.; Yang, B.; Zheng, D.; et al. Alexa Arena: A User-Centric Interactive Platform for Embodied AI. arXiv 2023. [Google Scholar] [CrossRef]
- Ramvi, E.; et al. Ethics of Care in Technology-Mediated Healthcare Practices: A Scoping Review. Scand. J. Caring Sci. 2023. Available online: https://onlinelibrary.wiley.com/doi/full/10.1111/scs.13186 (accessed on 10 March 2024).
- Ethical Implications of AI and Robotics in Healthcare: A Review. Available online: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10727550/ (accessed on 10 March 2024).
- The Value and Importance of Health Information Privacy. In Beyond the HIPAA Privacy Rule. Available online: https://www.ncbi.nlm.nih.gov/books/NBK9579/ (accessed on 10 March 2024).
- Harrer, S. Attention Is Not All You Need: The Complicated Case of Ethically Using Large Language Models in Healthcare and Medicine. eBioMedicine 2023, 90, 104512. [Google Scholar] [CrossRef] [PubMed]
- Singhal, K.; Azizi, S.; Tu, T.; Mahdavi, S.S.; Wei, J.; Chung, H.W.; Scales, N.; Tanwani, A.; Cole-Lewis, H.; Pfohl, S.; et al. Large Language Models Encode Clinical Knowledge. Nature 2023, 620, 172–180. [Google Scholar] [CrossRef] [PubMed]
- Bender, E.M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, 3 March 2021; pp. 610–623. [Google Scholar]
- Lareyre, F.; Raffort, J. Ethical Concerns Regarding the Use of Large Language Models in Healthcare. EJVES Vasc. Forum 2023, 61, 1. [Google Scholar] [CrossRef] [PubMed]
- Li, H.; Moon, J.T.; Purkayastha, S.; Celi, L.A.; Trivedi, H.; Gichoya, J.W. Ethics of Large Language Models in Medicine and Medical Research. Lancet Digit. Health 2023, 5, e333–e335. [Google Scholar] [CrossRef] [PubMed]
- Jeyaraman, M.; Balaji, S.; Jeyaraman, N.; Yadav, S. Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare. Cureus 2023, 15, e43262. [Google Scholar] [CrossRef] [PubMed]
- Wang, C.; Liu, S.; Yang, H.; Guo, J.; Wu, Y.; Liu, J. Ethical Considerations of Using ChatGPT in Health Care. J. Med. Internet Res. 2023, 25, e48009. [Google Scholar] [CrossRef] [PubMed]
- Stahl, B.C.; Eke, D. The Ethics of ChatGPT–Exploring the Ethical Issues of an Emerging Technology. Int. J. Inf. Manag. 2024, 74, 102700. [Google Scholar] [CrossRef]
- Oniani, D.; Hilsman, J.; Peng, Y.; Poropatich, R.K.; Pamplin, J.C.; Legault, G.L.; Wang, Y. Adopting and Expanding Ethical Principles for Generative Artificial Intelligence from Military to Healthcare. Npj Digit. Med. 2023, 6, 225. [Google Scholar] [CrossRef] [PubMed]
- Clusmann, J.; Kolbinger, F.R.; Muti, H.S.; Carrero, Z.I.; Eckardt, J.-N.; Laleh, N.G.; Löffler, C.M.L.; Schwarzkopf, S.-C.; Unger, M.; Veldhuizen, G.P.; et al. The Future Landscape of Large Language Models in Medicine. Commun. Med. 2023, 3, 141. [Google Scholar] [CrossRef] [PubMed]
- Murphy, K.; Di Ruggiero, E.; Upshur, R.; Willison, D.J.; Malhotra, N.; Cai, J.C.; Malhotra, N.; Lui, V.; Gibson, J. Artificial Intelligence for Good Health: A Scoping Review of the Ethics Literature. BMC Med. Ethics 2021, 22, 14. [Google Scholar] [CrossRef] [PubMed]
- Sharkey, A.; Sharkey, N. Granny and the Robots: Ethical Issues in Robot Care for the Elderly. Ethics Inf. Technol. 2012, 14, 27–40. [Google Scholar] [CrossRef]
- Siqueira-Batista, R.; Souza, C.R.; Maia, P.M.; Siqueira, S.L. Robotic Surgery: Bioethical Aspects. ABCD Arq. Bras. Cir. Dig. 2016, 29, 287–290. [Google Scholar] [CrossRef] [PubMed]
- O’Brolcháin, F. Robots and People with Dementia: Unintended Consequences and Moral Hazard. Nurs. Ethics 2019, 26, 962–972. [Google Scholar] [CrossRef] [PubMed]
- House of Lords. AI in the UK: Ready, Willing and Able. 2017. Available online: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf (accessed on 10 March 2024).
- Decker, M. Caregiving Robots and Ethical Reflection: The Perspective of Interdisciplinary Technology Assessment. AI Soc. 2008, 22, 315–330. [Google Scholar] [CrossRef]
- Coeckelbergh, M.; Pop, C.; Simut, R.; Peca, A.; Pintea, S.; David, D.; Vanderborght, B. A Survey of Expectations About the Role of Robots in Robot-Assisted Therapy for Children with ASD: Ethical Acceptability, Trust, Sociability, Appearance, and Attachment. Sci. Eng. Ethics 2016, 22, 47–65. [Google Scholar] [CrossRef]
- Feil-Seifer, D.; Matarić, M.J. Socially Assistive Robotics. IEEE Robot. Autom. Mag. 2011, 18, 24–31. [Google Scholar] [CrossRef]
- Luxton, D.D. Recommendations for the Ethical Use and Design of Artificial Intelligent Care Providers. Artif. Intell. Med. 2014, 62, 1–10. [Google Scholar] [CrossRef] [PubMed]
- Nielsen, S.; Langensiepen, S.; Madi, M.; Elissen, M.; Stephan, A.; Meyer, G. Implementing Ethical Aspects in the Development of a Robotic System for Nursing Care: A Qualitative Approach. BMC Nurs. 2022, 21, 180. [Google Scholar] [CrossRef]
- Yasuhara, Y. Expectations and Ethical Dilemmas Concerning Healthcare Communication Robots in Healthcare Settings: A Nurse’s Perspective. In Information Systems-Intelligent Information Processing Systems, Natural Language Processing, Affective Computing and Artificial Intelligence, and an Attempt to Build a Conversational Nursing Robot; IntechOpen: London, UK, 2021; ISBN 978-1-83962-360-8. [Google Scholar]
- Chatila, R.; Havens, J.C. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. In Robotics and Well-Being; Aldinhas Ferreira, M.I., Silva Sequeira, J., Singh Virk, G., Tokhi, M.O., Kadar, E., Eds.; Intelligent Systems, Control and Automation: Science and Engineering; Springer International Publishing: Cham, Switzerland, 2019; Volume 95, pp. 11–16. ISBN 978-3-030-12523-3. [Google Scholar]
- The Toronto Declaration. Available online: https://www.torontodeclaration.org/declaration-text/english/ (accessed on 26 June 2024).
- AI Universal Guidelines. The Public Voice. Available online: https://thepublicvoice.org/ai-universal-guidelines/ (accessed on 26 June 2024).
- Chakraborty, A.; Karhade, M. Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis. Available online: https://arxiv.org/abs/2406.08695v1 (accessed on 24 June 2024).
- Birhane, A.; Kasirzadeh, A.; Leslie, D.; Wachter, S. Science in the Age of Large Language Models. Nat. Rev. Phys. 2023, 5, 277–280. [Google Scholar] [CrossRef]
- Browning, J.; LeCun, Y. Language, Common Sense, and the Winograd Schema Challenge. Artif. Intell. 2023, 325, 104031. [Google Scholar] [CrossRef]
- Abdulsaheb, J.A.; Kadhim, D.J. Classical and Heuristic Approaches for Mobile Robot Path Planning: A Survey. Robotics 2023, 12, 93. [Google Scholar] [CrossRef]
- Müller, V.C. Ethics of Artificial Intelligence and Robotics. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Nodelman, U., Eds.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2023. [Google Scholar]
- Clanahan, J.M.; Awad, M.M. How Does Robotic-Assisted Surgery Change OR Safety Culture? AMA J. Ethics 2023, 25, 615–623. [Google Scholar] [CrossRef]
- Ullah, E.; Parwani, A.; Baig, M.M.; Singh, R. Challenges and Barriers of Using Large Language Models (LLM) Such as ChatGPT for Diagnostic Medicine with a Focus on Digital Pathology–A Recent Scoping Review. Diagn. Pathol. 2024, 19, 43. [Google Scholar] [CrossRef] [PubMed]
- Lown, B.A.; Rosen, J.; Marttila, J. An Agenda For Improving Compassionate Care: A Survey Shows About Half Of Patients Say Such Care Is Missing. Health Aff. 2011, 30, 1772–1778. [Google Scholar] [CrossRef]
- Getting Started with LLM Prompt Engineering. Microsoft Learn. Available online: https://learn.microsoft.com/en-us/ai/playbook/technology-guidance/generative-ai/working-with-llms/prompt-engineering (accessed on 28 May 2024).
- Wang, L.; Chen, X.; Deng, X.; Wen, H.; You, M.; Liu, W.; Li, Q.; Li, J. Prompt Engineering in Consistency and Reliability with the Evidence-Based Guideline for LLMs. Npj Digit. Med. 2024, 7, 1–9. [Google Scholar] [CrossRef] [PubMed]
- Marson, S.M.; Powell, R.M. Goffman and the Infantilization of Elderly Persons: A Theory in Development. J. Sociol. Soc. Welf. 2014, 41, 143–158. [Google Scholar] [CrossRef]
- Hendrycks, D.; Burns, C.; Basart, S.; Zou, A.; Mazeika, M.; Song, D.; Steinhardt, J. Measuring Massive Multitask Language Understanding. In Proceedings of the International Conference on Learning Representations, Virtual Event, 3–7 May 2021. [Google Scholar]
- Naveed, H.; Khan, A.U.; Qiu, S.; Saqib, M.; Anwar, S.; Usman, M.; Akhtar, N.; Barnes, N.; Mian, A. A Comprehensive Overview of Large Language Models. arXiv 2024, arXiv:2307.06435. [Google Scholar]
- Busselle, R.; Reagan, J.; Pinkleton, B.; Jackson, K. Factors Affecting Internet Use in a Saturated-Access Population. Telemat. Inform. 1999, 16, 45–58. [Google Scholar] [CrossRef]
- Ali, M.R.; Lawson, C.A.; Wood, A.M.; Khunti, K. Addressing Ethnic and Global Health Inequalities in the Era of Artificial Intelligence Healthcare Models: A Call for Responsible Implementation. J. R. Soc. Med. 2023, 116, 260–262. [Google Scholar] [CrossRef] [PubMed]
- Maeda, J. Prompt Engineering with Semantic Kernel. Microsoft Learn. Available online: https://learn.microsoft.com/en-us/semantic-kernel/prompts/ (accessed on 28 May 2024).
- Das, B.C.; Amini, M.H.; Wu, Y. Security and Privacy Challenges of Large Language Models: A Survey. arXiv 2024, arXiv:2402.00888. [Google Scholar]
- Mireshghallah, N.; Kim, H.; Zhou, X.; Tsvetkov, Y.; Sap, M.; Shokri, R.; Choi, Y. Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory. In Proceedings of the Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
- OpenAI Privacy Policy. Available online: https://openai.com/policies/privacy-policy/ (accessed on 28 May 2024).
- OpenAI How Your Data Is Used to Improve Model Performance|OpenAI Help Center. Available online: https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance (accessed on 13 May 2024).
- Liu, L.; Ning, L. USER-LLM: Efficient LLM Contextualization with User Embeddings. Available online: http://research.google/blog/user-llm-efficient-llm-contextualization-with-user-embeddings/ (accessed on 28 May 2024).
- Staab, R.; Vero, M.; Balunović, M.; Vechev, M.T. Beyond Memorization: Violating Privacy via Inference with Large Language Models. In Proceedings of the Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
- Chen, Y.; Lent, H.; Bjerva, J. Text Embedding Inversion Security for Multilingual Language Models. arXiv 2024, arXiv:2401.12192. [Google Scholar]
- Zhang, Y.; Jia, R.; Pei, H.; Wang, W.; Li, B.; Song, D. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Event, 13–19 June 2020; pp. 250–258. [Google Scholar]
- Morris, J.X.; Zhao, W.; Chiu, J.T.; Shmatikov, V.; Rush, A.M. Language Model Inversion. In Proceedings of the Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
- Liu, Y.; Palmieri, L.; Koch, S.; Georgievski, I.; Aiello, M. DELTA: Decomposed Efficient Long-Term Robot Task Planning Using Large Language Models. arXiv 2024, arXiv:2404.03275. [Google Scholar]
- Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. LLaMA: Open and Efficient Foundation Language Models. arXiv 2023, arXiv:2302.13971. [Google Scholar]
- BTGenBot: Behavior Tree Generation for Robotic Tasks with Lightweight LLMs. Available online: https://ar5iv.labs.arxiv.org/html/2403.12761 (accessed on 14 May 2024).
- OpenAI OpenAI Platform. Available online: https://platform.openai.com (accessed on 8 March 2024).
- Montreuil, V.; Clodic, A.; Ransan, M.; Alami, R. Planning Human Centered Robot Activities. In Proceedings of the 2007 IEEE International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007; pp. 2618–2623. [Google Scholar]
- Son, T.C.; Pontelli, E.; Balduccini, M.; Schaub, T. Answer Set Planning: A Survey. Theory Pract. Log. Program. 2023, 23, 226–298. [Google Scholar] [CrossRef]
- Gebser, M.; Kaminski, R.; Kaufmann, B.; Schaub, T. Clingo = ASP + Control: Preliminary Report. arXiv 2014, arXiv:1405.3694. [Google Scholar]
- Wang, J.; Shi, E.; Yu, S.; Wu, Z.; Ma, C.; Dai, H.; Yang, Q.; Kang, Y.; Wu, J.; Hu, H.; et al. Prompt Engineering for Healthcare: Methodologies and Applications. arXiv 2024, arXiv:2304.14670. [Google Scholar]
- Xiao, G.; Tian, Y.; Chen, B.; Han, S.; Lewis, M. Efficient Streaming Language Models with Attention Sinks. In Proceedings of the International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
- Hooper, C.; Kim, S.; Mohammadzadeh, H.; Mahoney, M.W.; Shao, Y.S.; Keutzer, K.; Gholami, A. KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization. arXiv 2024, arXiv:2401.18079. [Google Scholar]
- HellaSwag Benchmark (Sentence Completion). Papers with Code. Available online: https://paperswithcode.com/sota/sentence-completion-on-hellaswag (accessed on 28 May 2024).
- Minaee, S.; Mikolov, T.; Nikzad, N.; Chenaghlu, M.A.; Socher, R.; Amatriain, X.; Gao, J. Large Language Models: A Survey. arXiv 2024. [Google Scholar] [CrossRef]
- Office for Civil Rights (OCR). Summary of the HIPAA Privacy Rule. Available online: https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html (accessed on 20 May 2024).
- General Data Protection Regulation (GDPR)–Legal Text. Available online: https://gdpr-info.eu/ (accessed on 20 May 2024).
- Office of the Privacy Commissioner of Canada. PIPEDA Requirements in Brief. Available online: https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/pipeda_brief/ (accessed on 20 May 2024).
- Han, X.; You, Q.; Liu, Y.; Chen, W.; Zheng, H.; Mrini, K.; Lin, X.; Wang, Y.; Zhai, B.; Yuan, J. InfiMM-Eval: Complex Open-Ended Reasoning Evaluation for Multi-Modal Large Language Models. arXiv 2023, arXiv:2311.11567. [Google Scholar]
- Seo, G.; Park, S.; Lee, M. How to Calculate the Life Cycle of High-Risk Medical Devices for Patient Safety. Front. Public Health 2022, 10, 989320. [Google Scholar] [CrossRef]
- Javaid, M.; Estivill-Castro, V. Explanations from a Robotic Partner Build Trust on the Robot’s Decisions for Collaborative Human-Humanoid Interaction. Robotics 2021, 10, 51. [Google Scholar] [CrossRef]
- How Should AI Systems Behave, and Who Should Decide? Available online: https://openai.com/index/how-should-ai-systems-behave/ (accessed on 24 June 2024).
- Altman, S.; Brockman, G.; Sutskever, I. Governance of Superintelligence. Available online: https://openai.com/index/governance-of-superintelligence/ (accessed on 24 June 2024).
- Leike, J.; Sutskever, I. Introducing Superalignment. Available online: https://openai.com/index/introducing-superalignment/ (accessed on 24 June 2024).
- Raval, V.; Shah, S. The Practical Aspect: Privacy Compliance—A Path to Increase Trust in Technology. ISACA 2020, 6, 15–19. [Google Scholar]
- Lin, T.-Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Ramanan, D.; Zitnick, C.L.; Dollár, P. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 740–755. [Google Scholar]
- Krishna, R.; Zhu, Y.; Groth, O.; Johnson, J.; Hata, K.; Kravitz, J.; Chen, S.; Kalantidis, Y.; Li, L.-J.; Shamma, D.A.; et al. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations. Int. J. Comput. Vis. 2017, 123, 32–73. [Google Scholar] [CrossRef]
- Su, Y.; Lan, T.; Li, H.; Xu, J.; Wang, Y.; Cai, D. PandaGPT: One Model To Instruction-Follow Them All. arXiv 2023, arXiv:2305.16355. [Google Scholar]
- Sunnybrook Hospital. Patient and Visitor Recording Policy. Available online: https://sunnybrook.ca/content/?page=privacy-recording-policy (accessed on 28 May 2024).
- Bello, S.A.; Yu, S.; Wang, C. Review: Deep Learning on 3D Point Clouds. Remote. Sens. 2020, 12, 1729. [Google Scholar] [CrossRef]
- Xu, R.; Wang, X.; Wang, T.; Chen, Y.; Pang, J.; Lin, D. PointLLM: Empowering Large Language Models to Understand Point Clouds. arXiv 2023, arXiv:2308.16911. [Google Scholar]
- Robinson, F.; Nejat, G. A Deep Learning Human Activity Recognition Framework for Socially Assistive Robots to Support Reablement of Older Adults. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; IEEE: London, UK, 2023; pp. 6160–6167. [Google Scholar]
- Meta. Meta Llama 2. Available online: https://llama.meta.com/llama2/ (accessed on 28 May 2024).
- Colossal-AI. One Half-Day of Training Using a Few Hundred Dollars Yields Similar Results to Mainstream Large Models: Open-Source and Commercial-Free Domain-Specific LLM Solution. Available online: https://hpc-ai.com/blog/one-half-day-of-training-using-a-few-hundred-dollars-yields-similar-results-to-mainstream-large-models-open-source-and-commercial-free-domain-specific-llm-solution (accessed on 28 May 2024).
- Zhang, Z.; Zhao, J.; Zhang, Q.; Gui, T.; Huang, X. Unveiling Linguistic Regions in Large Language Models. arXiv 2024, arXiv:2402.14700. [Google Scholar]
- Wen-Yi, A.; Mimno, D. Hyperpolyglot LLMs: Cross-Lingual Interpretability in Token Embeddings. arXiv 2023. [Google Scholar]
- Mahowald, K.; Ivanova, A.A.; Blank, I.A.; Kanwisher, N.; Tenenbaum, J.B.; Fedorenko, E. Dissociating Language and Thought in Large Language Models. Trends Cogn. Sci. 2024, 28, 517–540. [Google Scholar] [CrossRef] [PubMed]
- Chang, K.; Xu, S.; Wang, C.; Luo, Y.; Xiao, T.; Zhu, J. Efficient Prompting Methods for Large Language Models: A Survey. arXiv 2024, arXiv:2404.01077. [Google Scholar]
- OpenAI. Introducing GPTs. Available online: https://openai.com/index/introducing-gpts/ (accessed on 24 June 2024).
- Choi, J.; Yun, J.; Jin, K.; Kim, Y. Multi-News+: Cost-Efficient Dataset Cleansing via LLM-Based Data Annotation. arXiv 2024, arXiv:2404.09682. [Google Scholar]
- Ishibashi, Y.; Shimodaira, H. Knowledge Sanitization of Large Language Models. arXiv 2024, arXiv:2309.11852. [Google Scholar]
- Faraboschi, P.; Giles, E.; Hotard, J.; Owczarek, K.; Wheeler, A. Reducing the Barriers to Entry for Foundation Model Training. arXiv 2024, arXiv:2404.08811. [Google Scholar]
- Guo, M.; Wang, Y.; Yang, Q.; Li, R.; Zhao, Y.; Li, C.; Zhu, M.; Cui, Y.; Jiang, X.; Sheng, S.; et al. Normal Workflow and Key Strategies for Data Cleaning Toward Real-World Data: Viewpoint. Interact. J. Med. Res. 2023, 12, e44310. [Google Scholar] [CrossRef]
- Common Crawl—Overview. Available online: https://commoncrawl.org/overview (accessed on 24 June 2024).
- Chaudhari, S.; Aggarwal, P.; Murahari, V.; Rajpurohit, T.; Kalyan, A.; Narasimhan, K.; Deshpande, A.; da Silva, B.C. RLHF Deciphered: A Critical Analysis of Reinforcement Learning from Human Feedback for LLMs. arXiv 2024, arXiv:2404.08555. [Google Scholar]
- Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating Noise to Sensitivity in Private Data Analysis. In Theory of Cryptography, Proceedings of the Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, 4–7 March 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 265–284. [Google Scholar]
- Lukas, N.; Salem, A.; Sim, R.; Tople, S.; Wutschitz, L.; Zanella-Béguelin, S. Analyzing Leakage of Personally Identifiable Information in Language Models. In Proceedings of the 2023 IEEE Symposium on Security and Privacy (SP), Los Alamitos, CA, USA, 22–24 May 2023; pp. 346–363. [Google Scholar]
- Lample, G.; Ballesteros, M.; Subramanian, S.; Kawakami, K.; Dyer, C. Neural Architectures for Named Entity Recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA, 12–17 June 2016; pp. 260–270. [Google Scholar]
- Yu, D.; Kairouz, P.; Oh, S.; Xu, Z. Privacy-Preserving Instructions for Aligning Large Language Models. In Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria, 21–27 July 2024. [Google Scholar]
- Li, X.; Tramèr, F.; Liang, P.; Hashimoto, T. Large Language Models Can Be Strong Differentially Private Learners. In Proceedings of the International Conference on Learning Representations, Virtual Event, 25–29 April 2022. [Google Scholar]
- Everything You Need to Know about the “Right to Be Forgotten”. Available online: https://gdpr.eu/right-to-be-forgotten/ (accessed on 24 June 2024).
- Fife Health Charity. RITA (Reminiscence/Rehabilitation & Interactive Therapy Activities). Available online: https://www.nhsfife.org/fife-health-charity/what-weve-funded/rita-reminiscencerehabilitation-interactive-therapy-activities/ (accessed on 3 July 2024).
- Gao, Y.; Xiong, Y.; Gao, X.; Jia, K.; Pan, J.; Bi, Y.; Dai, Y.; Sun, J.; Wang, M.; Wang, H. Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv 2024, arXiv:2312.10997. [Google Scholar]
- OpenAI. GPT-4V(ision) System Card. Available online: https://openai.com/index/gpt-4v-system-card/ (accessed on 4 July 2024).
- Koh, W.Q.; Felding, S.A.; Budak, K.B.; Toomey, E.; Casey, D. Barriers and Facilitators to the Implementation of Social Robots for Older Adults and People with Dementia: A Scoping Review. BMC Geriatr. 2021, 21, 351. [Google Scholar] [CrossRef]
- OpenAI API. Available online: https://openai.com/index/openai-api/ (accessed on 4 July 2024).
- YouTube Data API. Available online: https://developers.google.com/youtube/v3 (accessed on 4 July 2024).
- Stockton, T. Organizations Fear Ontario’s Investment to Reduce Surgical Wait Times Will Endanger Patients Because of Nursing Shortage. Capital Current. 2021. Available online: https://capitalcurrent.ca/organizations-fear-ontarios-investment-to-reduce-surgical-wait-times-will-endanger-patients-because-of-nursing-shortage/ (accessed on 4 July 2024).
- Driess, D.; Xia, F.; Sajjadi, M.S.M.; Lynch, C.; Chowdhery, A.; Ichter, B.; Wahid, A.; Tompson, J.; Vuong, Q.; Yu, T.; et al. PaLM-E: An Embodied Multimodal Language Model. In Proceedings of the 40th International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; p. 340. [Google Scholar]
- Iovino, M.; Scukins, E.; Styrud, J.; Ögren, P.; Smith, C. A Survey of Behavior Trees in Robotics and AI. Robot. Auton. Syst. 2022, 154, 104096. [Google Scholar] [CrossRef]
- Google Cloud. Cloud Computing Services. Available online: https://cloud.google.com/ (accessed on 5 July 2024).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Pashangpour, S.; Nejat, G. The Future of Intelligent Healthcare: A Systematic Analysis and Discussion on the Integration and Impact of Robots Using Large Language Models for Healthcare. Robotics 2024, 13, 112. https://doi.org/10.3390/robotics13080112