Integrating Large Language Models into Robotic Autonomy

A special issue of AI (ISSN 2673-2688). This special issue belongs to the section "AI Systems: Theory and Applications".

Deadline for manuscript submissions: 31 August 2026

Special Issue Editors


Guest Editor
School of Computer Science and Engineering, California State University San Bernardino, San Bernardino, CA 92407, USA
Interests: machine learning; computer vision; human behavioral analysis with wireless body/unobtrusive sensors; deep learning technologies; intelligent sensing application development for healthcare; big data; data science; AI in cyber-physical systems

Guest Editor
School of Computer Science and Engineering, California State University San Bernardino, San Bernardino, CA 92407, USA
Interests: generative AI; large language models; educational AI; reinforcement learning; medical image analytics; AI in healthcare; human-AI interaction

Guest Editor
School of Computer Science and Engineering, California State University San Bernardino, San Bernardino, CA 92407, USA
Interests: machine learning; data mining; optimization; bioinformatics

Guest Editor
College of Engineering and Computer Science, California State University Fullerton, Fullerton, CA 92831, USA
Interests: AI-empowered robotics; web/mobile application development; augmented/virtual/mixed reality; reinforcement/imitation/curriculum learning

Special Issue Information

Dear Colleagues,

Recent advances in large language models (LLMs) are reshaping the landscape of artificial intelligence (AI) and unlocking new possibilities in robotic autonomy. Robotics draws on accumulated domain-specific knowledge spanning sensing and perception, the computation of kinematics- and dynamics-based actions, and control theory. LLMs bring a transformative capability to how robots understand, interact with, and navigate their environments, bridging symbolic reasoning with sensorimotor intelligence. They enable robots to reason and behave in a more human-like manner and to coordinate and manage diverse operations intelligently. This Special Issue invites contributions that explore both theoretical frameworks and real-world applications of LLMs in robotics, especially in domains where voice, vision, and contextual learning converge to achieve complex autonomous behavior. Topics of interest include, but are not limited to, sensor-grounded perception learning, real-time decision-making through voice and text commands, robot-to-human and robot-to-robot interaction, and structured reasoning through multi-level representations.

This Special Issue spotlights exceptional research on integrating large language models into robotic autonomy, emphasizing cutting-edge advances, innovations, developments, and emerging trends in locomotion, navigation, manipulation, representation, interpretation, and voice-based interaction. High-quality papers addressing both theoretical and practical applications of LLMs to robotic autonomy are welcome.

Dr. Qingquan Sun
Dr. Jennifer Jin
Dr. Xiangyu Li
Dr. Duy Ho
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • LLMs in locomotion control
  • LLMs in navigation and semantic planning
  • LLMs in physical interaction and manipulation
  • voice-based control and communication
  • sensor-grounded perception learning
  • real-time decision-making through voice and text commands
  • robot-to-human/robot-to-robot interaction
  • structured reasoning through multi-level representations

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

12 pages, 284 KB  
Article
LLM-Based Control for Simulated Physical Reasoning: Modular Evaluation in the NeurIPS Embodied Agent Interface Challenge
by Hilmi Demirhan and Wlodek Zadrozny
AI 2026, 7(4), 131; https://doi.org/10.3390/ai7040131 - 3 Apr 2026
Viewed by 361
Abstract
Benchmark-driven evaluation helps distinguish between planning quality and interface reliability when large language models are used for embodied reasoning in simulation. Our submission to the Embodied Agent Interface Challenge (EAI) is evaluated across four stages of the pipeline: goal interpretation, subgoal decomposition, action sequencing, and transition modeling. The tasks run in the BEHAVIOR and VirtualHome simulators, which use constrained action vocabularies, fixed object inventories, and symbolic state representations within a standard evaluation protocol. Our system accesses the OpenAI API using GPT-4.1 for BEHAVIOR, GPT-4.1-mini for VirtualHome, and GPT-5-mini in later exploratory experiments across both environments. The schemas for each task determine how the outputs are structured, and outputs are regenerated when they do not follow the specification. On the final public leaderboard, our system ranked eighteenth overall with a score of 57.92, achieving 68.88 on BEHAVIOR and 46.96 on VirtualHome. In this paper, we describe our approach and discuss what these observations suggest about the strengths and limitations of current language models when used for embodied reasoning.
(This article belongs to the Special Issue Integrating Large Language Models into Robotic Autonomy)