Search Results (543)

Search Parameters:
Keywords = collaborative interface

19 pages, 4343 KB  
Article
Tribomechanical Behaviour and Elasto-Plastic Contact Response of 3D-Printed Versus Conventional Polymer Inserts in Robotic Gripping Interfaces
by Georgiana Ionela Păduraru, Andrei Călin, Marilena Stoica, Delia Alexandra Prisecaru and Petre Lucian Seiciu
Polymers 2026, 18(7), 891; https://doi.org/10.3390/polym18070891 - 6 Apr 2026
Viewed by 54
Abstract
Three-dimensional printed polymers produced using Fused Deposition Modelling (FDM) exhibit directional microstructures resulting from filament paths, layer interfaces, and cellular infill, leading to mechanical and tribological responses distinct from those of homogeneous bulk materials. This study presents a comparative tribomechanical evaluation of polypropylene (PP) bulk inserts and 3D-printed polyethylene terephthalate glycol (PETG) inserts with a 30% hexagonal infill, relevant for robotic gripping applications. Progressive scratch tests were performed under loads from 5 to 100 N (150 N for PP), and profilometry was applied to quantify groove morphology, ridge formation, and displaced-volume ratios. An elasto-plastic conical indentation model was used to derive indentation pressures and elastic–plastic transition radii from groove geometry. The PETG inserts exhibited heterogeneous groove depth, intermittent ridge tearing, and friction fluctuations associated with the internal infill structure, consistent with previous findings on anisotropy and architecture-dependent behaviour in additively manufactured polymers. In contrast, bulk PP demonstrated smoother friction profiles and more stable plastic flow under increasing loads. Two functional indices—specific frictional work and ridge-to-trace volumetric ratio—are introduced to support material selection for robotic gripping systems. The results show that local contact mechanics in 3D-printed inserts are governed by print-induced structural features and can be effectively evaluated through a scratch-based elasto-plastic analysis. The methods and results presented in this work support the rational selection and design of polymer inserts for robotic gripper fingertips. The proposed scratch-based elasto-plastic evaluation framework enables manufacturers and automation engineers to compare 3D-printed and conventional materials based on friction stability, wear response, and deformation resistance. 
This approach can be directly applied to optimise gripping performance in industrial handling, packaging, and collaborative robotics. Full article
(This article belongs to the Section Polymer Processing and Engineering)
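The abstract above introduces two functional indices, specific frictional work and the ridge-to-trace volumetric ratio, without giving explicit formulas. A minimal sketch under assumed definitions: specific frictional work taken as the trapezoidal integral of tangential force over sliding distance normalised by displaced groove volume, and the ridge-to-trace ratio as piled-up ridge volume over groove volume. Function names and units are illustrative, not from the paper.

```python
# Illustrative (assumed) forms of the two scratch-test indices named in the
# abstract; the paper's exact definitions may differ.

def specific_frictional_work(forces_n, positions_mm, groove_volume_mm3):
    """Trapezoidal integral of tangential force (N) over sliding distance (mm),
    normalised by displaced groove volume (mm^3) -> mJ/mm^3."""
    work_mj = 0.0
    for i in range(1, len(forces_n)):
        dx = positions_mm[i] - positions_mm[i - 1]
        work_mj += 0.5 * (forces_n[i] + forces_n[i - 1]) * dx  # N * mm = mJ
    return work_mj / groove_volume_mm3

def ridge_to_trace_ratio(ridge_volume_mm3, groove_volume_mm3):
    """Volume piled up in ridges divided by volume displaced in the groove."""
    return ridge_volume_mm3 / groove_volume_mm3
```

Values near 1.0 for the ratio would suggest mostly plastic ploughing (material displaced to the sides), while low values would suggest material removal or densification.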

22 pages, 1697 KB  
Review
From Gut to Green: Cross-Kingdom Adaptation of Human Pathogens in Plant Hosts
by Jamial Hashin Himel, Y. S. Sumaiya, Mrinmoy Kundu, Mahabuba Mostafa and Md. Motaher Hossain
Stresses 2026, 6(2), 18; https://doi.org/10.3390/stresses6020018 - 5 Apr 2026
Viewed by 170
Abstract
Cross-kingdom pathogenesis—human and animal pathogens colonizing and persisting in plants—is transforming our understanding of microbial ecology, food safety, and public health. This review synthesizes emerging research showing that plants are more than mute carriers: they are dynamic ecological interfaces where human and zoonotic pathogens, such as Salmonella enterica, Escherichia coli O157:H7, and Listeria monocytogenes, can adhere, internalize, and, in some cases, evade host defenses. Such pathogens exploit evolutionarily conserved molecular processes, such as the Type III secretion system (TTSS), biofilm formation, quorum sensing, and small RNA-mediated immune sabotage, that have allowed them to cross biological kingdom boundaries. Environmental conditions (e.g., contaminated irrigation water, manure application, wildlife access, and mechanical wounding) provide entry points, promoting pathogen transfer to and penetration into plant tissues through stomata and hydathodes above ground or through roots below ground. Once inside, pathogens confront a range of plant immune responses, indigenous microbiota, and abiotic stresses such as UV radiation exposure, nutrient starvation, and osmotic fluctuations. Nonetheless, biofilm production, metabolic versatility, and virulence gene expression contribute to their persistence. Interactions with plant pathogens and microbiomes additionally shape colonization dynamics, for example, through co-survival and niche manipulation. As climate change, urbanization, and intensified agriculture accelerate these processes, cross-kingdom pathogenesis is a rising concern for One Health. The review also identifies critical knowledge gaps, including seedborne transmission, microbiome engineering, and predictive modeling, alongside emerging mitigation strategies such as point-of-care diagnostics and microbial biocontrol. 
In conclusion, this review advocates for interdisciplinary collaboration from microbiology, plant science, and One Health perspectives to predict and mitigate cross-kingdom threats to global food production. Full article
(This article belongs to the Section Plant and Photoautotrophic Stresses)

19 pages, 10048 KB  
Article
How AI-Assisted Decision-Making Paradigms and Explainability Shape Human-AI Collaboration
by Yingying Wang, Qin Ni, Tingjiang Wei, Haoxin Xu, Lu Liu and Liang He
Sustainability 2026, 18(7), 3516; https://doi.org/10.3390/su18073516 - 3 Apr 2026
Viewed by 147
Abstract
The increasing integration of artificial intelligence (AI) in educational decision-making raises a critical question: how to design AI systems that can effectively support teachers while maintaining an appropriate level of trust. Addressing this question requires not only continuous improvements in the technical capabilities of AI systems but also an examination from a human-AI interaction perspective of how different system designs influence users’ cognitive performance and affective responses, thereby providing guidance for system optimization and design. Therefore, this study conducted a randomized controlled experiment with 120 pre-service teachers to investigate how AI-assisted decision-making paradigms and AI explainability jointly influence teachers’ task performance and trust in AI, and whether these effects transfer to subsequent independent tasks. The results indicate that the effect of explanatory interface on task performance is context dependent and yields an immediate positive impact. Under the concurrent paradigm, the explanatory interface of the AI system significantly improves immediate task performance, whereas no significant effect is observed under the sequential paradigm. Moreover, this improvement is confined to the task execution stage and does not transfer to subsequent independent tasks. In contrast, the effect of explanatory interface on trust exhibits a delayed and negative pattern. The explanatory interface has no significant impact on situational trust, while it exerts a negative effect on learned trust and suppresses the natural development of both cognitive trust and emotional trust. In addition, different AI-assisted decision-making paradigms exhibit distinct patterns of influence on task performance and trust. Although the concurrent paradigm performs worse than the sequential paradigm in terms of immediate task performance, it is more effective in promoting users’ emotional trust. 
Overall, these findings extend the theoretical understanding of the mechanisms of explainability in human-AI interaction and provide empirical evidence for the joint design of explainable AI systems and human-AI collaboration paradigms. Full article
(This article belongs to the Special Issue AI for Sustainable and Creative Learning in Education)

33 pages, 2402 KB  
Review
Toward Advanced Sensing and Data-Driven Approaches for Maturity Assessment of Indeterminate Peanut Cropping Systems: Review of Current State and Prospects
by Sathish Raymond Emmanuel Sahayaraj, Abhilash K. Chandel, Pius Jjagwe, Ranadheer Reddy Vennam, Maria Balota and Arunachalam Manimozhian
Sensors 2026, 26(7), 2208; https://doi.org/10.3390/s26072208 - 2 Apr 2026
Viewed by 377
Abstract
Determining the optimal harvest time is among the most critical economic decisions for peanut (Arachis hypogaea L.) growers, directly influencing yield, quality, and market value. Unlike many other crops, peanuts are indeterminate, continuing to flower and produce pods throughout their life cycle. As a result, pod development and maturation are asynchronous, making harvest timing particularly challenging. Conventional maturity estimation techniques, including the hull scrape method, pod blasting, and visual maturity profiling, are invasive, labor-intensive, time-consuming, and spatially limited. Moreover, differences in cultivar maturity rates and agroclimatic conditions exacerbate inconsistencies in maturity prediction. These challenges highlight the urgent need for scalable, objective, and data-driven methods to support growers in achieving optimal harvest outcomes. This review synthesizes the current understanding of peanut pod maturity and evaluates existing traditional and non-invasive approaches for maturity estimation. It aims to identify the limitations of conventional techniques and explore the integration of advanced sensing technologies, artificial intelligence (AI), and geospatial analytics to enhance precision and scalability in peanut maturity assessment and harvest decision-making. This review examines traditional destructive techniques such as the hull scrape method and pod blasting, followed by emerging non-invasive methods employing proximal and remote sensing platforms. Applications of vegetation indices, multispectral and hyperspectral imaging, and AI-based data analytics are discussed in the context of maturity prediction. Additionally, the potential of multimodal remote sensing data fusion and digital frameworks integrating spatial big data analytics, centralized data management, and cloud-based graphical interfaces is explored as a pathway toward end-to-end decision-support systems. 
Recent advances in non-invasive sensing and AI-assisted modeling have demonstrated significant improvements in scalability, precision, and automation compared with traditional manual approaches. However, their effectiveness remains constrained by the limited inclusion of agroclimatic, phenological, and cultivar-specific variables. Furthermore, the translation of model outputs into actionable, field-level harvest decisions is still underdeveloped, underscoring the need for integrated, user-centric digital infrastructure. Achieving a robust and transferable digital peanut maturity estimation system will require comprehensive ground-truth data across cultivars, regions, and growing seasons. Multidisciplinary collaborations among agronomists, data scientists, growers, and technology providers will be essential for developing practical, field-ready solutions. Integrating AI, multimodal sensing, and geospatial analytics holds immense potential to transform peanut maturity estimation. Such innovations promise to enhance harvest precision, economic returns, and sustainability while reducing manual effort and uncertainty, ultimately improving the efficiency and quality of life for peanut producers worldwide. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2026)

23 pages, 49319 KB  
Article
iLog 2.2: Volume and Nutrition Estimation for Mixed Foods via Mask R-CNN and Federated Learning
by Indira Devi Siripurapu, Laavanya Rachakonda, Saraju P. Mohanty and Elias Kougianos
Electronics 2026, 15(7), 1460; https://doi.org/10.3390/electronics15071460 - 1 Apr 2026
Viewed by 229
Abstract
Accurately estimating calorie intake and nutrient composition from what we eat remains one of the most practical challenges in maintaining a healthy lifestyle. Manual food logging and database-based estimations are often inaccurate because ingredient proportions and preparation styles vary widely. This paper presents a lightweight, privacy-preserving framework that estimates calories and detailed nutrient values from a single image. The model uses a Mask R-CNN-based segmentation network to identify visible food components, measure their area, estimate their volume using preset height values, and map them to nutritional information obtained from reliable datasets such as USDA and Food-a-pedia. The system integrates federated learning (FL) to ensure privacy by allowing the model to improve collaboratively without sharing raw user data. The proposed architecture achieved a mean Average Precision (mAP) of 96% for detection and 92% for segmentation, confirming its precision and efficiency. The model is trained and evaluated on a curated pizza dataset consisting of 1107 images across 50 topping categories, using a standard train-validation-test split (666/219/222) to ensure reliable performance assessment. The proposed system also achieves low nutrition estimation error, with calorie and nutrient deviations remaining within approximately 3.8% to 11.1% across evaluated metrics. A lightweight mobile interface is demonstrated through a Figma-based prototype mockup to illustrate potential real-world deployment and user interaction. Full article
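The pipeline described above maps segmented food area to volume via preset height values, then to nutrients. A minimal sketch of that area-to-volume-to-calorie chain, where the scale factor, preset heights, densities, and kcal-per-gram values are all hypothetical placeholders rather than values from the paper or the USDA database:

```python
# Hypothetical lookup tables; the paper derives its values from USDA and
# Food-a-pedia data, which are not reproduced here.
PRESET_HEIGHT_MM = {"pizza_base": 8.0, "pepperoni": 2.0}
DENSITY_G_PER_MM3 = {"pizza_base": 0.0009, "pepperoni": 0.001}
KCAL_PER_G = {"pizza_base": 2.7, "pepperoni": 5.0}

def estimate_kcal(mask_pixels, mm_per_pixel, label):
    """Convert a segmentation mask's pixel count into an energy estimate:
    pixel count -> area (mm^2) -> volume (mm^3) -> mass (g) -> kcal."""
    area_mm2 = mask_pixels * (mm_per_pixel ** 2)
    volume_mm3 = area_mm2 * PRESET_HEIGHT_MM[label]
    grams = volume_mm3 * DENSITY_G_PER_MM3[label]
    return grams * KCAL_PER_G[label]
```

In the actual system the mask pixel counts would come from the Mask R-CNN segmentation head, with one call per detected food component.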

23 pages, 7096 KB  
Article
Research and Application of Functional Model Construction Method for Production Equipment Operation Management and Control Oriented to Diversified and Personalized Scenarios
by Jun Li, Keqin Dou, Jinsong Liu, Qing Li and Yong Zhou
Machines 2026, 14(4), 368; https://doi.org/10.3390/machines14040368 - 27 Mar 2026
Viewed by 274
Abstract
As complex system engineering involving multiple stakeholders, multi-objective collaboration, and multi-spatiotemporal scales, the components, logical structure, and functional mechanisms of production equipment operation management and control (PEOMC) can be generalized through functional modelling to support dynamic analysis and intelligent decision-making of PEOMC in the industrial internet environment. To address the diversity of scenarios and objectives of PEOMC, a hierarchical construction method for the functional model of PEOMC based on IDEF0 is proposed. By analysing relevant international standards, such as ISO 55010, ISO/IEC 62264, and OSA-CBM, the generic functional modules for the first and second layers of the functional model are identified and defined. On the basis of semi-supervised machine learning, topic clustering is used to extract the components, functional mechanisms, and logical relationships of production equipment operation management and control from approximately 200 standard texts and to construct a reference resource pool for the third-layer functional module. On this basis, an interface matching and recursive traversal algorithm for functional modules is designed, and a composition and orchestration strategy of functional modules for specific scenarios is provided to support the flexible construction of diversified and personalized PEOMC scenarios. The proposed construction and application method was validated through an engineering case study in an aero-engine transmission unit manufacturing workshop: the average process capability index of the enterprise’s production equipment steadily increased from 1.28 to approximately 1.60, the mean time to repair (MTTR) of production equipment failures significantly decreased from 8 h to 3 h, and the average overall equipment effectiveness (OEE) increased from 56.43% to a stable 68.57%, demonstrating its effectiveness and practicality. Full article
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
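The case study above reports overall equipment effectiveness (OEE) rising from 56.43% to 68.57%. The paper's own calculation is not shown in the abstract; the conventional definition is the product of availability, performance, and quality rates, sketched here with illustrative inputs:

```python
def oee(availability, performance, quality):
    """Conventional OEE: product of the three rates, each in [0, 1]."""
    return availability * performance * quality

# Illustrative decomposition (not from the paper): rates of 0.85, 0.80,
# and 0.83 multiply out to 0.5644, close to the reported 56.43% baseline.
baseline = oee(0.85, 0.80, 0.83)
```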

20 pages, 7287 KB  
Article
Learning How to Live with Risk—The Role of Co-Design for Managing City–Port Thresholds in Castellammare di Stabia, Naples, Italy
by Libera Amenta and Paolo De Martino
Sustainability 2026, 18(7), 3242; https://doi.org/10.3390/su18073242 - 26 Mar 2026
Viewed by 501
Abstract
City–port thresholds are increasingly exposed to multi-risk, including climate change impacts, pollution, and obsolescence of buildings and infrastructure as well as socio-economic marginalization. This paper aims to understand what role co-design—and more generally collaborative planning processes—can play in enabling communities and institutions to learn how to live with risk when managing water, city–port interfaces, and coastal public spaces. To do so, this paper analyses the experience of a co-design workshop held in Castellammare di Stabia, in the Metropolitan Area of Naples, organized within the framework of the research MIRACLE and SPArTaCHus. The results of the workshop show that co-design can act as an effective instrument for developing strategies aimed at the regeneration and valorization of underused, abandoned, or polluted spaces in the coastal thresholds of City–Port areas—wastescapes—that are exposed to multiple risks. In these complex territories new methods are needed to understand, describe and interpret the fuzzy boundaries between the city and the port to collaboratively envision sustainable strategies for urban regeneration of coastal wastescapes. Full article

23 pages, 27743 KB  
Review
A Framework for Safe Mobile Manipulation in Human-Centered Applications
by Pangcheng David Cen Cheng, Cesare Luigi Blengini, Rosario Francesco Cavelli, Angela Ripi and Marina Indri
Robotics 2026, 15(4), 68; https://doi.org/10.3390/robotics15040068 - 25 Mar 2026
Viewed by 371
Abstract
In recent years, applications with robots collaborating actively with humans have been increasing. The transition from Industry 4.0 to 5.0 rearranges the focus of fully automated processes to a human-centered system that allows more customization and flexibility. In human-centered systems, the robot is expected to safely assist or provide support to the human operator, avoiding any unintentional harm, while the latter is focused on tasks that require human reasoning, since current decision-making systems still have some limitations. This survey reviews all the main functionalities required to make a robot (collaborative or not) act as an assistant for human operators, analyzing and comparing solutions proposed by the authors (based on previous works) and/or the ones available in the literature. In this way, it is possible to combine those functionalities and build a complete framework enabling safe mobile manipulation while interacting with humans. In particular, a mobile manipulator is used to receive requests from a user, navigate in a human-shared environment, identify the requested object, and grasp and safely deliver such an object to the user. The framework, which is completed by a user interface designed using Android Studio, is developed in ROS1, tested, and validated on a real mobile manipulator in real-world conditions. Full article
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)

30 pages, 22493 KB  
Article
H-CoRE: A Cooperative Framework for Heterogeneous Multi-Robot Exploration and Inspection
by Simone D’Angelo, Francesca Pagano, Riccardo Caccavale, Vincenzo Scognamiglio, Alessandro De Crescenzo, Pasquale Merone, Stefano Ciaravino, Alberto Finzi and Vincenzo Lippiello
Drones 2026, 10(4), 232; https://doi.org/10.3390/drones10040232 - 25 Mar 2026
Viewed by 436
Abstract
This paper presents the H-CoRE (Heterogeneous Cooperative Multi-Robot Execution) framework designed to enable autonomous multi-robot operations in GNSS-denied environments. Built on an ROS 2-based architecture, H-CoRE enables collaborative, structured task execution through standardized software stacks. Each robot’s stack combines a high-level executive system with an agent-specific motion layer and leverages multi-sensor fusion for localization and mapping. The framework is inherently reconfigurable, allowing individual agents to operate autonomously or as part of a multi-robot team for collaborative missions. In the considered scenario, the system integrates aerial and ground vehicles, a fixed pan–tilt–zoom camera, and a human supervisory interface within a unified, modular infrastructure. The proposed system has been deployed in indoor, GNSS-denied environments, demonstrating autonomous navigation, cooperative area coverage, and real-time information sharing across multiple agents. Experimental results confirm the effectiveness of H-CoRE in maintaining general awareness and mission continuity, paving the way for future applications in search-and-rescue, inspection, and exploration tasks. Full article

38 pages, 6506 KB  
Review
Systemic Integration of Artificial Intelligence in Financial Project Management: A Systematic Literature Review and BERTopic-Based Analysis
by Styve L. Ndjonkin Simen, Simon P. Philbin and Gordon Hunter
Appl. Syst. Innov. 2026, 9(4), 68; https://doi.org/10.3390/asi9040068 - 24 Mar 2026
Viewed by 353
Abstract
Artificial Intelligence (AI) is increasingly embedded in project management within the financial sector, yet existing research remains fragmented and largely focused on isolated technical applications. A systemic understanding of how AI reshapes financial project management as an integrated socio-technical capability is still lacking. This study addresses this gap through a systematic literature review of 62 peer-reviewed articles (2022–2025), combined with BERTopic-based thematic analysis supported by large language model-assisted topic representation. The findings reveal the emergence of Agentic AI as a dominant theme, marking a shift from analytical support tools toward autonomous and collaborative agents embedded in project processes. While predictive analytics and automation are relatively mature, governance-oriented and human-centric dimensions remain underdeveloped and weakly integrated. This study contributes by: (1) presenting a computationally enhanced systematic mapping study that integrates a systematic literature review with BERTopic-based topic modelling to map the evolving research landscape; (2) identifying Agentic AI as a pivotal interface between technical execution and strategic governance; and (3) proposing a socio-technical target architecture that offers a structured roadmap for AI-enabled transformation in financial project management systems. Full article
(This article belongs to the Special Issue AI-Driven Decision Support for Systemic Innovation)

13 pages, 1072 KB  
Article
Supporting Novice Creativity in Design Education Through Human-Centred Explainable AI
by Ahmed Al-sa’di and Dave Miller
Theor. Appl. Ergon. 2026, 2(2), 4; https://doi.org/10.3390/tae2020004 - 24 Mar 2026
Viewed by 189
Abstract
Generative artificial intelligence tools are reshaping design by enabling novice designers to produce professional-quality user interfaces rapidly. However, for novice designers, exposure to AI-generated outputs that are far beyond their capabilities can inhibit creative growth. In this work, we investigate AI overperformance, when superior AI outputs lower the creative confidence of novices, and explore whether human-centred and explainable AI interfaces can mitigate such effects while sustaining creative agency. We conducted a within-subjects experiment with 75 novice designers using a web-based research platform. Participants completed mobile app design tasks under three conditions: Human-Only (baseline), AI Overmatch (exposure to superior AI outputs), and XAI-Enhanced (exposure to AI outputs with an embedded explainable interface). A repeated-measures ANOVA indicated that creative self-efficacy varied significantly, F = 24.67, p < 0.001, η² = 0.18. While creative self-efficacy decreased significantly in the AI Overmatch condition, M = −1.18, SD = 0.32, compared to the Human-Only condition, M = 0.08, SD = 0.15, it increased significantly in the XAI-Enhanced condition, M = 0.42, SD = 0.18. This also led to a rise in creative performance across both ideation and output quality. The results showed that the AI Overmatch condition significantly reduced creative self-efficacy and originality; however, this negative effect was mitigated by the XAI-Enhanced interface, which enhanced confidence and idea quality. Mediation analysis demonstrated that expectancy disconfirmation explains the negative impact of AI overperformance on human creativity. These findings provide constructive design principles for educational AI tools and contribute to HCI theory by demonstrating that pedagogically oriented, transparent AI supports human–AI collaboration without diminishing human agency. Full article
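The ANOVA result above reports both an F statistic and an eta-squared effect size. When only F and the degrees of freedom are published, a common way to recover partial eta-squared is η²p = (F · df_effect) / (F · df_effect + df_error); the degrees of freedom in the example below are illustrative, since the abstract does not state them.

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Recover partial eta-squared from a reported F statistic and its
    degrees of freedom: F*df1 / (F*df1 + df2)."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# e.g. with hypothetical df_effect=1, df_error=90:
effect = partial_eta_squared(10.0, 1, 90)
```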

32 pages, 7914 KB  
Article
UAV Target Detection and Tracking Integrating a Dynamic Brain–Computer Interface
by Jun Wang, Zanyang Li, Lirong Yan, Muhammad Imtiaz, Hang Li, Muhammad Usman Shoukat, Jianatihan Jinsihan, Benjun Feng, Yi Yang, Fuwu Yan, Shumo He and Yibo Wu
Drones 2026, 10(3), 222; https://doi.org/10.3390/drones10030222 - 21 Mar 2026
Viewed by 548
Abstract
To address the inherent limitations in the robustness of fully autonomous unmanned aerial vehicle (UAV) visual perception and the high cognitive workload associated with manual control, this paper proposes a human-in-the-loop brain–computer interface (BCI) control framework. The system integrates steady-state visual evoked potential (SSVEP) with deep learning techniques to create a spatio-temporally dynamic interaction paradigm, enabling real-time alignment between visual targets and frequency stimuli. At the perception level, an enhanced YOLOv11 network incorporating partial convolution (PConv) and shape intersection over union (Shape-IoU) loss is developed and coupled with the DeepSort multi-object tracking algorithm. This configuration ensures high-speed execution on edge computing platforms while maintaining stable stimulus coverage over dynamic targets, thus providing a robust visual induction environment for EEG decoding. At the neural decoding level, an enhanced task-discriminant component analysis (TDCA-V) algorithm is introduced to improve signal detection stability within non-stationary flight conditions. Experimental results demonstrate that within the predefined fixation task window, the system achieves 100% success in maintaining target identity (ID). The BCI system achieved an average command recognition accuracy of 91.48% within a 1.0 s time window, with the TDCA-V algorithm significantly outperforming traditional spatial filtering methods in dynamic scenarios. These findings demonstrate the system’s effectiveness in decoupling human cognitive intent from machine execution, providing a robust solution for human–machine collaborative control. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))

18 pages, 2996 KB  
Article
A Multimodal Agentic AI Framework for Intuitive Human–Robot Collaboration
by Xiaoyun Liang and Jiannan Cai
Sensors 2026, 26(6), 1958; https://doi.org/10.3390/s26061958 - 20 Mar 2026
Viewed by 542
Abstract
Widespread acceptance of collaborative robots in human-involved scenarios requires accessible and intuitive interfaces for lay workers and non-expert users. Existing interfaces often rely on users to plan and issue low-level commands, necessitating extensive knowledge of robot control. This study proposes a multimodal agentic AI framework integrating natural user interfaces (NUIs) to foster effortless, human-like partnerships in human–robot collaboration (HRC), enhancing intuitiveness and operational efficiency. First, it allows users to instruct robots verbally in plain language, coupled with gaze to indicate target objects precisely. Second, it offloads the burden of robot motion planning from users by understanding context and reasoning about task decomposition. Third, coordinating with AI agents built on large language models (LLMs), the system interprets users' requests effectively and provides feedback to establish transparent communication. This proof-of-concept study included experiments demonstrating a practical implementation of the agentic AI framework on a mobile manipulation robot in a collaborative human–robot wood-assembly task. Seven participants were recruited to interact with the AI-integrated agentic robotic system. Task performance and user experience were measured in terms of completion time, intervention rate, and the NASA-TLX workload survey, and practical insights were summarized through qualitative analysis. This study highlights the potential of NUIs and agentic, AI-embodied robots to overcome existing HRC barriers and contributes to improving HRC intuitiveness and efficiency.
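The three capabilities described above (gaze-grounded language, LLM-driven task decomposition, and low-level command generation) can be caricatured in a few lines. Everything below is a hypothetical sketch: the function names, the command vocabulary, and the hand-written planner standing in for an LLM call are all invented for illustration, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class Command:
    action: str   # e.g. "move_to", "grasp", "release"
    target: str   # object or location label


def resolve_gaze(gaze_point, detections):
    """Ground 'that' by picking the detected object whose bounding-box
    centre lies nearest to the user's gaze point (x, y)."""
    def dist2(d):
        cx = (d["box"][0] + d["box"][2]) / 2
        cy = (d["box"][1] + d["box"][3]) / 2
        return (cx - gaze_point[0]) ** 2 + (cy - gaze_point[1]) ** 2
    return min(detections, key=dist2)["label"]


def plan(utterance, target):
    """Stub planner standing in for an LLM: decomposes a hand-over
    request into low-level commands the user never has to issue."""
    if "hand" in utterance or "give" in utterance:
        return [Command("move_to", target),
                Command("grasp", target),
                Command("move_to", "user"),
                Command("release", target)]
    return []


detections = [{"label": "wood_plank", "box": (100, 50, 300, 120)},
              {"label": "screwdriver", "box": (400, 200, 460, 260)}]
target = resolve_gaze((420, 230), detections)   # user is looking at the screwdriver
steps = plan("hand me that", target)
print(target, [c.action for c in steps])
```

The point of the sketch is the division of labour: the user supplies intent ("hand me that") and gaze, while context understanding and motion-level sequencing are delegated to the agent.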
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)

21 pages, 20926 KB  
Article
Research on Neuro-Acoustic Human–Machine Collaborative Inter-Domain Global Attention Fusion for Underwater Acoustic Target Recognition
by Jiaqi Zhang, Zhangsong Shi, Huihui Xu, Zhe Rao, Songxue Bai and Junfeng Gao
J. Mar. Sci. Eng. 2026, 14(6), 578; https://doi.org/10.3390/jmse14060578 - 20 Mar 2026
Viewed by 223
Abstract
To enhance the adaptability of current underwater acoustic target recognition technology in complex marine environments and improve the performance of human–machine collaborative operations, this study proposes a human–machine collaborative underwater acoustic target recognition method based on brain–computer interface technology. The method pairs acoustic signals with synchronized neural responses recorded from human listeners and introduces an inter-domain global attention fusion module to explore the fusion relationships between features at different depths, enhancing the joint feature representation through the complementary information latent between modalities. The experimental results show that the proposed network model enhances feature discrimination and yields a more stable recognition model. Compared to any single feature, the human–machine collaborative fusion-feature model exhibits stronger classification performance, with an average classification accuracy of 96.44%. This method alleviates the limitations of single-mode underwater acoustic target recognition, combines the complementary strengths of humans and machines to achieve effective human–machine cooperation, and offers new insights for future underwater recognition technology and marine research.
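The inter-domain attention fusion described above can be illustrated with a minimal numpy sketch in which acoustic feature tokens attend over synchronized EEG feature tokens. The dimensions, the random projections, and the concatenation step are illustrative assumptions, not the paper's architecture.

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def cross_attention_fuse(acoustic, eeg, d_k=16, seed=0):
    """acoustic: (Ta, Da) tokens, eeg: (Te, De) tokens. Acoustic tokens act
    as queries, EEG tokens as keys/values; the attended EEG context is
    concatenated onto the acoustic features, giving (Ta, Da + d_k)."""
    rng = np.random.default_rng(seed)
    Wq = rng.normal(scale=0.1, size=(acoustic.shape[1], d_k))
    Wk = rng.normal(scale=0.1, size=(eeg.shape[1], d_k))
    Wv = rng.normal(scale=0.1, size=(eeg.shape[1], d_k))
    Q, K, V = acoustic @ Wq, eeg @ Wk, eeg @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (Ta, Te) attention weights
    context = attn @ V                       # EEG information routed per token
    return np.concatenate([acoustic, context], axis=1)


acoustic = np.random.default_rng(1).normal(size=(20, 32))  # e.g. spectrogram tokens
eeg = np.random.default_rng(2).normal(size=(8, 64))        # e.g. EEG epoch features
fused = cross_attention_fuse(acoustic, eeg)
print(fused.shape)   # (20, 48)
```

In a trained model the projections would be learned and the same attention pattern could be applied at several feature depths, which is what "inter-domain global" fusion suggests.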
(This article belongs to the Section Ocean Engineering)

58 pages, 7331 KB  
Review
Human–Robot Interaction in Indoor Mobile Robotics: Current State, Interaction Modalities, Applications, and Future Challenges
by Arman Ahmed Khan and Kerstin Thurow
Sensors 2026, 26(6), 1840; https://doi.org/10.3390/s26061840 - 14 Mar 2026
Viewed by 462
Abstract
This paper provides a comprehensive survey of Human–Robot Interaction (HRI) for indoor mobile robots operating in human-centered environments such as hospitals, laboratories, offices, and homes. We review interaction modalities—including speech, gesture, touch, visual, and multimodal interfaces—and examine key user experience factors such as usability, trust, and social acceptance. Implementation challenges are discussed, encompassing safety, privacy, and regulatory considerations. Representative case studies, including healthcare and domestic platforms, highlight design trade-offs and integration lessons. We identify critical technical challenges, including robust perception, reliable multimodal fusion, navigation in dynamic spaces, and constraints on computation and power. Finally, we outline future directions, including embodied AI, adaptive context-aware interactions, and standards for safety and data protection. This survey aims to guide the development of indoor mobile robots capable of collaborating with humans naturally, safely, and effectively.
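Among the technical challenges the survey lists, reliable multimodal fusion lends itself to a small illustration. A common baseline is confidence-weighted late fusion: each modality emits a distribution over user intents, and the distributions are combined in proportion to per-modality confidence. The modalities, intents, and weights below are invented for illustration and not taken from any surveyed system.

```python
import numpy as np

INTENTS = ["stop", "follow", "fetch"]


def late_fuse(modality_probs, confidences):
    """modality_probs: dict name -> probability vector over INTENTS.
    confidences: dict name -> scalar in [0, 1].
    Returns the normalized confidence-weighted mixture distribution."""
    total = sum(confidences.values())
    fused = sum(confidences[m] * np.asarray(p)
                for m, p in modality_probs.items()) / total
    return fused / fused.sum()


probs = {
    "speech":  [0.7, 0.2, 0.1],   # ASR leans toward "stop"
    "gesture": [0.2, 0.6, 0.2],   # gesture classifier leans toward "follow"
}
conf = {"speech": 0.9, "gesture": 0.4}   # gesture tracking is noisier here
fused = late_fuse(probs, conf)
print(INTENTS[int(np.argmax(fused))])
```

Early fusion (combining raw features before classification) can exploit cross-modal correlations that late fusion misses, at the cost of needing jointly trained models; the trade-off is one of the design decisions the survey discusses.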
