Human-AI Collaborative Interaction Design: Rethinking Human-Computer Symbiosis in the Age of Intelligent Systems

Special Issue Editor


Dr. Qianling Jiang
Guest Editor
School of Design, Jiangnan University, Wuxi 214122, China
Interests: human-computer interaction; AI-assisted design; user perception and preference; design research methods; smart services and system usability; digital literacy and ethics

Special Issue Information

Dear Colleagues,

With the rapid advancement of generative AI, adaptive systems, and intelligent interfaces, the paradigm of human-computer interaction is shifting toward a model of greater collaboration between humans and artificial intelligence. This Special Issue focuses on human-AI collaborative interaction design, with the aim of exploring the ways in which design methods, system architectures, and user experiences are being redefined in this new era of human-machine co-evolution.

We welcome research that investigates AI-assisted creativity, user perceptions of algorithmic agency, ethical dimensions of human-AI collaboration, and system transparency. Interdisciplinary approaches that combine design research, behavioral science, and computational methods are particularly encouraged. Topics may include adaptive interfaces, co-creative tools, human-in-the-loop systems, and the role of AI literacy in shaping user empowerment and interaction effectiveness.

Dr. Qianling Jiang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human-AI collaboration
  • co-creation systems
  • AI-augmented design
  • human-in-the-loop interaction
  • adaptive interfaces
  • generative AI
  • algorithmic agency
  • interaction transparency
  • user trust and ethics
  • AI literacy

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

12 pages, 1597 KB  
Article
Cognitive Workload Assessment in Aerospace Scenarios: A Cross-Modal Transformer Framework for Multimodal Physiological Signal Fusion
by Pengbo Wang, Hongxi Wang and Heming Zhang
Multimodal Technol. Interact. 2025, 9(9), 89; https://doi.org/10.3390/mti9090089 - 26 Aug 2025
Abstract
In the field of cognitive workload assessment for aerospace training, existing methods exhibit significant limitations in unimodal feature extraction and in leveraging complementary synergy among multimodal signals, while current fusion paradigms struggle to effectively capture nonlinear dynamic coupling characteristics across modalities. This study proposes DST-Net (Cross-Modal Downsampling Transformer Network), which synergistically integrates pilots’ multimodal physiological signals (electromyography, electrooculography, electrodermal activity) with flight dynamics data through an Anti-Aliasing and Average Pooling LSTM (AAL-LSTM) data fusion strategy combined with cross-modal attention mechanisms. Evaluation on the “CogPilot” dataset for flight task difficulty prediction demonstrates that AAL-LSTM achieves substantial performance improvements over existing approaches (AUC = 0.97, F1 Score = 94.55). Given the dataset’s frequent sensor data missingness, the study further enhances the simulated flight experiments. By incorporating eye-tracking features via cross-modal attention mechanisms, the upgraded DST-Net framework achieves even higher performance (AUC = 0.998, F1 Score = 97.95) and reduces the root mean square error (RMSE) of cumulative flight error prediction to 1750. These advancements provide critical support for safety-critical aviation training systems.

25 pages, 19135 KB  
Article
Development of a Multi-Platform AI-Based Software Interface for the Accompaniment of Children
by Isaac León, Camila Reyes, Iesus Davila, Bryan Puruncajas, Dennys Paillacho, Nayeth Solorzano, Marcelo Fajardo-Pruna, Hyungpil Moon and Francisco Yumbla
Multimodal Technol. Interact. 2025, 9(9), 88; https://doi.org/10.3390/mti9090088 - 26 Aug 2025
Abstract
The absence of parental presence has a direct impact on the emotional stability and social routines of children, especially during extended periods of separation from their family environment, as in the case of daycare centers, hospitals, or when they remain alone at home. At the same time, the technology currently available to provide emotional support in these contexts remains limited. In response to the growing need for emotional support and companionship in child care, this project proposes the development of a multi-platform software architecture based on artificial intelligence (AI), designed to be integrated into humanoid robots that assist children between the ages of 6 and 14. The system enables daily verbal and non-verbal interactions intended to foster a sense of presence and personalized connection through conversations, games, and empathetic gestures. Built on the Robot Operating System (ROS), the software incorporates modular components for voice command processing, real-time facial expression generation, and joint movement control. These modules allow the robot to hold natural conversations, display dynamic facial expressions on its LCD (Liquid Crystal Display) screen, and synchronize gestures with spoken responses. Additionally, a graphical interface enhances the coherence between dialogue and movement, thereby improving the quality of human–robot interaction. Initial evaluations conducted in controlled environments assessed the system’s fluency, responsiveness, and expressive behavior. Subsequently, it was implemented in a pediatric hospital in Guayaquil, Ecuador, where it accompanied children during their recovery. It was observed that this type of artificial intelligence-based software can significantly enhance the experience of children, opening promising opportunities for its application in clinical, educational, recreational, and other child-centered settings.

14 pages, 412 KB  
Article
Do Novices Struggle with AI Web Design? An Eye-Tracking Study of Full-Site Generation Tools
by Chen Chu, Jianan Zhao and Zhanxun Dong
Multimodal Technol. Interact. 2025, 9(9), 85; https://doi.org/10.3390/mti9090085 - 22 Aug 2025
Abstract
AI-powered full-site web generation tools promise to democratize website creation for novice users. However, their actual usability and accessibility for novice users remain insufficiently studied. This study examines interaction barriers faced by novice users when using Wix ADI to complete three tasks: Task 1 (onboarding), Task 2 (template customization), and Task 3 (product page creation). Twelve participants with no web design background were recruited to perform these tasks while their behavior was recorded via screen capture and eye-tracking (Tobii Glasses 2), supplemented by post-task interviews. Task completion rates declined significantly in Task 2 (66.67%) and Task 3 (33.33%). Help-seeking behaviors increased significantly, particularly during template customization and product page creation. Eye-tracking data indicated elevated cognitive load in later tasks, with fixation count and saccade count peaking in Task 2 and pupil diameter peaking in Task 3. Qualitative feedback identified core challenges such as interface ambiguity, limited transparency in AI control, and disrupted task logic. These findings reveal a gap between AI tool affordances and novice user needs, underscoring the importance of interface clarity, editable transparency, and adaptive guidance. As full-site generators increasingly target general users, lowering barriers for novice audiences is essential for equitable access to web creation.
