Crowdsourcing, Human-AI Interaction, and the Future of Digital Platforms

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: 31 July 2026 | Viewed by 1019

Special Issue Editors


Guest Editor
Tércio Pacitti Institute of Computer Applications and Research (NCE), Federal University of Rio de Janeiro, Rio de Janeiro 21941-916, Brazil
Interests: human-computer interaction; CSCW; crowdsourcing; databases and decision support systems

Guest Editor
Department of Computer Science, Toronto Metropolitan University (TMU), Toronto, ON M5B 2K3, Canada
Interests: autonomous systems and adaptive AI; human-AI interaction; contextual LLMs; context-aware computing

Special Issue Information

Dear Colleagues,

The continuing development of artificial intelligence (AI) has profoundly reshaped digital work, social communication, and human-computer interaction. As AI systems move beyond laboratories into daily life, particularly across large-scale platforms for data creation (crowdsourcing) and information consumption (digital media), a critical need has emerged to understand the complex socio-technical dynamics, ethical implications, and practical design principles that govern these interactions. The potential for misalignment of AI systems and the emergence of destructive algorithmic behaviors, such as algorithmic sociopathy, further underscores the urgency of this interdisciplinary inquiry. In response to this need, this Special Issue aims to serve as a vital forum for cutting-edge interdisciplinary research.

In alignment with global principles of equity and inclusive design, this Special Issue also invites contributions that critically examine accessibility- and disability-inclusive AI approaches. As AI systems increasingly mediate communication, work, and social participation, it is essential to understand how these technologies either support or hinder access for people with disabilities, including sensory, cognitive, physical, and neurodiverse communities. We encourage research that investigates barriers, co-design practices, adaptive interfaces, multimodal access, and the broader socio-technical implications of deploying AI in contexts where accessibility, safety, and autonomy are paramount. We especially welcome interdisciplinary work that integrates insights from disability studies, accessible computing, universal design, and inclusive HCI, ensuring that the next generation of AI-driven digital platforms fosters equitable participation for all.

This Special Issue will focus on the convergence and mutual influence of AI, crowdsourcing, human–AI interaction (H-AI), digital media, and accessible solutions to the aforementioned emerging challenges. We aim to investigate how AI-driven technologies are shaped by human collaboration and digital environments, examining key issues such as alignment, fairness, transparency, governance, accessibility, and user experience at scale.

We invite high-quality submissions that address the theoretical foundations, empirical studies, and innovative systems spanning these interconnected domains. We particularly welcome interdisciplinary research that bridges technical development with societal, legal, or ethical considerations, offering novel insights into the future of human-AI collaboration in the digital age.

Areas of interest include, but are not limited to, the following topics:

  • New models and architectures of AI-augmented crowdsourcing (human-in-the-loop, AI-in-the-loop, and AI-sourcing);
  • Ethical and labor implications of using AI in crowdwork platforms (e.g., the "gig economy");
  • The role of crowdsourcing in the evaluation and validation of AI models (LLMs and generative AI);
  • Trust, transparency, and explainability (XAI) in AI systems interacting with humans;
  • Design of interfaces and interaction models for conversational systems and generative AI;
  • Studies on the cognitive, emotional, and psychological impacts of human–AI collaboration;
  • Design and evaluation of ethical AI nudges to promote positive behaviors and mitigate biases in digital platforms;
  • Detection and mitigation of the dissemination of misinformation, misaligned AI, and malicious algorithmic behaviors (for example, algorithmic sociopathy) on digital platforms;
  • The use of AI for sentiment analysis and social behavior analysis in large-scale social networks;
  • AI in content creation and curation (algorithmic journalism, generative art, and media personalization);
  • Implications of privacy, surveillance, and algorithmic bias in AI-controlled digital media platforms;
  • Frameworks of AI governance and regulatory structures addressing the distributed nature of crowdsourcing and the rapid evolution of digital media;
  • Development of metrics and methodologies for assessing the effectiveness and fairness of AI systems in complex socio-technical contexts;
  • Study and prevention of AI misalignment (AI alignment) and safety failures in crowdsourcing and digital media systems that may lead to sociopathic or psychopathic outcomes;
  • Accessibility and inclusive design principles in AI-enabled systems and digital media platforms;
  • AI-driven adaptive interfaces supporting diverse sensory, cognitive, and physical access needs;
  • Co-design and participatory methods with disabled users for developing human–AI interaction models;
  • Algorithmic accessibility—assessing barriers, affordances, and inequities created by AI systems;
  • Risks of algorithmic exclusion and amplification of ableist bias in crowdsourcing, content moderation, and media personalization;
  • AI for assistive technologies and augmented communication in digital and crowdwork environments;
  • Safety, privacy, and autonomy considerations for disabled individuals interacting with AI systems;
  • Accessibility metrics and evaluation methodologies for AI-driven platforms, tools, and digital ecosystems.

We welcome original research articles, surveys, case studies, and critical reviews that contribute to advancing the field of interdisciplinary AI systems, particularly at the intersection of human collaboration, digital platforms, and algorithmic governance.

We look forward to receiving your contributions.

Prof. Dr. Daniel Schneider
Dr. Glaucia Melo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Publisher’s Notice

The Special Issue has been shifted from Section Evolutionary Algorithms and Machine Learning to Section Algorithms for Multidisciplinary Applications on 17 December 2025. At the time of the move, there were no publications in this Special Issue.

Keywords

  • artificial intelligence (AI)
  • crowdsourcing
  • human–AI interaction (H-AI)
  • digital media
  • algorithmic governance
  • AI alignment (misaligned AI)
  • algorithmic bias
  • explainable AI (XAI)
  • AI nudges
  • gig economy
  • misinformation/disinformation
  • generative AI (GenAI)
  • human-in-the-loop (HITL)
  • accessibility
  • inclusive design/universal design
  • disability studies
  • assistive technologies
  • algorithmic ableism
  • accessible HCI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

33 pages, 3164 KB  
Article
Co-Creation by Human–AI Sophimatics Framework and Applications
by Gerardo Iovane and Giovanni Iovane
Algorithms 2026, 19(3), 175; https://doi.org/10.3390/a19030175 - 26 Feb 2026
Viewed by 583
Abstract
Phase 6 of the Sophimatics framework represents the culmination of a comprehensive research program integrating philosophical wisdom with computational sophistication to address fundamental challenges in artificial intelligence systems. Building upon the Complex-Time Recursive Model established in Phase 5, this phase introduces a human-in-the-loop iterative refinement methodology specifically designed for security-critical applications. Through systematic validation across real-world cybersecurity datasets, including NSL-KDD and CICIDS2017, alongside healthcare privacy scenarios using MIMIC-III derived data, we demonstrate that collaborative human–AI co-creation significantly enhances system performance across multiple dimensions, including interpretive accuracy, contextual fidelity, and ethical consistency. The proposed architecture implements three complementary feedback mechanisms: symbolic knowledge base refinement through expert-provided ontological corrections, neural parameter optimization guided by human evaluation of ethical alignment, and dynamic weight adjustment for value-system integration. Experimental results show substantial improvements over baseline approaches, with intrusion detection accuracy reaching 98.7% on NSL-KDD while maintaining 94.3% privacy preservation scores as measured by differential privacy guarantees. The healthcare privacy experiments demonstrate 97.2% sensitive attribute protection with only 2.1% utility loss compared to non-private baselines. Critical analysis reveals that human oversight mechanisms reduce false positive rates in ethical constraint violations by 67% compared to purely automated systems, while convergence analysis indicates stable performance after approximately 12–15 iterations across diverse application domains. 
These findings establish Phase 6 as an essential bridge between theoretical Sophimatics foundations and practical deployment in privacy-sensitive contexts, demonstrating that philosophically grounded AI architectures can achieve superior performance when augmented with structured human feedback loops. The work contributes both methodological innovations in human–AI collaboration and empirical validation, demonstrating the viability of Sophimatics principles for addressing contemporary challenges in data protection and cybersecurity.