Article

Inclusive AI-Enhanced Civic Engagement: Empowering Marginalized Voices

1 Department of Psychology, University of Graz, Universitaetsplatz 2, 8010 Graz, Austria
2 Raising the Floor, 1210 Brussels, Belgium
* Author to whom correspondence should be addressed.
Societies 2026, 16(4), 115; https://doi.org/10.3390/soc16040115
Submission received: 23 January 2026 / Revised: 15 March 2026 / Accepted: 24 March 2026 / Published: 30 March 2026

Abstract

Civic online participation platforms offer valuable opportunities to involve citizens in local governance and to benefit from collective intelligence. Yet vulnerable groups, such as older adults, people with disabilities, and the less educated, are often underrepresented in online political engagement. Aiming to empower these citizens to raise their voices online, we conducted two studies using an inclusive-by-design approach to develop an online civic engagement platform. In the first study, 39 individuals from two digitally low-performing European countries were surveyed about functions and features that would motivate and support their online participation. In the second study, focus groups with 13 digital and AI experts identified technical and informational requirements for effective use of the features desired by citizens. Our findings reveal a desire for accessible, unbiased, and secure AI-driven civic engagement platforms, combined with transparency and user education about AI tools. In particular, chatbots require clear disclaimers and user guidance. Once citizens have been involved in the technical design process, both their technical and informational feature preferences must be taken into account to avoid access and usability barriers or misunderstandings during the platform’s use. This is especially important for facilitating participation among citizens from vulnerable groups.

1. Introduction

Not all individuals’ voices are heard equally in the political discourse. While the well-educated, middle-aged, and affluent are consistently overrepresented in political participation [1], citizens from socially vulnerable groups are often unheard by political decision-makers [2,3,4]. In this study, ‘citizens from vulnerable groups’ are defined in alignment with the socio-political framework of marginalization, being at risk of harm or exclusion due to social, cognitive, physical or emotional reasons [5], which can manifest in various forms with negative consequences. Citizens belonging to vulnerable groups, including marginalized citizens—those ‘relegated to an unimportant or powerless position within a society or group’1—face societal barriers and limited resources, being more susceptible to exclusion [6].
Digital civic engagement platforms have potential to counteract unequal representation as they broaden access to participation by lowering practical constraints [7]. Such platforms allow for participation regardless of location or time, therefore particularly benefiting citizens with mobility constraints, or those in rural areas. While the aim is to foster participatory democracy [8], their effectiveness depends on various enabling factors, including digital access, supportive social networks, and the public administration’s ability to provide timely and meaningful feedback.
Participation via digital channels is also shaped by demographic variables and patterns of internet use: younger, digitally socialized citizens (“digital natives”) remain the primary users, whereas older adults and less tech-savvy populations are at risk of being left behind [9]. If these groups cannot effectively engage with digital public services and participation tools, their political voices may be further marginalized rather than amplified.
In some European countries, digital literacy is particularly low, which puts many people at greater risk of being excluded from digital solutions for civic participation. Among the 27 EU Member States, Romania and Slovakia rank among the lowest in digital performance according to the Digital Economy and Society Index (DESI) 2022 [10], with only around a third of Romanians and just over half of Slovaks having basic digital skills (e.g., using devices and software, or browsing the web). In such contexts, expanding civic engagement through digital channels may inadvertently widen participation gaps unless inclusion is treated as a core design requirement rather than a secondary consideration.
At the same time, Artificial Intelligence (AI) systems are increasingly integrated into public-sector interfaces (e.g., chatbots, automated decision support, or other service tools), which introduces an additional equity challenge. Biases inherent in AI systems can exacerbate existing inequalities and differentially affect marginalized groups, including people with disabilities, LGBTQIA+ individuals, and ethnic minorities. For instance, linguistic exclusion occurs when speech-to-text or translation tools exhibit reduced accuracy for under-resourced languages or regional dialects, potentially increasing barriers to accessing information provided by AI. Furthermore, AI systems trained on skewed datasets can reinforce societal stereotypes, effectively automating discrimination against marginalized groups [11].
Given this duality, the central question is not whether digital participation is technically feasible, but how technology can be leveraged to actively reduce barriers while avoiding new forms of discrimination.
Indeed, AI-based systems have the potential to enhance inclusivity through language simplification, translation [12] and accessibility features [13], and this potential should be exploited. The critical challenge, therefore, is to determine how the benefits of technological developments, especially AI applications, can be harnessed to support meaningful participation by vulnerable groups in digital civic engagement and public service contexts.
However, this integration cannot succeed if citizens from vulnerable groups are treated as passive objects of external top-down decisions. Traditionally, a clear line is drawn between those who develop technologies and those who are supposed to use them. In particular, designers, namely academics and companies that adapt existing commercial products, often dominate the development and coordination of civic tech tools. Consequently, governments, organizations, communities and citizens are treated merely as end-users [14]. This can result in a disregard for, and misinterpretations of, technical preferences that would facilitate online engagement, especially for marginalized users.
Instead, it is essential that citizens from different backgrounds are actively involved in early phases of technological development by participating in co-creation platforms. Such co-design methodologies move away from paternalistic models of technology development and allow feature preferences to be directly informed by the lived experiences of citizens from vulnerable groups. The present research is situated within the EU-funded project ITHACA (‘Artificial Intelligence to enhance civic participation’), which operationalizes this approach through an inclusive-by-design strategy. By involving citizens from Brașov (Romania) and Martin (Slovakia) throughout the design and development lifecycle, the project seeks to increase AI’s positive impacts on democratic processes by aligning technical functionalities with user-identified barriers [15].
Building on previous research findings showing an overrepresentation of privileged groups on civic engagement platforms (e.g., [16,17]), we set our research goal to find out what digital spaces need to provide in order to reach frequently underrepresented groups. Approaches to this are offered by Rosello et al. [18], who identified seven design principles that AI-driven civic engagement platforms could use to account for social context and citizen empowerment. The researchers provide a framework to address the problem that current designs are often forced onto users from the top down, failing to give citizens the balanced information they would need to participate fairly in complex debates [18]. This research provides a significant theoretical foundation, but it moves straight to architecture while presupposing that all targeted citizens feel ready to use digital participation platforms and possess the baseline competency required to use these AI tools in an informed manner. As a consequence, it remains unclear which AI tools (e.g., chatbots, recommendation systems, speech-to-text transformers) are best suited to address the challenges often faced by citizens from disadvantaged groups [19].
Rosenberg [19] provides insights into usability issues, security concerns, and digital literacy gaps for older adults when it comes to online civic engagement, identifying key factors that influence older adults’ use of e-government services. His findings suggest that confidence in their own online abilities and the security of websites are key factors in whether older people, representing a large vulnerable group in the digital age, use digital government platforms.
Beyond older adults, other individuals belonging to vulnerable groups (e.g., individuals with disabilities or language barriers) may face additional needs that remain unaddressed. Liu et al. found that most digital inclusion research focuses on single-dimension vulnerabilities, with fewer than 10% addressing multiple disadvantages [20]. To bridge this gap, the present study applies an intersectionally informed sampling approach, prioritizing participants with multiple vulnerabilities, as detailed in the methods section.
To design an inclusive online platform that goes beyond basic accessibility, this research integrates user-centered principles with specific accessibility requirements to enhance the experience for all users—especially those at risk of digital exclusion [21]. Consequently, the first research question addressed in study 1 is: (1) “Which platform features and AI-enabled supports do citizens from vulnerable groups prefer and perceive as helpful for overcoming participation barriers?”.
This first study focuses on the “what”: engaging citizens from vulnerable groups through co-design workshops to identify the platform features and AI-enabled supports they perceive as vital for overcoming participation barriers. By prioritizing desired features over technical constraints at this stage, we capture raw preferences without confronting them with AI risk-assessment and technical shortcomings to avoid cognitive overload among citizens lacking technical expertise. However, identifying desired features is only the first step toward overcoming discrimination and barriers to participation for citizens from vulnerable groups. Because the general public often lacks foundational knowledge regarding the trade-offs and privacy risks of generative AI [22], these user-defined “wishes” must be filtered through a lens of safety and usability. Otherwise, the consequences of misunderstanding generative AI and uninformed interaction can range from deception and insufficient trust to excessive reliance and threats to privacy and security [23,24].
Consequently, in a subsequent study, we focus on the “how”. By addressing the second research question, (2) “What technical considerations and explanatory information are necessary for the required technical features to be used in a beneficial and supporting manner by users without technical background?”, we aim to translate the wishes and preferences from study 1 into technical, informative and ethical requirements, in order for implemented features to be useful and improve accessibility [12].
To address our first research questions, we recruited citizens from vulnerable groups in two European cities with low rankings in digital skills and infrastructure (DESI) [10] to co-design an inclusive online civic engagement platform and surveyed the citizens regarding technical preferences and wishes in the first study. Subsequently, in the second study, we conducted focus groups with experts in digital technologies and AI to critically evaluate the technical feasibility, ethical implications, and necessary information and instructions for the features identified by citizens.
This two-step approach ensures that the resulting platform design is based on the direct wishes of disadvantaged groups while remaining grounded in technical reality and security. The conceptual novelty of this work lies in the application of an intersectionally informed sampling approach to vulnerability, which prioritizes participants with multiple, overlapping disadvantages in under-represented Eastern European contexts (Romania and Slovakia). By shifting the focus from generic digital inclusion to the specific AI-assisted needs of marginalized populations in low-digital performing countries, this work bridges the “developer-user” gap through two complementary studies.

2. Study 1: Citizen Requirements and Feature Preferences

To answer research question 1, the following study captures participants’ expressed preferences at a high level of abstraction, articulated in a way that is interpretable for citizens with moderate to very low levels of technical competence.

2.1. Methods

The preference analysis of potential end users took place within Citizens’ Juries in Martin (Slovakia) and Brașov (Romania). Citizens’ Juries are a method to enhance decision-making by involving a small group of representative citizens to deliberate on policy questions [25].

2.1.1. Participants

Participants were recruited for the Citizens’ Jury workshops using an inclusive and privacy-sensitive strategy designed to ensure fair representation, particularly of vulnerable groups. A structured participation form, which is openly available2 and addresses background, living conditions, and marginalized status, was developed for participant selection, adhering to data-minimization principles. Questions were formulated in accessible language, predominantly using multiple-choice formats, and were accompanied by clear background and data protection information. For nearly all items, participants could choose not to answer, thereby reducing pressure to disclose sensitive information and avoiding exclusion based on privacy concerns. To account for discrepancies between external categorizations and self-perception, the form combined objective socio-demographic indicators with subjective assessments of social and economic circumstances, including perceived discrimination. It was provided in multiple languages and formats, with assistance offered to participants facing literacy or accessibility barriers to ensure equitable access. Recruitment dissemination followed a multi-channel approach, including municipal offices, public advertising, and collaboration with trusted community and social organizations, using snowball sampling to reach underrepresented individuals. Although participants received financial compensation, this was intentionally disclosed only shortly before the workshops to avoid undue inducement and to prioritize intrinsic civic motivation. To select a maximum of 50 participants, a “vulnerability score” was calculated for each applicant by assigning weights (ranging from 0 to 1) to specific variables in the participation form, where higher weights reflected greater levels of vulnerability.
For example, “unemployed” status was weighted more heavily than “student” status, and lack of access to basic infrastructure like hot water or electricity received high prioritization. The scoring system accounted for complex intersections, such as low educational level combined with limited disposable income. This methodology allowed for a systematic ranking of candidates based on their self-reported and inferred vulnerability characteristics. The assigned weights to variables and characteristics are provided in Appendix A, Table A1 and Table A2.
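The weighting and ranking logic described above can be sketched as follows. Note that the indicator names, weights, and the intersectional bonus below are illustrative assumptions chosen for the example; the weights actually used in the study are those listed in Appendix A (Tables A1 and A2).

```python
# Illustrative sketch of the vulnerability-scoring approach described above.
# Indicator names and weight values are hypothetical, not the study's own.

EXAMPLE_WEIGHTS = {
    "unemployed": 0.8,       # weighted more heavily than student status
    "student": 0.3,
    "no_hot_water": 0.9,     # lack of basic infrastructure: high priority
    "no_electricity": 0.9,
    "low_education": 0.5,
    "low_income": 0.5,
}

def vulnerability_score(applicant: dict) -> float:
    """Sum the weights of all vulnerability indicators the applicant reported."""
    score = sum(w for key, w in EXAMPLE_WEIGHTS.items() if applicant.get(key))
    # Example of an intersectional adjustment: low educational level
    # combined with limited disposable income adds extra weight.
    if applicant.get("low_education") and applicant.get("low_income"):
        score += 0.4
    return score

def select_participants(applicants: list[dict], max_n: int = 50) -> list[dict]:
    """Rank applicants by descending vulnerability score and keep at most max_n."""
    return sorted(applicants, key=vulnerability_score, reverse=True)[:max_n]
```

In this sketch, an applicant reporting both low education and low income would outrank one reporting unemployment alone, mirroring the systematic ranking of candidates by self-reported and inferred vulnerability characteristics.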
The final sample (see Table 1) included 39 participants (69% female), aged between 18 and 82 years (31% ≥60 years), from Brașov (Romania) and Martin (Slovakia), each experiencing at least one form of vulnerability. These vulnerabilities encompassed cognitive or physical disabilities, financial and social barriers, limited access to infrastructure and basic housing, and (self-attributed) belonging to at least one minority category (ethnic, linguistic, sexual or other) within their cities. Reported technology use indicated that 26 participants accessed the internet daily, 11 used it occasionally, and two had no prior experience with the internet.

2.1.2. Material and Procedure

As part of the Citizens’ Juries, citizens participated in an introductory workshop providing an overview of online needs and preferences and of AI, including its deployment and its potential advantages and threats for civic engagement. For the preference analysis, a questionnaire including open and closed questions about desired and unwanted features for the platform was used. Participants did not interact with functional prototypes; the instrument used short, plain-language feature descriptions as a low-fidelity elicitation approach suited to early-stage co-design and heterogeneous digital skill levels. To improve the validity of the functions and descriptions, the survey underwent multiple iterative feedback loops within the interdisciplinary project team, including experts in inclusive design as well as technical experts. The survey had three sections: (1) a list of 19 features with short descriptions, (2) post and comment interaction features, comprising five options (Reacting with emojis, Rating, Upvoting and Downvoting posts, Upvoting only, and Commenting), and (3) functionalities to structure the platform, with options such as a Search bar, Filter options, Subscriptions, and Email/Profile Notifications (the questionnaire is included in the OSF). Participants had to state for each of them whether they (i) would like to have it, (ii) could not imagine it, or (iii) did not want it. The 19-feature set was constructed as a structured stimulus list to elicit preferences from participants with heterogeneous vulnerabilities. Features were compiled from a review of commonly implemented civic-participation platform functions, as well as prior work on digital exclusion mechanisms affecting vulnerable groups (e.g., language barriers, literacy and comprehension barriers, accessibility needs, safety/trust concerns, and navigation challenges).
The intent was not to claim exhaustiveness, but to ensure coverage of barrier-relevant design levers that can later be refined through iterative co-design. To avoid jargon and reduce cognitive burden, each feature was described in plain language and framed as a concrete platform support (e.g., translation, simplification, spam/phishing detection). For example, the feature ‘Recommendation’ was described as ‘Recommends topics or posts that might interest you, based on other posts/topics you liked or commented on’. In addition, each section provided an open answer field for additional features and ideas from participants.

2.2. Results

2.2.1. User Preferences—Safety and Accessibility over Entertainment

The preference analysis revealed a hierarchy of preferences among the surveyed citizens. Rather than viewing all digital features as equally desirable, participants prioritized functionalities that ensure safety and accessibility, showing markedly more ambivalence toward features associated with commercial social media, such as gamification or passive automation.

2.2.2. Core Dimensions: Safety and Accessibility as Prerequisites

As demonstrated in Figure 1, the strongest consensus emerged around features that mitigate risk and lower barriers to entry. Safety was the paramount concern, with Security measures being the single most requested feature (97% agreement). This desire for protection extended to the discursive environment, with high preferences for Toxicity sensors and Spam/Phishing detection (84%). In parallel, features addressing specific vulnerabilities, specifically language and cognitive barriers, were rated as essential. Text translation (87%) and Simplification into “simpler language” (84%) were among the top five priorities. This clustering suggests that for vulnerable groups, digital civic engagement is not merely about having a platform, but about ensuring the platform is both safe to inhabit and intelligible to use.

2.2.3. Social Interaction vs. Entertainment

While the platform is intended for engagement, participants differentiated between functional interaction and “social media style” entertainment, as can be seen in Figure 2. Regarding functional interaction, there was solid support for Commenting (64%) and Reacting with emojis (54%), which facilitate direct expression. Features typically used to drive engagement on commercial platforms were the least desired: Gamification (48%) and Social Media Integration (64%) ranked at the bottom of the preference list.
Figure 1. Results of desired features and functionalities.
Figure 2. Results of desired interaction features.

2.2.4. Information Control: Active Seeking vs. Passive Consumption

Participants demonstrated a strong preference for maintaining control over their information environment (see Figure 3). Tools that allow active navigation, such as the Search bar (87%) and Filter options (74%), were highly favored. In contrast, when participants were asked what functionalities they would like to have, they were not as keen on features that automate organization or interaction, such as Auto-tagging (64%) and Sentiment analysis (51%) (see Figure 1). This suggests that while citizens prefer AI to assist them (e.g., via translation or toxicity detection), they are more cautious about AI systems that interpret or categorize their content without their direct input.

3. Study 2: Expert Recommendations and Technical Requirements

To answer research question 2, this study serves a complementary role by translating the user-articulated feature preferences elicited in study 1 into concrete technical conditions and informational requirements, drawing on expert knowledge in AI and AI ethics.

3.1. Methods

3.1.1. Participants

Workshop participants were recruited through direct email requests and selected based on specific criteria, including their affiliation with data labs, technical universities or universities of applied sciences in fields such as computer science, data science, AI and other fields with a focus on legal, ethical, or social aspects of AI. The first workshop included eight participants: a psychologist specializing in AI competencies, a humanities expert in human–computer interaction for elderly women, an IT expert in computer vision, a technical AI expert focused on human–machine interaction and lifelong learning, an AI expert on methods and algorithms, and two IT experts in virtual learning environments and ethics-by-design. The second workshop featured six participants: a design and human–computer interaction expert, a psychologist with expertise in data science, an AI education expert, an AI ethics and humanities expert, an AI and IT expert on interpreting machine learning, and an ethics expert in digital and AI ethics. A psychologist in each workshop served as its moderator.

3.1.2. Material and Procedure

Instead of one large focus group, we conducted two smaller virtual focus groups to facilitate discussion despite the virtual setting and ensure that all experts had a chance to speak. These were semi-structured and guided by a PowerPoint presentation. Participants were provided with results from the citizens’ preference analysis and asked to discuss and suggest (i) technical considerations and recommendations and (ii) required information users should receive to enable citizens to use highly demanded tools beneficially.

3.1.3. Data Analysis

The recordings of the discussions were transcribed verbatim and analyzed iteratively through repeated reading and note taking. Framework analysis [26] was used to analyze the transcriptions, as it is a suitable method to systematically identify, describe, and interpret the phenomenon of interest through five steps: (1) data familiarization, (2) identifying a thematic framework, (3) indexing all study data against the framework, (4) charting to summarize the indexed data, and (5) mapping and interpretation of patterns found within the chart.
First, data familiarization was achieved through repeated reading of the transcripts and production of analytic memos to capture initial impressions, recurrent and salient topics across expert discussions. Second, an initial thematic framework was developed deductively based on the study aims and discussion guide, and inductively refined through close engagement with the data. This framework specified both content-related and normative dimensions relevant to the evaluation of AI tools.
Third, during the indexing phase, the transcripts were systematically coded against the agreed thematic framework by a team of three researchers. To ensure the reliability and validity of the findings, any disagreements in code assignment or interpretation were resolved through team discussions until a full consensus was reached. Statements were first assigned to the specific AI tool under discussion and then classified into higher-order categories: (i) technical considerations and recommendations or (ii) information provided to users.
Fourth, in the charting phase, indexed data were summarized in analytic matrices to enable cross-case comparison while preserving original meaning. Finally, mapping and interpretation involved identifying patterns and divergences across experts and AI tools.
To maintain an inductive focus on the experts’ own priorities, the mapping onto the seven key requirements for trustworthy AI by the High Level Expert Group on AI [27] was performed after the initial thematic structure had emerged. For a clearer synthesis of the core findings, an additional level of abstraction was applied. Here, the specific recommendations and requirements were grouped into higher-level thematic clusters (e.g., Transparency & Human Oversight, Accessibility & Inclusivity), to provide a more cohesive structural overview and to highlight the conceptual links between the diverse technical findings.
This post hoc alignment served as an analytic lens to examine how the experts’ independent recommendations relate to established ethical principles. Throughout the process, reflexive team discussions were used to critically examine interpretive decisions and ensure coherence between empirical observations and ethical interpretation.

3.2. Results

The main results from the two workshops with experts were categorized into technical considerations and recommendations (see Table 2) and information to be provided to users (see Table 3). These findings address the implementation of various AI-driven features, including Chatbots, Speech-to-text transformers, Spam-filters/Moderation, Interaction/Posting possibilities, Simplification/Summarization/Translation tools, and general requirements applicable to any AI tool.
The interdisciplinary composition of the expert groups—which covered the fields of machine learning, ethics and sociology—led to a discourse that was more complementary than contentious. Rather than encountering fundamental differences in opinion, the analysis yielded a synergistic consensus in which technical recommendations from one field (e.g., technical robustness) were validated and integrated by experts from other fields (e.g., digital inclusion).
To provide a clearer structural overview of these findings, the technical recommendations in Table 2 were organized into four overarching themes:
  • Transparency and Human Oversight
This theme encompasses the recommendations of traceability and user-driven improvements. For simplification and translation tools, experts recommended providing direct links to original source texts to allow for verification. Furthermore, a robust user feedback system is proposed for all AI tools to enable users to report bugs or suggest platform improvements, ensuring ongoing human oversight of the system’s performance.
  • Data Privacy and Fairness
To address fundamental ethical concerns, this cluster focuses on privacy and data handling. For chatbots, ensuring “fair training data” is essential to prevent the reinforcement of societal stereotypes. Regarding privacy, experts suggested the implementation of federated learning for speech-to-text tools. This allows language models to be stored locally on devices to reduce concerns regarding data misuse.
  • Platform Safety and Content Moderation
These recommendations aim to protect the digital environment and its users. Experts highlighted the need for bot detection in chatbots to identify unusual platform behavior and suggested “duplicated voting prevention” for interaction tools, which is specifically relevant for civic participation. In terms of moderation, automatic blurring of sensitive images and the implementation of flagging and collaborative moderation systems that allow users to identify toxic content for subsequent human review, were recommended.
  • Accessibility and Inclusivity
Although all technical recommendations are aimed at the overarching goal of accessibility and inclusivity, this section highlights those recommendations that are specifically designed to ensure barriers are removed for diverse user groups. For speech-to-text tools, this includes features such as fluency enhancement, speech-commanded navigation for users with physical disabilities, and quick selection options for recognized sentence versions. Chatbots are envisioned as “help-bots” that provide direct solutions within the interface. General accessibility requirements include customizable display settings (font, contrast, color) and ensuring compatibility with existing assistive technologies like screen readers.
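The flagging and collaborative moderation workflow recommended under Platform Safety can be sketched as a simple escalation rule: user flags accumulate per post, and once enough distinct users have flagged a post, it is queued for human review rather than removed automatically. The threshold value and the data structures here are illustrative assumptions, not specifications from the expert workshops.

```python
# Minimal sketch of a flagging-and-human-review workflow, assuming a
# hypothetical escalation threshold; posts are queued for moderators,
# never auto-deleted, preserving human oversight.

FLAG_THRESHOLD = 3  # illustrative value, not taken from the study

class ModerationQueue:
    def __init__(self):
        self.flags = {}           # post_id -> set of user_ids who flagged it
        self.pending_review = []  # post_ids awaiting a human moderator

    def flag(self, post_id: str, user_id: str) -> bool:
        """Record a flag; return True if the post was escalated to review."""
        flaggers = self.flags.setdefault(post_id, set())
        flaggers.add(user_id)  # a set ignores duplicate flags from one user
        if len(flaggers) >= FLAG_THRESHOLD and post_id not in self.pending_review:
            self.pending_review.append(post_id)
            return True
        return False
```

Counting distinct flaggers rather than raw flag events is one way to resist the duplicated-voting problem the experts raised for interaction tools, since a single user cannot escalate a post alone.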
Table 3 outlines the user information requirements identified by the experts, which were similarly organized into four overarching themes to provide a clearer structural overview of the transparency needs for users.
  • Transparency and Managing Expectations
This theme represents the most extensive category of information requirements, primarily focusing on clarifying potential misconceptions of generative AI chatbots. For these, experts emphasized the need to disclose the specific chatbot type (rule-based vs. generative), its data basis, and its probabilistic nature, clarifying that responses are based on likelihood rather than strict logic. To maintain a “critical distance”, it is recommended that developers avoid humanizing the AI and instead use neutral icons. Furthermore, users should be warned about potential hallucinations, misleading answers, and the “black box” nature of these models. For simplification and summarization tools, transparency involves informing users about how a text was modified and warning that context may be lost during the process.
  • Data Governance and Fairness
Information in this category addresses the underlying logic and potential biases of the system. For general AI tools, experts suggested disclosing the “recommendation parameters”, specifically, which user factors (e.g., age, gender, or location) influence the system’s outputs. In terms of linguistic fairness, speech-to-text transformers must include warnings regarding reduced accuracy for rare or less common languages to ensure users are aware of potential performance disparities.
  • Accountability and Human Oversight
The requirements in this cluster are designed to foster a realistic understanding of AI as a human-made, and therefore fallible, technology. Across all tools, there is a recommendation to clearly label AI-generated content and disclose the system’s error-proneness and susceptibility to bias. In the context of moderation and spam filtering, transparency can be achieved by publishing clear posting rules and outlining the specific criteria used for content filtering (e.g., toxic word lists).
  • Accessibility and Inclusivity
Similar to the technical recommendations, the information requirements in this cluster are specifically designed to ensure barriers are removed for diverse user groups, supporting the study’s overarching goal of accessibility and inclusivity. This includes providing “user guidance” with sample questions for chatbots and explaining the specific purpose of a feature to help users understand how it assists them. To prevent user frustration or “self-blame” when errors occur, the experts recommend framing the AI as an “assistive, error-prone assistant.” For speech-to-text tools, inclusivity is supported by providing practical advice on clear speech and warnings about dialect recognition, while summarization tools should provide clear metrics, such as the percentage by which a text was shortened.
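The kind of “clear metric” requested for summarization tools could be computed along these lines. This is a minimal word-count-based sketch; the function name is our own.

```python
# Hypothetical helper for the "clear metric" experts requested from
# summarization tools: the percentage by which a text was shortened
# (word-count based; the function name is our own).

def shortening_percentage(original: str, summary: str) -> float:
    """Percentage of words removed by summarization."""
    n_original = len(original.split())
    n_summary = len(summary.split())
    if n_original == 0:
        return 0.0
    return round(100 * (1 - n_summary / n_original), 1)

# A 10-word text summarized to 4 words has been shortened by 60%.
notice = f"This summary is {shortening_percentage('a b c d e f g h i j', 'a b c d')}% shorter than the original."
```

A character-based variant would be equally simple; what matters for inclusivity is that the number is surfaced to the user in plain language, as in the `notice` string above.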

4. General Discussion

Our first study explored which technical features citizens from vulnerable groups prefer for a civic engagement platform. Survey results indicate preferences for a variety of AI functionalities and features, particularly tools enhancing accessibility (e.g., language translation, simplified text, speech-to-text) and security (e.g., toxicity detection, spam/phishing detection, content moderation). These preferences reflect a dual focus: secure online use and improved accessibility for people with disabilities or language barriers. The same focus was visible in the citizens’ other suggestions in the survey and was prioritized over social and entertainment features, such as collaboration tools or gamification. Results regarding reactions to and structuring of content show that citizens prefer both familiar features and a sense of control. For example, their preference for upvoting and downvoting reflects what they already know from other social media platforms (e.g., YouTube or Facebook).
Participants also wish for navigation clarity to find information easily. Search and filter functions allow them to determine what they see, rather than having content automatically selected for them. Consequently, clear navigation should be viewed as a vital inclusion tool rather than a simple usability upgrade. For users with low digital confidence, intuitive design mitigates the perceived risk of making errors or misusing the platform. Similarly, transparency regarding AI-driven features is essential for participants with limited technical knowledge or low institutional trust. Without clear communication, opaque automation may lead to disengagement [28] or to users relying either not at all or too heavily on system outputs [24]. Finally, the strong preference for toxicity and spam detection underlines the importance of safety-oriented safeguards in participation settings. Because harassment and hostile interactions disproportionately discourage groups whose civic engagement is already fragile, robust protection is fundamental to maintaining a truly inclusive environment [29].
Additionally, the prominence of preferences such as transparency about AI-supported functionalities, alongside the demand for safety measures or automated toxicity or spam detection, must be interpreted in light of the specific context of our study. Participants were recruited from two European countries with low levels of digitalization and civic infrastructure, and the sample included individuals who faced barriers such as migration or refugee backgrounds, language barriers, previous experiences of social or political exclusion, and, in some cases, limited digital skills. In such circumstances, uncertainty about how the system processes inputs and concerns about hostile interactions, fraud, or other negative consequences can become central participation barriers. Accordingly, participants’ emphasis on safety and security features can be understood as a trust- and risk-mitigation response to a participation environment perceived as comparatively uncertain and potentially unsafe, rather than as a purely generic preference profile.
Furthermore, we noted varying use of the response category ‘cannot imagine’ across the functions, suggesting that some technical features are harder to imagine than others or were unfamiliar to citizens. This applies, for example, to Sentiment Analysis, Subscriptions to topics, and Commenting. Whereas Sentiment Analysis is a comparatively new function, this was surprising for the latter two. It underscores that even functions common on many platforms are not self-explanatory for all users, and that an inclusive platform should therefore offer additional information and explanations for seemingly well-known features.
Instead of merely repeating general design guidelines, our findings specify which platform functions directly address the exclusion mechanisms reported by citizens from vulnerable groups. Our data supports previous findings, indicating that participation may be hindered by concerns regarding safety, trust, and limited digital or AI literacy [3,10].
Building on the findings of study 1, our second study examined the technical requirements and explanations that AI experts consider necessary for citizens to use these tools effectively. The experts identified several critical steps to ensure that platform-integrated AI remains both trustworthy and ethical. When examined through the lens of the framework for trustworthy AI by the High Level Expert Group [27], the findings demonstrate a strong correspondence with the requirements for human oversight, technical robustness, and accountability.
Regarding information about AI tools for transparent and appropriate use, the participating experts emphasized the need for clear explanations of tool functionality, limitations, and privacy policies, as well as practical recommendations and tips to counteract misinterpretations or prevent them in the first place.
AI chatbots in particular seem to require the most user guidance, disclaimers and explanations. Such information should include working mechanisms (e.g., differences to web search), purpose and corresponding training data, disclaimers about irreproducibility of answers, potential inaccuracies, influential effects on users, and system opacity. Providing this crucial information can help address unrealistic expectations, misconceptions, and the tendency to anthropomorphize AI.
This information need has already been noted in previous work, with several studies reporting a variety of knowledge gaps among AI chatbot users. Pallivathukal et al. [30] and Kamoun et al. [31] found that many users do not understand ChatGPT’s fundamental limitations, such as its inconsistent accuracy or the fact that responses are not verified for accuracy. Bewersdorff et al. [32] also described common misconceptions among technical laypersons, including a narrow view of AI as only robots or digital assistants, anthropomorphizing AI, seeing AI as entirely good or bad, and misunderstanding technical distinctions between machine learning, neural networks, and deep learning. Beyond disclaimers for AI chatbots, experts advocate communicating the limitations of any tool to help users avoid frustration and self-blame. This includes clarifying the limitations of speech-to-text tools, especially for less widely spoken languages, labeling AI-generated images, text, or videos, and establishing transparent, predefined rules for content moderation, as well as the possibility for users to contest outcomes.
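As an illustration only, the disclosure items discussed above (chatbot type, purpose, data basis, irreproducibility, inaccuracy, opacity) could be bundled into a small disclaimer component. The wording, keys, and structure below are our own sketch, not a prescribed implementation.

```python
# Our own sketch of how the recommended chatbot disclosures could be
# bundled into a reusable disclaimer component; all wording and keys
# are illustrative, not taken from any existing platform.

DISCLOSURES = {
    "type": "This is a generative AI chatbot, not a rule-based FAQ system.",
    "purpose": "It answers questions about local civic services.",
    "data_basis": "Answers draw on a defined set of municipal documents.",
    "irreproducibility": "Asking the same question twice may yield different answers.",
    "accuracy": "Answers can be incorrect or misleading; verify important information.",
    "opacity": "Even experts cannot fully explain how individual answers are produced.",
}

def disclaimer_text(keys=("type", "accuracy", "irreproducibility")):
    """Render the selected disclosures as a short bullet list for the UI."""
    return "\n".join(f"- {DISCLOSURES[k]}" for k in keys)
```

Keeping the disclosures in one place makes it easy to show a short default set on first contact and the full list in a help page, which matches the experts’ call for layered user guidance.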

4.1. Strengths and Limitations

This work is characterized by several strengths. Unlike prior technology design research that either excluded citizens from vulnerable groups from participatory design [33] or focused primarily on the technical dimensions of AI transparency (e.g., [34]), our work foregrounds the role of user education about AI features and thus addresses an underexplored pathway to equitable technology adoption. By successfully recruiting participants from typically hard-to-reach or underserved groups, the study addressed a critical gap in representativeness and included participants from countries characterized by low digital progress and comparatively poor digital performance within Europe. Furthermore, comparing citizens’ preferences with expert recommendations enhances the robustness and practical usefulness of the proposed recommendations.
Nevertheless, important limitations temper the generalizability and scope of our conclusions. Firstly, although a balanced gender distribution was sought when selecting participants, the final sample was uneven due to an imbalance in the applications received. Further, the modest sample size constrains statistical inference, and the use of a prepared list of functions, features, and response options may have limited participants’ creativity and prevented the identification of unlisted features; for example, the optional field for open-ended responses was not used extensively. Furthermore, the reliance on questionnaire data may have introduced response bias and provides less depth than qualitative interviews.
Study 1 was designed as an exploratory “wish list” to build foundational understanding through introductory workshops and plain-language descriptions. It did not take into account potential disadvantages, risks, or value conflicts (e.g., data tracking, algorithmic profiling, or errors in automated moderation) associated with some features or functions, which may have resulted in inflated approval rates (ceiling effects). With this approach, we prioritized accessibility over complexity, given participants’ low digital skills: introducing complex technical tradeoffs at this early stage would have increased cognitive burden and potentially excluded the very voices we aimed to empower. However, the results of the user-centered “wish list”, which served as the basis for expert-led prioritization and risk assessment, were then critically evaluated in study 2 with regard to technical limitations, risks (e.g., hallucinations, data misuse), and ethical requirements.
Future research, however, should build on this by employing ‘Maximum Difference Scaling’ or ‘Choice-Based Conjoint Analysis’ to further refine these requirements under simulated resource constraints. A primary methodological limitation is the inability to provide a granular sub-analysis of technical requirements across specific vulnerability groups. While participants were recruited based on these criteria, we deliberately chose not to collect specific vulnerability data during the workshops to protect participant anonymity and well-being. Especially in the regional contexts of Romania and Slovakia, where certain groups face heightened social risks, it was paramount to provide a safe, trust-based environment. By not requiring participants to disclose the specific nature of their vulnerability on-site or during survey completion, we avoided potential stigmatization or involuntary “outing” within the group setting. While this ethical prioritization precludes mapping specific needs to individual demographics, it was a necessary trade-off to ensure the psychological safety of our respondents. Future research could bridge this gap by employing confidential, one-on-one methodologies to test and validate specific user guidance materials for these distinct target groups. Finally, we acknowledge that the perspectives of citizens (Study 1) and experts (Study 2) represent distinct viewpoints that may not always fully align. While experts often focus on high-level ethical standards and technical regulations, they may not always prioritize the specific, lived anxieties of vulnerable groups, such as fears of fraud or social exclusion. We aimed to bridge these interests by centering the barriers identified by citizens during the expert focus groups. However, future research could further resolve these tensions by conducting joint workshops where citizens and technical experts engage in direct dialog.

4.2. Conclusions

To conclude, our studies highlight the AI feature preferences of citizens from vulnerable groups for facilitating online civic engagement. Inclusive civic engagement begins with involving these groups in the design process and requires further discussion of necessary technical steps and informative user guidance before implementing highly complex technologies. Drawing on the expressed preferences for AI-assisted civic engagement among citizens from traditionally underrepresented and vulnerable groups, as well as on the technical requirements and considerations of AI experts, our research provides important design considerations and recommendations. These can serve as a foundation for engineers, UX designers, government leaders, and policy makers to make online platforms more user-centered, accessible, inclusive, and useful, especially—but not exclusively—for civic engagement activities.

Author Contributions

Conceptualization, M.S., M.B. and J.S.; Methodology, M.S., M.B.; Validation, E.D.L., O.K.; Formal Analysis, M.S., M.B.; Investigation, E.D.L., O.K., M.S., M.B.; Data Curation, M.S., M.B.; Writing—Original Draft Preparation, M.S.; Writing—Review & Editing, M.S., M.B., J.S., E.D.L., O.K.; Visualization, M.B.; Supervision, D.A.; Project Administration, M.B., D.A.; Funding Acquisition, D.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the European Union, as part of the project Artificial Intelligence to enhance civic participation (ITHACA; Horizon Grant Agreement No. 101094364). The authors acknowledge the open access funding by the University of Graz.

Institutional Review Board Statement

The study protocol was reviewed and approved by an independent ethics committee as part of the EU-funded research project ITHACA (Horizon Grant Agreement No. 101094364). The study was conducted in accordance with the Declaration of Helsinki (Approval Code: No. 101094364; Approval Date: 30 September 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study prior to participation.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

During the preparation of this manuscript, the authors used DeepL (https://www.deepl.com/) to translate individual words or phrases and ChatGPT 4.0 for linguistic revision. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

Otilia Kocsis and Eva de Lera are affiliated with Raising the Floor (RtF) ASBL, a non-profit organization under Belgian law. The authors declare that the research was conducted in the absence of any commercial or financial relationship that could be construed as a potential conflict of interest. The authors affiliated with the University of Graz also have no conflicts of interest to declare.

Appendix A

Table A1. Assigned weights to variables and characteristics of the variables for the recruitment process of study 1.
Variables & Criteria | Brasov (ro) Weights | Martin (svk) Weights
Gender | |
  Female | 0.75 | 0.75
  Male | 0.50 | 0.50
  Diverse | 1.00 | 1.00
Age | 2 | 2
  18–30 years | 0.50 | 0.50
  31–60 years | 0.50 | 0.50
  Over 61 years | 1.00 | 1.00
Highest educational level | 1.5 | 1.5
  Compulsory education (grade 10 or vocational school) | 1.00 | 1.00
  High school (with matura) | 0.50 | 0.50
  University degree | 0.00 | 0.00
  Do not want to answer | 0.50 | 0.50
Current occupation | 2 | 2
  Student | 0.50 | 0.50
  Education | 0.50 | 0.50
  Full time | 0.30 | 0.30
  Part-time | 0.40 | 0.40
  Marginal | 0.50 | 0.50
  Unemployed | 0.80 | 0.80
  Maternity leave | 0.60 | 0.80
  Parental leave | N/A | 0.80
  Sick leave | 0.60 | 0.60
  Pension | 0.60 | 0.60
  Do not want to answer | 0.50 | 0.50
Household type | 2 | 2
  Single house | 0.20 | 0.20
  Own flat | 0.20 | 0.20
  Rented flat | 0.70 | 0.70
  Social apartment/shared flat | 0.80 | 0.80
  Homeless | 1.00 | 1.00
  Assisted living | 0.95 | 0.95
  Other | 0.90 | 0.90
  Do not want to answer | 0.50 | 0.50
Size of your household (less than 40 m2 / 41–60 m2 / 61–80 m2 / more than 80 m2 / do not want to answer) | |
Number of people in your household (I live alone / I live with others / do not want to answer; an open integer field appeared only for the response ‘I live with others’) | |
Square meters per person (inferred from the responses to the two items above and the number of people in the household) | 2 | 2
  >40 | 0.00 | 0.00
  31–40 | 0.25 | 0.25
  21–30 | 0.50 | 0.50
  11–20 | 0.75 | 0.75
  <10 | 1.00 | 1.00
  N/A (not possible to calculate if no answer on m2 and/or persons per household) | 0.75 | 0.75
Number of children in your household | 1.5 | 1.5
  0 | 0.00 | 0.00
  1 | 0.50 | 0.50
  2 | 0.75 | 0.75
  3 | 1.00 | 1.00
  3+ | 1.00 | 1.00
Number of adults that need personal care in your household | 2 | 2
  0 | 0 | 1
  1+ | 1 | 0
Infrastructure | Weight, yes/no/do not want to answer | Weight, yes/no/do not want to answer
  Access to internet | 0.30, 0/1/0.5 | 0.30, 0/1/0.5
  Mobile telephone owner | 0.30, 0/1/0.5 | 0.30, 0/1/0.5
  Access to personal transport (e.g., car) | 0.20, 0/1/0.5 | 0.20, 0/1/0.5
  Access to public transport | 0.50, 0/1/0.5 | 0.50, 0/1/0.5
  Access to hot water | 1.50, 0/1/0.5 | 1.50, 0/1/0.5
  Able to heat sufficiently | 2.00, 0/1/0.5 | 2.00, 0/1/0.5
  Access to electricity | 2.00, 0/1/0.5 | 2.00, 0/1/0.5
I have the following left over each month at my free disposal (for food, leisure activities, clothes, …) after deducting all monthly fixed costs required for living (such as electricity, heating, rent, repayments, necessary expenses for other people) | 2 | 2
  Nothing | 1.00 | 1.00
  Less than 200 euros | 0.80 | 0.80
  Between 200 and 500 euros | 0.50 | 0.50
  More than 500 euros | 0.00 | 0.00
  I do not want to answer | 0.50 | 0.50
Personal situation | 1, yes/no/do not want to answer | 1, yes/no/do not want to answer
  I have physical limitations and/or illnesses | 1.50, 0/1/0.5 | 1.50, 0/1/0.5
  I have mental limitations and/or illnesses | 1.50, 0/1/0.5 | 1.50, 0/1/0.5
  I am satisfied with my financial situation | 0.50, 0/1/0.5 | 0.50, 0/1/0.5
  I feel satisfied with my social life and environment | 0.50, 0/1/0.5 | 0.50, 0/1/0.5
  I feel involved in participating in community affairs | 0.50, 0/1/0.5 | 1.00, 0/1/0.5
  I receive the medical support/care I need | 2.00, 0/1/0.5 | 2.00, 0/1/0.5
  I have the opportunity to use cultural, social and leisure facilities (e.g., cinema or restaurant visits, sport activities, etc.) | 1.00, 0/1/0.5 | 0.70, 0/1/0.5
  I am satisfied with the standard/quality of my living space | 0.50, 0/1/0.5 | 0.50, 0/1/0.5
Minority affiliation | 1, yes/no/do not want to answer | 1, yes/no/do not want to answer
  Religious minority | 0.2, 1/0/0.5 | 0.2, 1/0/0.5
  Linguistic minority | 0.5, 1/0/0.5 | 0.5, 1/0/0.5
  Ethnic minority | 1, 1/0/0.5 | 1, 1/0/0.5
  Other minority | 0.75, 1/0/0.5 | 0.75, 1/0/0.5
Because of the affiliation to this minority, I feel discriminated against in my daily life | 2 | 2
  Totally disagree | 0 | 0
  Rather disagree | 0.5 | 0.5
  Rather agree | 0.75 | 0.75
  Totally agree | 1 | 1
  Do not want to answer | 0.5 | 0.5
Filling the questionnaire | 1 | 1
  Filled it by myself | 0.00 | 0.00
  Filled it by myself with support from others | 0.75 | 0.75
  Another person filled it out for me | 1.00 | 1.00
  Do not want to answer | 0.50 | 0.50
Motivation to participate in the juries | (open answer) | (open answer)
Table A2. Assigned weights to inferred variables and characteristics of the variables for the recruitment process of study 1.
Variables & Criteria | Brasov (ro) Weights | Martin (svk) Weights
Pregnant women OR women with children AND low income | 1.00 | 1.50
Single parents | 1.00 | 1.00
NO access to personal transport AND NO access to public transport | 1.00 | 1.00
Low educational level AND small amount of money left after deduction of fixed costs | 1.00 | 1.00
Note. Low educational level has been operationalized as ‘Compulsory education (grade 10 or vocational school)’; small amount of money/low income has been operationalized as ‘Less than 200 Euros’ (which includes the response ‘Nothing’).
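The weighting scheme of Tables A1 and A2 amounts to a weighted sum of criterion scores per applicant. The following is a minimal sketch using a few example values from the Brasov column of Table A1; the function name, the neutral weight of 1.0 for gender, and the simplified data layout are our own assumptions for illustration.

```python
# Minimal sketch of the weighted-sum recruitment logic behind Tables A1-A2:
# each criterion has a weight and each response a score; an applicant's
# priority is the sum of weight * score. Example values are taken from the
# Brasov column of Table A1; the gender weight of 1.0 is our simplification,
# since Table A1 lists per-response gender weights without a category weight.

def priority_score(responses, weights, scorings):
    """Sum of criterion weight times the score of the chosen response."""
    return sum(
        weights[criterion] * scorings[criterion][answer]
        for criterion, answer in responses.items()
    )

weights = {"gender": 1.0, "education": 1.5, "internet_access": 0.30}
scorings = {
    "gender": {"female": 0.75, "male": 0.50, "diverse": 1.00},
    "education": {"compulsory": 1.00, "high_school": 0.50, "university": 0.00},
    "internet_access": {"yes": 0.0, "no": 1.0, "no_answer": 0.5},
}

# A female applicant with compulsory education and no internet access
# receives a higher priority than a male university graduate who is online.
score = priority_score(
    {"gender": "female", "education": "compulsory", "internet_access": "no"},
    weights,
    scorings,
)
```

Higher totals indicate greater vulnerability and thus higher recruitment priority, which is how the tables steer the sample toward otherwise underrepresented applicants.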

Notes

1.
2. https://osf.io/nh3qb/files (accessed on 14 March 2026).

References

  1. Davies, J.; Procter, R. Online Platforms of Public Participation—A Deliberative Democracy or a Delusion? (Version 1). arXiv 2020, arXiv:2009.14074. [Google Scholar] [CrossRef]
  2. Guldvik, I.; Askheim, O.P.; Johansen, V. Political citizenship and local political participation for disabled people. Citizsh. Stud. 2013, 17, 76–91. [Google Scholar] [CrossRef]
  3. Raisio, H.; Valkama, K.; Peltola, E. Disability and deliberative democracy: Towards involving the whole human spectrum in public deliberation. Scand. J. Disabil. Res. 2014, 16, 77–97. [Google Scholar] [CrossRef]
  4. Skorge, Ø.S. Mobilizing the Underrepresented: Electoral Systems and Gender Inequality in Political Participation. Am. J. Political Sci. 2023, 67, 538–552. [Google Scholar] [CrossRef]
  5. Pérez-Escolar, M.; Canet, F. Research on vulnerable people and digital inclusion: Toward a consolidated taxonomical framework. Univers. Access Inf. Soc. 2023, 22, 1059–1072. [Google Scholar] [CrossRef] [PubMed]
  6. Forbes-Mewett, H.; Nguyen-Trung, K. Defining Vulnerability. In Vulnerability in a Mobile World; Forbes-Mewett, H., Ed.; Emerald Publishing Limited: Leeds, UK, 2019; pp. 5–27. [Google Scholar] [CrossRef]
  7. Shin, B.; Floch, J.; Rask, M.; Bæck, P.; Edgar, C.; Berditchevskaia, A.; Mesure, P.; Branlat, M. A systematic analysis of digital tools for citizen participation. Gov. Inf. Q. 2024, 41, 101954. [Google Scholar] [CrossRef]
  8. Gerbaudo, P. The Digital Party: Political Organisation and Online Democracy; Pluto Press: London, UK, 2018. [Google Scholar]
  9. Secinaro, S.; Brescia, V.; Iannaci, D.; Jonathan, G.M. Does Citizen Involvement Feed on Digital Platforms? Int. J. Public Adm. 2022, 45, 708–725. [Google Scholar] [CrossRef]
  10. European Commission. Digital Economy and Society Index (DESI) 2022; European Commission: Luxembourg, 2022; Available online: https://digital-strategy.ec.europa.eu/en/policies/desi (accessed on 14 March 2026).
  11. Chen, Z. Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanit. Soc. Sci. Commun. 2023, 10, 1–12. [Google Scholar] [CrossRef]
  12. Garcia Valencia, O.A.; Thongprayoon, C.; Miao, J.; Suppadungsuk, S.; Krisanapan, P.; Craici, I.M.; Jadlowiec, C.C.; Mao, S.A.; Mao, M.A.; Leeaphorn, N.; et al. Empowering inclusivity: Improving readability of living kidney donation information with ChatGPT. Front. Digit. Health 2024, 6, 1366967. [Google Scholar] [CrossRef]
  13. Yenduri, G.; Kaluri, R.; Rajput, D.S.; Lakshmanna, K.; Gadekallu, T.R.; Mahmud, M.; Brown, D.J. From Assistive Technologies to Metaverse—Technologies in Inclusive Higher Education for Students With Specific Learning Difficulties: A Review. IEEE Access 2023, 11, 64907–64927. [Google Scholar] [CrossRef]
  14. Zhang, W.; Lim, G.; Perrault, S.; Wang, C. A Review of Research on Civic Technology: Definitions, Theories, History and Insights. arXiv 2022, arXiv:2204.11461. [Google Scholar] [CrossRef]
  15. Kocsis, O.; De Lera, E.; Bedek, M.A.; Zangl, M.; Colt, R.; Šaušová, V.; Birasova, L. Facilitators and Barriers of Marginalized Groups for the Engagement in Online Civic Activities. In Electronic Government and the Information Systems Perspective; Kö, A., Kotsis, G., Tjoa, A.M., Khalil, I., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2024; Volume 14913, pp. 105–121. [Google Scholar] [CrossRef]
  16. Mellon, J.; Sjoberg, F.M.; Peixoto, T.; Lueders, J. The haves and the have nots: Civic technologies and the pathways to government responsiveness. Gov. Inf. Q. 2025, 42, 102007. [Google Scholar] [CrossRef]
  17. Spada, P.; Mellon, J.; Tiago, P.; Sjoberg, F.M. Effects of the internet on participation: Study of a public policy referendum in Brazil. J. Inf. Technol. Politics 2016, 13, 187–207. [Google Scholar] [CrossRef]
  18. Rosello, N.B.; Simonofski, A.; Castiaux, A. Artificial intelligence for digital citizen participation: Design principles for a collective intelligence architecture. Gov. Inf. Q. 2025, 42, 102020. [Google Scholar] [CrossRef]
  19. Rosenberg, D. Older adults’ e-government use for bureaucratic and transactional purposes: The role of website-related perceptions and subjective digital skills. Transform. Gov. People Process Policy 2024, 18, 257–269. [Google Scholar] [CrossRef]
  20. Liu, H.; Zhou, Q.; Liang, S. Digital inclusion in public services for vulnerable groups: A systematic review for research themes and goal-action framework from the lens of public service ecosystem theory. Gov. Inf. Q. 2025, 42, 102019. [Google Scholar] [CrossRef]
  21. Buß, R. Inclusive Design–Go Beyond Accessibility. In Human-Computer Interaction. Human Values and Quality of Life; Kurosu, M., Ed.; Springer International Publishing: Copenhagen, Denmark, 2020; Volume 12183, pp. 400–407. [Google Scholar] [CrossRef]
  22. Bianchi, F.; Hovy, D. On the Gap between Adoption and Understanding in NLP. In Findings of the Association for Computational Linguistics: ACL-IJCNLP; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; Volume 2021, pp. 3895–3901. [Google Scholar]
  23. Amaratunga, T. Understanding Large Language Models: Learning Their Underlying Concepts and Technologies; Apress: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  24. Weidinger, L.; Uesato, J.; Rauh, M.; Griffin, C.; Huang, P.-S.; Mellor, J.; Glaese, A.; Cheng, M.; Balle, B.; Kasirzadeh, A.; et al. Taxonomy of Risks posed by Language Models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery: New York, NY, USA, 2022; pp. 214–229. [Google Scholar] [CrossRef]
  25. Stewart, J.; Kendall, E.; Coote, A. (Eds.) Citizens’ Juries; Institute for Public Policy Research: London, UK, 1994. [Google Scholar]
  26. Ritchie, J.; Spencer, L. Qualitative Data Analysis for Applied Policy Research. In Analyzing Qualitative Data; Bryman, A., Burgess, R., Eds.; Routledge: Abingdon, UK, 1994; pp. 305–329. [Google Scholar]
  27. European Commission. High Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI|Shaping Europe’s Digital Future; European Commission: Luxembourg, 2019; Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 14 March 2026).
  28. Smirnov, I.; Oprea, C.; Strohmaier, M. Toxic comments are associated with reduced activity of volunteer editors on Wikipedia. Proc. Natl. Acad. Sci. USA Nexus 2023, 2, pgad385. [Google Scholar] [CrossRef]
  29. Silva, L.; Mondal, M.; Correa, D.; Benevenuto, F.; Weber, I. Analyzing the Targets of Hate in Online Social Media. In Proceedings of the International AAAI Conference on Web and Social Media, Cologne, Germany, 17–20 May 2021; Volume 10, pp. 687–690. [Google Scholar] [CrossRef]
  30. George Pallivathukal, R.; Kyaw Soe, H.H.; Donald, P.M.; Samson, R.S.; Hj Ismail, A.R. ChatGPT for Academic Purposes: Survey Among Undergraduate Healthcare Students in Malaysia. Cureus 2024, 16, e53032. [Google Scholar] [CrossRef]
  31. Kamoun, F.; El Ayeb, W.; Jabri, I.; Sifi, S.; Iqbal, F. Exploring Students’ and Faculty’s Knowledge, Attitudes, and Perceptions Towards ChatGPT: A Cross-Sectional Empirical Study. J. Inf. Technol. Educ. Res. 2024, 23, 004. [Google Scholar] [CrossRef]
  32. Bewersdorff, A.; Zhai, X.; Roberts, J.; Nerdel, C. Myths, mis- and preconceptions of artificial intelligence: A review of the literature. Comput. Educ. Artif. Intell. 2023, 4, 100143. [Google Scholar] [CrossRef]
  33. Fischer, B.; Peine, A.; Östlund, B. The Importance of User Involvement: A Systematic Review of Involving Older Users in Technology Design. Gerontologist 2020, 60, e513–e523. [Google Scholar] [CrossRef]
  34. Ma, H.; McAreavey, K.; McConville, R.; Liu, W. Explainable AI for Non-Experts: Energy Tariff Forecasting. In Proceedings of the 2022 27th International Conference on Automation and Computing (ICAC); IEEE: Piscataway, NJ, USA, 2022; pp. 1–6. [Google Scholar] [CrossRef]
Figure 3. Results of desired content organization features.
Table 1. Demographics of participants in study 1.
Category | Demographic Variable | Absolute Frequency
Gender | Female | 27
 | Male | 12
Age Groups (years) | 18–30 | 9
 | 31–60 | 17
 | >61 | 13
Country of Residence | Slovakia | 19
 | Romania | 20
Internet Experience | No experience at all | 2
 | Occasional use | 11
 | Daily use | 26
Table 2. Technical considerations.
Theme | Technical Recommendation | Tool | Ethical Principle | Description
Transparency & Human Oversight | Context Ensurance | Simplification/Summarization/Translation tools | Transparency | Provide links to the original text for simplifications, summarizations, and translations
 | User Feedback System | Tool-unspecific | Accountability | Enable users to report bugs or suggest platform/tool improvements
Data Privacy & Fairness | Fair Training Data | Chatbots | Diversity, non-discrimination, and fairness | Ensure unbiased data to avoid reinforcing stereotypes
 | Federated Learning | Speech-to-text transformers | Privacy | Store language models locally to reduce data misuse concerns (federated learning)
Platform Safety & Content Moderation | Bot Detection | Chatbots | Technical robustness and safety | Provide bot detection that identifies unusual platform behavior
 | Collaborative Moderation | Spam-filter/Moderation | Human agency and oversight | Let users mark comments as spam for collaborative moderation
 | Blurring | Spam-filter/Moderation | Societal and environmental wellbeing | Automatically blur sensitive images
 | Flagging | Spam-filter/Moderation | Accountability | Let users flag toxic messages/images for human review; hide them until resolved
 | Duplicated Voting Prevention | Interaction/Posting | Technical robustness and safety | Prevent multiple votes from the same user
Accessibility & Inclusivity | Fluency | Speech-to-text transformers | Technical robustness and safety | Automatically remove hesitation sounds for more fluency
 | Quick Selection Options | Speech-to-text transformers | Diversity, non-discrimination, and fairness | Offer quick selection options (sentence versions recognized by the tool) for speech-to-text tools
 | Speech-Commanded Navigation | Speech-to-text transformers | Diversity, non-discrimination, and fairness | Let speech-to-text handle both chatbot interactions and navigation for users with physical disabilities
 | Help-Bot | Chatbots | Diversity, non-discrimination, and fairness | Provide help and solutions within a chatbot instead of only providing links to other pages
 | Customizable Display | Tool-unspecific | Diversity, non-discrimination, and fairness | Allow font, size, contrast, and color adjustments for accessibility
 | Assistive Tech Compatibility | Tool-unspecific | Diversity, non-discrimination, and fairness | Ensure compatibility with existing assistive technologies (e.g., screen readers)
Table 3. Information requirements for users.
Theme | Information Requirements | Tool | Description
Transparency & Managing Expectations | Chatbot Type | Societies 16 00115 i008 | Inform users whether the bot is rule-based (reliable answers) or generative (possible hallucinations).
Distinct Chatbot Names | Societies 16 00115 i008 | If multiple chatbots are provided (e.g., help, city info), assign distinct names that indicate each bot's function and purpose.
Chatbot Role | Societies 16 00115 i008 | Inform users about the chatbot's main purpose (e.g., city facts, service guidance, form assistance).
Data Basis | Societies 16 00115 i008 | Explain the data sources behind the chatbot so users understand what it "knows."
Misleading Answers | Societies 16 00115 i008 | Warn that generative AI can produce incorrect or misleading answers.
Simplification | Societies 16 00115 i011 | Inform users how text was simplified (e.g., shorter sentences, simpler words) to build trust.
Varying Outputs | Societies 16 00115 i008 | Inform users that generative AI may give different responses to the same prompt and explain why.
Repeated Questions | Societies 16 00115 i008 | Suggest that users re-ask the same question to obtain alternative answers.
Probabilistic Nature | Societies 16 00115 i008 | Clarify that AI chatbots generate answers based on likelihood, not strict logic or pre-defined answers.
Differences to Search Engines | Societies 16 00115 i008 | Outline how generative AI chatbots differ from standard web search engines.
Black Box | Societies 16 00115 i008 | Communicate that even experts do not fully understand the exact mechanisms of an AI chatbot's functionality.
Summarization | Societies 16 00115 i011 | Point out that summarized text may lose context.
Avoid Humanizing | Societies 16 00115 i008 | Communicate that the bot is not human (e.g., use a robot icon) to maintain critical distance.
Societal Impact | Societies 16 00115 i008 | Provide information on how AI chatbots influence society and different sectors.
Framing Awareness | Societies 16 00115 i008 | Raise awareness that chatbot suggestions might limit users' choices by framing interactions.
Data Governance & Fairness | Recommendation Parameters | Societies 16 00115 i013 | Disclose which user factors (e.g., gender, age, location) influence the system's outputs or decisions.
Rare Languages | Societies 16 00115 i012 | Warn of reduced accuracy in less common languages (and even some mainstream ones).
Accountability & Human Oversight | Moderation Rules | Societies 16 00115 i009 | Publish clear posting rules and note any delay caused by moderation checks.
Filtering Criteria | Societies 16 00115 i009 | Clearly outline the criteria used for content filtering (e.g., toxic words).
Label AI Content | Societies 16 00115 i013 | Mark AI outputs (text, images) and inform users that they may contain mistakes.
Error-Proneness | Societies 16 00115 i013 | Remind users that AI is a human-made technology, prone to mistakes and biases.
Accessibility & Inclusivity | User Guidance | Societies 16 00115 i008 | Offer tips and sample questions to help new users interact with the chatbot.
Summarization Metrics | Societies 16 00115 i011 | Inform users how much a text was shortened (e.g., summarization length or percentage).
Assistive Role | Societies 16 00115 i008 | Communicate that the chatbot is an error-prone assistant so that users do not blame themselves for errors.
Feature Purpose | Societies 16 00115 i013 | Justify why each AI function (e.g., speech-to-text) is offered (e.g., for accessibility).
Dialect | Societies 16 00115 i012 | Advise users to avoid dialect for better speech recognition accuracy.
Clear Speech | Societies 16 00115 i012 | Encourage distinct, clear speech to optimize speech-to-text results.
Legend of concerning tools: Societies 16 00115 i008 = Chatbots; Societies 16 00115 i009 = Spam-filters/Moderation; Societies 16 00115 i011 = Simplification/Summarization/Translation; Societies 16 00115 i012 = Speech-to-text transformer; Societies 16 00115 i013 = Tool-unspecific.
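Several of the chatbot-related information requirements above (type disclosure, role, non-human status, varying outputs) could be bundled into short notices shown before a first chat session. The sketch below is purely illustrative, under the assumption of a simple onboarding banner; the dictionary keys, notice wording, and function name are all hypothetical:

```python
# Hypothetical user-facing notices mapped to the information requirements
# identified in the expert focus groups (names and wording are illustrative).
CHATBOT_NOTICES = {
    "chatbot_type": "This assistant uses generative AI and may give incorrect or misleading answers.",
    "role": "Its purpose is to answer questions about city services and forms.",
    "not_human": "You are talking to a bot, not a human.",
    "varying_outputs": "Asking the same question twice may produce different answers.",
}

def onboarding_banner(keys=("chatbot_type", "role", "not_human")):
    """Assemble the disclosures shown to a new user before the first chat."""
    return "\n".join(f"- {CHATBOT_NOTICES[k]}" for k in keys)
```

Keeping the notices in one structure makes it easy to audit which requirements are actually surfaced to users, and to translate or simplify them for vulnerable groups.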

Share and Cite

MDPI and ACS Style

Schneller, M.; Bedek, M.; De Lera, E.; Kocsis, O.; Seier, J.; Albert, D. Inclusive AI-Enhanced Civic Engagement: Empowering Marginalized Voices. Societies 2026, 16, 115. https://doi.org/10.3390/soc16040115

