Artificial Intelligence in Participatory Environments: Technologies, Ethics, and Literacy Aspects

A special issue of Societies (ISSN 2075-4698).

Deadline for manuscript submissions: 31 December 2024

Special Issue Editors


Guest Editor
Multidisciplinary Media and Mediated Communication (M3C) Research Group, School of Journalism & Mass Communications, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
Interests: participatory journalism; online news production; user-generated content; legal issues in media organizations

Guest Editor
Multidisciplinary Media and Mediated Communication (M3C) Research Group, School of Journalism & Mass Communications, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
Interests: media technologies; signal processing; machine learning; media authentication; audiovisual content management; multimedia semantics; semantic web

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) has been a significant object of scholarly attention over the past few decades, profoundly impacting a broad spectrum of academic and industrial fields. In recent years, the area of AI has grown rapidly alongside participatory tools and media environments. In this age of fragmented information flows and vast amounts of raw data, computational developments, along with socio-economic changes, facilitate the incorporation of AI technologies in areas ranging from mathematics, engineering, and medical science to psychology, education, media, and communications. Diverse aspects of people’s daily lives are also shaped by the driving power of AI tools and systems. Applications based on machine/deep learning (ML/DL) and natural language processing (NLP) techniques play an increasingly considerable role in living, learning, working, and collaborating in participatory environments.

Although the utilization of such algorithmic approaches and technologies offers significant benefits for society, there is a strong need to consider the risks and challenges that arise. The call for ethical codes in the use of AI concerns not only the machine training phase and the design of the targeted functionalities but also the deployment and implementation of the envisioned services. For example, the acquisition of Facebook users’ personal data by Cambridge Analytica and the role of Twitter bots in the 2016 United States presidential election stand as milestones in the ongoing discussion about AI misuse. Likewise, disinformation problems have been substantially intensified by the proliferation of generative content and deep learning models, giving rise to so-called deepfakes, which pose severe threats to our societies and democracies. More broadly, issues of transparency, accountability, and justice deserve consideration. Data integrity, privacy, and security protocols must always be in place when users and (crowdsourced) datasets are involved. In this vein, initial steps towards a necessary framework have been taken by national and international authorities. However, the development of precise regulatory guidelines is of great importance in terms of security, data protection, and the avoidance of bias and discrimination, among others. Against this background, and since AI implications are increasingly omnipresent, literacy and educational initiatives should be prioritized for all actors involved (stakeholders, developers, targeted end users, media and communication professionals, journalists, practitioners, etc.). Thus, a multidisciplinary approach can shape the context for a deeper understanding and harmless use of AI without overlooking the constantly evolving (technological) landscape.

The current call for papers (CfP) aims to shed further light on the above perspectives. We invite researchers to submit original research works related, but not limited, to the following multidisciplinary topics:

  • AI techniques in participatory tools and collaborative environments;
  • AI ethics;
  • AI education and multidisciplinary literacy needs;
  • Audience engagement in data crowdsourcing and annotation tasks;
  • Dataset utilization, ethics, and legal concerns in AI;
  • Participatory media, journalism, and AI perspectives;
  • Hate speech detection using AI;
  • Hate crime prevention using AI;
  • AI tools in misinformation and disinformation detection;
  • AI-assisted forensics tools: legal and ethical concerns;
  • AI-assisted management of media assets and/or use rights: technological and ethical concerns;
  • Technological and ethical concerns of big data;
  • Smart systems for education and collaborative working environments;
  • AI-assisted citizen science: technological limitations, ethics, and training concerns.

Contributions must follow one of the three categories of papers for the journal (article, conceptual paper, or review) and address the topic of the Special Issue.

Dr. Theodora Saridou
Prof. Dr. Charalampos Dimoulas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles and review articles, as well as conceptual papers, are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Societies is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • (AI) ethics
  • media industry
  • education
  • digital literacy
  • (algorithmic) journalism
  • participatory/citizen journalism
  • machine/deep learning
  • datasets (crowdsourcing, annotation, utilization)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)

Research

19 pages, 973 KiB  
Article
Training in Co-Creation as a Methodological Approach to Improve AI Fairness
by Ian Slesinger, Evren Yalaz, Stavroula Rizou, Marta Gibin, Emmanouil Krasanakis and Symeon Papadopoulos
Societies 2024, 14(12), 259; https://doi.org/10.3390/soc14120259 - 3 Dec 2024
Abstract
Participatory design (PD) and co-creation (Co-C) approaches to building Artificial Intelligence (AI) systems have become increasingly popular exercises for ensuring greater social inclusion and fairness in technological transformation by accounting for the experiences of vulnerable or disadvantaged social groups; however, such design work is challenging in practice, partly because of the inaccessible domain of technical expertise inherent to AI design. This paper evaluates a methodological approach to make addressing AI bias more accessible by incorporating a training component on AI bias in a Co-C process with vulnerable and marginalized participant groups. This was applied by socio-technical researchers involved in creating an AI bias mitigation developer toolkit. This paper’s analysis emphasizes that critical reflection on how to use training in Co-C appropriately and how such training should be designed and implemented is necessary to ensure training allows for a genuinely more inclusive approach to AI systems design when those most at risk of being adversely affected by AI technologies are often not the intended end-users of said technologies. This is acutely relevant as Co-C exercises are increasingly used to demonstrate regulatory compliance and ethical practice by powerful institutions and actors developing AI systems, particularly in the ethical and regulatory environment coalescing around the European Union’s recent AI Act.

11 pages, 270 KiB  
Article
Exploring Greek Students’ Attitudes Toward Artificial Intelligence: Relationships with AI Ethics, Media, and Digital Literacy
by Asimina Saklaki and Antonis Gardikiotis
Societies 2024, 14(12), 248; https://doi.org/10.3390/soc14120248 - 23 Nov 2024
Abstract
This exploratory study (N = 310) investigates the relationship between students’ attitudes toward artificial intelligence (AI), their attitudes toward AI ethics, and their media and digital literacy levels. This study’s specific objectives were to examine students’ (a) general attitudes toward AI, (b) attitudes toward AI ethics, (c) the relationship between the two, and (d) whether attitudes toward AI are associated with media and digital literacy. Participants, drawn from a convenience sample of university students, completed an online survey including four scales: (a) a general attitude toward AI scale (including two subscales, positive and negative attitudes), (b) an attitude toward AI ethics scale (including two subscales, attitudes toward accountable and non-accountable AI use), (c) a media literacy scale, and (d) a digital literacy scale, alongside demographic information. The findings revealed that students held moderate positive attitudes toward AI and strong attitudes favoring accountable AI use. Interestingly, media literacy was positively related to accountable AI use and negatively to positive attitudes toward AI, whereas digital literacy was positively related to positive attitudes, and negatively to negative attitudes toward AI. These findings carry significant theoretical implications by highlighting the unique relationship of distinct literacies (digital and media) with students’ attitudes. They also offer practical insights for educators, technology designers, and administrators, emphasizing the need to address ethical considerations in AI deployment.
18 pages, 247 KiB  
Article
Digital Mirrors: AI Companions and the Self
by Theodoros Kouros and Venetia Papa
Societies 2024, 14(10), 200; https://doi.org/10.3390/soc14100200 - 8 Oct 2024
Abstract
This exploratory study examines the socio-technical dynamics of Artificial Intelligence Companions (AICs), focusing on user interactions with AI platforms like Replika 9.35.1. Through qualitative analysis, including user interviews and digital ethnography, we explored the nuanced roles played by these AIs in social interactions. Findings revealed that users often form emotional attachments to their AICs, viewing them as empathetic and supportive, thus enhancing emotional well-being. This study highlights how AI companions provide a safe space for self-expression and identity exploration, often without fear of judgment, offering a backstage setting in Goffmanian terms. This research contributes to the discourse on AI’s societal integration, emphasizing how, in interactions with AICs, users often craft and experiment with their identities by acting in ways they would avoid in face-to-face or human-human online interactions due to fear of judgment. This reflects front-stage behavior, in which users manage audience perceptions. Conversely, the backstage, typically hidden, is somewhat disclosed to AICs, revealing deeper aspects of the self.
20 pages, 17928 KiB  
Article
AI-Generated Graffiti Simulation for Building Façade and City Fabric
by Naai-Jung Shih
Societies 2024, 14(8), 142; https://doi.org/10.3390/soc14080142 - 3 Aug 2024
Abstract
Graffiti represents a multi-disciplinary social behavior. It is used to annotate urban landscapes under the assumption that building façades will constantly evolve and acquire modified skins. This study aimed to simulate the interaction between building façades and generative AI-based graffiti using Stable Diffusion® (SD v 1.7.0). The context used for graffiti generation considered the graffiti as the third skin, the remodeled façade as the second skin, and the original façade as the first skin. Graffiti was created based on plain-text descriptions, representative images, renderings of scaled 3D prototype models, and characteristic façades obtained from various seed elaborations. It was then generated from either existing graffiti or the abovementioned context; overlaid upon a campus or city; and judged based on various criteria: style, area, altitude, orientation, distribution, and development. I found that rescaling and reinterpreting the context presented the most creative results: it allowed unexpected interactions between the urban fabric and the dynamics created to be foreseen by elaborating on the context and due to the divergent instrumentation used for the first, second, and third skins. With context awareness or homogeneous aggregation, graphic partitions can thus be merged into new topologically re-arranged polygons that enable a cross-gap creative layout. Almost all façades were found to be applicable. AI generation enhances awareness of the urban fabric and facilitates a review of both the human scale and buildings. AI-based virtual governance can use generative graffiti to facilitate the implementation of preventive measures in an urban context.

18 pages, 1717 KiB  
Article
Importance of University Students’ Perception of Adoption and Training in Artificial Intelligence Tools
by José Carlos Vázquez-Parra, Carolina Henao-Rodríguez, Jenny Paola Lis-Gutiérrez and Sergio Palomino-Gámez
Societies 2024, 14(8), 141; https://doi.org/10.3390/soc14080141 - 3 Aug 2024
Abstract
Undoubtedly, artificial intelligence (AI) tools are becoming increasingly common in people’s lives. The educational field is one of the most reflective on the importance of its adoption. Universities have made great efforts to integrate these new technologies into their classrooms, considering that every future professional will need AI skills and competencies. This article examines the importance of student perception and acceptance in adopting AI tools in higher education effectively. It highlights how students’ positive perceptions can significantly influence their motivation and commitment to learning. This research emphasizes that to integrate AI into university curricula successfully, it is essential to include its technologies in all areas of study and foster positivity among students regarding their use and training. This study’s methodology applied the validated instrument “Perception of Adoption and Training in the Use of Artificial Intelligence Tools in the Profession” to a sample of Mexican students. This exploratory analysis highlights the need for educational institutions to understand and address student perceptions of AI to design educational strategies that incorporate technological advances, are pedagogically relevant, and align with the students’ aspirations and needs.

Other

16 pages, 619 KiB  
Concept Paper
Artificial Intelligence on Food Vulnerability: Future Implications within a Framework of Opportunities and Challenges
by Diosey Ramon Lugo-Morin
Societies 2024, 14(7), 106; https://doi.org/10.3390/soc14070106 - 29 Jun 2024
Abstract
This study explores the field of artificial intelligence (AI) through the lens of Stephen Hawking, who warned of its potential dangers. It aims to provide a comprehensive understanding of AI and its implications for food security using a qualitative approach and offering a contemporary perspective on the topic. The study explores the challenges and opportunities presented by AI in various fields with an emphasis on the global food reality. It also highlights the critical importance of striking a harmonious balance between technological progress and the preservation of local wisdom, cultural diversity, and environmental sustainability. In conclusion, the analysis argues that AI is a transformative force with the potential to address global food shortages and facilitate sustainable food production. However, it is not without significant risks that require rigorous scrutiny and ethical oversight.
