Artificial Intelligence in Participatory Environments: Technologies, Ethics, and Literacy Aspects

A special issue of Societies (ISSN 2075-4698).

Deadline for manuscript submissions: 30 June 2025 | Viewed by 42253

Special Issue Editors


Dr. Theodora Saridou
Guest Editor
Multidisciplinary Media and Mediated Communication (M3C) Research Group, School of Journalism & Mass Communications, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
Interests: participatory journalism; online news production; user-generated content; legal issues in media organizations

Prof. Dr. Charalampos Dimoulas
Guest Editor
Multidisciplinary Media and Mediated Communication (M3C) Research Group, School of Journalism & Mass Communications, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
Interests: media technologies; signal processing; machine learning; media authentication; audiovisual content management; multimedia semantics; semantic web

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) has been a significant object of scholarly attention over the past few decades, profoundly impacting a broad spectrum of academic and industrial fields. In recent years, the area of AI has expanded rapidly alongside participatory tools and media environments. In this age of fragmented information flows and vast amounts of raw data, computational developments along with socio-economic changes facilitate the incorporation of AI technologies in areas ranging from mathematics, engineering, and medical science to psychology, education, media, and communications. Diverse aspects of people’s daily lives are likewise shaped by the driving power of AI tools and systems. Applications based on machine/deep learning (ML/DL) and natural language processing (NLP) techniques play an increasingly considerable role in living, learning, working, and coexisting in collaborative and participatory environments.

Although the utilization of such algorithmic approaches and technologies offers significant benefits for society, the risks and challenges they raise must also be considered. The call for ethical codes in the use of AI concerns not only the machine training stage and the design of the targeted functionalities but also the deployment and implementation of the envisioned services. For example, the acquisition of Facebook users’ personal data by Cambridge Analytica and the role of Twitter bots in the 2016 United States presidential election stand as milestones in the ongoing discussion about AI misuse. Likewise, disinformation problems have intensified substantially with the proliferation of generative content and deep learning models, giving rise to so-called deepfakes, which pose severe threats to our societies and democracies. More broadly, issues of transparency, accountability, and justice deserve consideration. Data integrity, privacy, and security protocols must always be in place when users and (crowdsourced) datasets are involved. In this vein, initial steps towards a necessary framework have been taken by national and international authorities. However, the development of precise regulatory guidelines remains crucial for security, data protection, and the avoidance of bias and discrimination, among other concerns. Against this background, and since the implications of AI are increasingly omnipresent, literacy and educational initiatives should be prioritized for all actors involved (stakeholders, developers, targeted end users, media and communication professionals, journalists, practitioners, etc.). A multidisciplinary approach can thus shape the context for a deeper understanding and the safe use of AI without overlooking the constantly evolving (technological) landscape.

This call for papers (CfP) aims to shed further light on the above perspectives. We invite researchers to submit original research works related, but not limited, to the following multidisciplinary topics:

  • AI techniques in participatory tools and collaborative environments;
  • AI ethics;
  • AI education and multidisciplinary literacy needs;
  • Audience engagement in data crowdsourcing and annotation tasks;
  • Dataset utilization, ethics, and legal concerns in AI;
  • Participatory media, journalism, and AI perspectives;
  • Hate speech detection using AI;
  • Hate crime prevention using AI;
  • AI tools in misinformation and disinformation detection;
  • AI-assisted forensics tools: legal and ethical concerns;
  • AI-assisted management of media assets and/or use rights: technological and ethical concerns;
  • Technological and ethical concerns of big data;
  • Smart systems for education and collaborative working environments;
  • AI-assisted citizen science: technological limitations, ethics, and training concerns.

Contributions must follow one of the three categories of papers for the journal (article, conceptual paper, or review) and address the topic of the Special Issue.

Dr. Theodora Saridou
Prof. Dr. Charalampos Dimoulas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and conceptual papers are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Societies is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • (AI) ethics
  • media industry
  • education
  • digital literacy
  • (algorithmic) journalism
  • participatory/citizen journalism
  • machine/deep learning
  • datasets (crowdsourcing, annotation, utilization)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)

Research

17 pages, 237 KiB  
Article
Journalists’ Perspectives on the Role of Artificial Intelligence in Enhancing Quality Journalism in Greek Local Media
by Zoi Palla and Ioanna Kostarella
Societies 2025, 15(4), 89; https://doi.org/10.3390/soc15040089 - 31 Mar 2025
Viewed by 647
Abstract
The transformative influence of digitalization on journalism is evident across multiple dimensions of the industry. Artificial Intelligence (AI) is reshaping how news is produced, distributed, and consumed, from small local newsrooms to global media organizations, offering benefits such as increased speed, efficiency, and personalization. However, the most critical role AI can play lies in upholding the high standards of accuracy, credibility, and depth that define quality journalism. The ongoing digital transformation prompts a re-evaluation of journalistic norms and practices, positioning quality at the forefront of discussions. This paper focuses on Greece’s media market, which encountered a severe economic crisis, and more specifically on the Greek local media landscape, to investigate the complex relationship between AI and journalism in regional media organizations. More specifically, the study explores how Greek local journalists believe AI can contribute to quality journalism while upholding the core principles of ethics and integrity. It highlights their perspectives on AI, exploring both their hopes for its potential to improve journalistic practices and their concerns about its impact on journalistic values. Through semi-structured interviews with local media industry stakeholders in Greece—including editors, editors-in-chief, and journalists—this study assesses AI’s influence on journalistic quality in local newsrooms. The findings underscore the necessity of employing AI to elevate content standards rather than compromise them. Our research contributes to the discourse on AI in journalism and offers valuable insights for journalists, local news organizations, and policymakers navigating the ethical implications of AI adoption in the pursuit of high-quality journalism in Greek local media.
29 pages, 3263 KiB  
Article
Gamified Engagement for Data Crowdsourcing and AI Literacy: An Investigation in Affective Communication Through Speech Emotion Recognition
by Eleni Siamtanidou, Lazaros Vrysis, Nikolaos Vryzas and Charalampos A. Dimoulas
Societies 2025, 15(3), 54; https://doi.org/10.3390/soc15030054 - 22 Feb 2025
Viewed by 732
Abstract
This research investigates the utilization of entertainment approaches, such as serious games and gamification technologies, to address various challenges and implement targeted tasks. Specifically, it details the design and development of an innovative gamified application named “J-Plus”, aimed at both professionals and non-professionals in journalism. This application facilitates the enjoyable, efficient, and high-quality collection of emotionally tagged speech samples, enhancing the performance and robustness of speech emotion recognition (SER) systems. Additionally, these approaches offer significant educational benefits, providing users with knowledge about emotional speech and artificial intelligence (AI) mechanisms while promoting digital skills. This project was evaluated by 48 participants, with 44 engaging in quantitative assessments and 4 forming an expert group for qualitative methodologies. This evaluation validated the research questions and hypotheses, demonstrating the application’s diverse benefits. Key findings indicate that gamified features can effectively support learning and attract users, with approximately 70% of participants agreeing that serious games and gamification could enhance their motivation to practice and improve their emotional speech. Additionally, 50% of participants identified social interaction features, such as collaboration, as most beneficial for fostering motivation and commitment. The integration of these elements supports reliable and extensive data collection and the advancement of AI algorithms while concurrently developing various skills, such as emotional speech articulation and digital literacy. This paper advocates for the creation of collaborative environments and digital communities through crowdsourcing, balancing technological innovation in the SER sector.
22 pages, 1614 KiB  
Article
The Intersection of AI, Ethics, and Journalism: Greek Journalists’ and Academics’ Perspectives
by Panagiota (Naya) Kalfeli and Christina Angeli
Societies 2025, 15(2), 22; https://doi.org/10.3390/soc15020022 - 25 Jan 2025
Viewed by 2301
Abstract
This study aims to explore the perceptions of Greek journalists and academics on the use of artificial intelligence (AI) in Greek journalism, focusing on its benefits, risks, and potential ethical dilemmas. In particular, it seeks to (i) assess the extent of the use of AI tools by Greek journalists; (ii) investigate views on how AI might alter news production, work routines, and labor relations in the field; and (iii) examine perspectives on the ethical challenges of AI in journalism, particularly in regard to AI-generated images in media content. To achieve this, a series of 28 in-depth semi-structured interviews was conducted with Greek journalists and academics. A thematic analysis was employed to identify key themes and patterns. Overall, the findings suggest that AI penetration in Greek journalism is in its early stages, with no formal training, strategy, or framework in place within Greek media. Regarding ethical concerns, there is evident skepticism and caution among journalists and academics about issues such as data bias, transparency, privacy, and copyright, which are further intensified by the absence of a regulatory framework.
13 pages, 2280 KiB  
Article
Measuring Destination Image Using AI and Big Data: Kastoria’s Image on TripAdvisor
by Anastasia Yannacopoulou and Konstantinos Kallinikos
Societies 2025, 15(1), 5; https://doi.org/10.3390/soc15010005 - 28 Dec 2024
Viewed by 1993
Abstract
In recent years, the growing number of Online Travel Review (OTR) platforms and advances in social media and search engine technologies have led to a new way of accessing information for tourists, placing projected Tourist Destination Image (TDI) and electronic Word of Mouth (eWoM) at the heart of travel decision-making. This research introduces a big data-driven approach to analyzing and measuring the perceived and conveyed TDI in OTRs concerning the reflected perceptive, spatial, and affective dimensions of search results. To test this approach, a massive search engine metadata analysis was conducted on approximately 2700 reviews from TripAdvisor users for the “Attractions” category of the city of Kastoria, Greece. Using artificial intelligence, an analysis of the photos accompanying user comments on TripAdvisor was performed. Based on the results, we created five themes for the image narratives, depending on the focus of interest (monument, activity, self, other person, and unknown), into which the content was categorized. The results obtained allow us to extract information that can be used in business intelligence applications.
15 pages, 242 KiB  
Article
Generative AI in Education: Assessing Usability, Ethical Implications, and Communication Effectiveness
by Maria Matsiola, Georgios Lappas and Anastasia Yannacopoulou
Societies 2024, 14(12), 267; https://doi.org/10.3390/soc14120267 - 17 Dec 2024
Viewed by 4781
Abstract
The rapid expansion of generative artificial intelligence tools for textual production, such as ChatGPT, has been accompanied by a proliferation of similar tools used for creating images, audiovisual content, and motion graphics. These tools, valued for their creativity, are increasingly employed in the fields of art, education, and entertainment to enhance content creation, particularly on social media, while also reducing production costs. However, their use is not without controversy, as they raise significant ethical concerns, including the potential for generating fake news and disinformation. This paper presents an analysis of higher education students’ perspectives on the use of generative artificial intelligence tools within the context of a university course. The research was conducted through semi-structured interviews with 10 fourth-year students from the Department of Communication and Digital Media at the University of Western Macedonia. The study aims to provide an initial understanding of the impact of these tools in both education and communication, focusing on students who are future professionals in the communication field. The interviews explored the potential benefits of these technologies, which were valued highly, and the challenges they present, such as privacy and credibility issues, which concerned the participants. Misinformation and deception were cited as the most significant risks, while the tools were evaluated positively for communicative purposes, albeit with continued skepticism among participants.
19 pages, 973 KiB  
Article
Training in Co-Creation as a Methodological Approach to Improve AI Fairness
by Ian Slesinger, Evren Yalaz, Stavroula Rizou, Marta Gibin, Emmanouil Krasanakis and Symeon Papadopoulos
Societies 2024, 14(12), 259; https://doi.org/10.3390/soc14120259 - 3 Dec 2024
Viewed by 1740
Abstract
Participatory design (PD) and co-creation (Co-C) approaches to building Artificial Intelligence (AI) systems have become increasingly popular exercises for ensuring greater social inclusion and fairness in technological transformation by accounting for the experiences of vulnerable or disadvantaged social groups; however, such design work is challenging in practice, partly because of the inaccessible domain of technical expertise inherent to AI design. This paper evaluates a methodological approach to make addressing AI bias more accessible by incorporating a training component on AI bias in a Co-C process with vulnerable and marginalized participant groups. This was applied by socio-technical researchers involved in creating an AI bias mitigation developer toolkit. This paper’s analysis emphasizes that critical reflection on how to use training in Co-C appropriately and how such training should be designed and implemented is necessary to ensure training allows for a genuinely more inclusive approach to AI systems design when those most at risk of being adversely affected by AI technologies are often not the intended end-users of said technologies. This is acutely relevant as Co-C exercises are increasingly used to demonstrate regulatory compliance and ethical practice by powerful institutions and actors developing AI systems, particularly in the ethical and regulatory environment coalescing around the European Union’s recent AI Act.
11 pages, 270 KiB  
Article
Exploring Greek Students’ Attitudes Toward Artificial Intelligence: Relationships with AI Ethics, Media, and Digital Literacy
by Asimina Saklaki and Antonis Gardikiotis
Societies 2024, 14(12), 248; https://doi.org/10.3390/soc14120248 - 23 Nov 2024
Cited by 1 | Viewed by 1868
Abstract
This exploratory study (N = 310) investigates the relationship between students’ attitudes toward artificial intelligence (AI), their attitudes toward AI ethics, and their media and digital literacy levels. This study’s specific objectives were to examine students’ (a) general attitudes toward AI, (b) attitudes toward AI ethics, (c) the relationship between the two, and (d) whether attitudes toward AI are associated with media and digital literacy. Participants, drawn from a convenience sample of university students, completed an online survey including four scales: (a) a general attitude toward AI scale (including two subscales, positive and negative attitudes), (b) an attitude toward AI ethics scale (including two subscales, attitudes toward accountable and non-accountable AI use), (c) a media literacy scale, and (d) a digital literacy scale, alongside demographic information. The findings revealed that students held moderate positive attitudes toward AI and strong attitudes favoring accountable AI use. Interestingly, media literacy was positively related to accountable AI use and negatively to positive attitudes toward AI, whereas digital literacy was positively related to positive attitudes, and negatively to negative attitudes toward AI. These findings carry significant theoretical implications by highlighting the unique relationship of distinct literacies (digital and media) with students’ attitudes. They also offer practical insights for educators, technology designers, and administrators, emphasizing the need to address ethical considerations in AI deployment.
18 pages, 247 KiB  
Article
Digital Mirrors: AI Companions and the Self
by Theodoros Kouros and Venetia Papa
Societies 2024, 14(10), 200; https://doi.org/10.3390/soc14100200 - 8 Oct 2024
Cited by 1 | Viewed by 10457
Abstract
This exploratory study examines the socio-technical dynamics of Artificial Intelligence Companions (AICs), focusing on user interactions with AI platforms like Replika 9.35.1. Through qualitative analysis, including user interviews and digital ethnography, we explored the nuanced roles played by these AIs in social interactions. Findings revealed that users often form emotional attachments to their AICs, viewing them as empathetic and supportive, thus enhancing emotional well-being. This study highlights how AI companions provide a safe space for self-expression and identity exploration, often without fear of judgment, offering a backstage setting in Goffmanian terms. This research contributes to the discourse on AI’s societal integration, emphasizing how, in interactions with AICs, users often craft and experiment with their identities by acting in ways they would avoid in face-to-face or human-human online interactions due to fear of judgment. This reflects front-stage behavior, in which users manage audience perceptions. Conversely, the backstage, typically hidden, is somewhat disclosed to AICs, revealing deeper aspects of the self.
20 pages, 17928 KiB  
Article
AI-Generated Graffiti Simulation for Building Façade and City Fabric
by Naai-Jung Shih
Societies 2024, 14(8), 142; https://doi.org/10.3390/soc14080142 - 3 Aug 2024
Viewed by 1371
Abstract
Graffiti represents a multi-disciplinary social behavior. It is used to annotate urban landscapes under the assumption that building façades will constantly evolve and acquire modified skins. This study aimed to simulate the interaction between building façades and generative AI-based graffiti using Stable Diffusion® (SD v 1.7.0). The context used for graffiti generation considered the graffiti as the third skin, the remodeled façade as the second skin, and the original façade as the first skin. Graffiti was created based on plain-text descriptions, representative images, renderings of scaled 3D prototype models, and characteristic façades obtained from various seed elaborations. It was then generated from either existing graffiti or the abovementioned context; overlaid upon a campus or city; and judged based on various criteria: style, area, altitude, orientation, distribution, and development. I found that rescaling and reinterpreting the context presented the most creative results: it allowed unexpected interactions between the urban fabric and the dynamics created to be foreseen by elaborating on the context and due to the divergent instrumentation used for the first, second, and third skins. With context awareness or homogeneous aggregation, graphic partitions can thus be merged into new topologically re-arranged polygons that enable a cross-gap creative layout. Almost all façades were found to be applicable. AI generation enhances awareness of the urban fabric and facilitates a review of both the human scale and buildings. AI-based virtual governance can use generative graffiti to facilitate the implementation of preventive measures in an urban context.
18 pages, 1717 KiB  
Article
Importance of University Students’ Perception of Adoption and Training in Artificial Intelligence Tools
by José Carlos Vázquez-Parra, Carolina Henao-Rodríguez, Jenny Paola Lis-Gutiérrez and Sergio Palomino-Gámez
Societies 2024, 14(8), 141; https://doi.org/10.3390/soc14080141 - 3 Aug 2024
Cited by 3 | Viewed by 5041
Abstract
Undoubtedly, artificial intelligence (AI) tools are becoming increasingly common in people’s lives. The educational field is one of the most reflective about the importance of their adoption. Universities have made great efforts to integrate these new technologies into their classrooms, considering that every future professional will need AI skills and competencies. This article examines the importance of student perception and acceptance in adopting AI tools in higher education effectively. It highlights how students’ positive perceptions can significantly influence their motivation and commitment to learning. This research emphasizes that to integrate AI into university curricula successfully, it is essential to include its technologies in all areas of study and foster positivity among students regarding their use and training. This study’s methodology applied the validated instrument “Perception of Adoption and Training in the Use of Artificial Intelligence Tools in the Profession” to a sample of Mexican students. This exploratory analysis highlights the need for educational institutions to understand and address student perceptions of AI to design educational strategies that incorporate technological advances, are pedagogically relevant, and align with the students’ aspirations and needs.

Other

16 pages, 587 KiB  
Concept Paper
Exploring AI Amid the Hype: A Critical Reflection Around the Applications and Implications of AI in Journalism
by Paschalia (Lia) Spyridou and Maria Ioannou
Societies 2025, 15(2), 23; https://doi.org/10.3390/soc15020023 - 28 Jan 2025
Viewed by 2885
Abstract
Over the last decade, AI has increasingly been adopted by newsrooms in the form of different tools aiming to support journalists and augment the capabilities of the profession. The main idea behind the adoption of AI is that it can make journalists’ work more efficient, freeing them up from some repetitive or routine tasks while enhancing their research and storytelling techniques. Against this idea, and drawing on the concept of “hype”, we employ a critical reflection on the lens often used to talk about journalism and AI. We suggest that the severe sustainability crisis of journalism, rooted in growing pressure from platforms and major corporate competitors, changing news consumption habits and rituals and the growing technologization of news media, leads to the obsessive pursuit of technology in the absence of clear and research-informed strategies which cater to journalism’s civic role. As AI is changing and (re)shaping norms and practices associated with news making, many questions and debates are raised pertaining to the quality and plurality of outputs created by AI. Given the disproportionate attention paid to technological innovation with little interpretation, the present article explores how AI is impacting journalism. Additionally, using the political economy framework, we analyze the fundamental issues and challenges journalism is faced with in terms of both practices and professional sustainability. In the process, we untangle the AI hype and attempt to shed light on how AI can help journalism regain its civic role. We argue that despite the advantages AI provides to journalism, we should avoid the “shiny things perspective”, which tends to emphasize productivity and profitability, and rather focus on the constructive synergy of humans and machines to achieve the six or seven things journalism can do for democracy. Otherwise, we are heading toward “alien intelligence” which is agnostic to the core normative values of journalism.
16 pages, 1853 KiB  
Concept Paper
Generative Artificial Intelligence and Regulations: Can We Plan a Resilient Journey Toward the Safe Application of Generative Artificial Intelligence?
by Matteo Bodini
Societies 2024, 14(12), 268; https://doi.org/10.3390/soc14120268 - 18 Dec 2024
Viewed by 2162
Abstract
The rapid advancements of Generative Artificial Intelligence (GenAI) technologies, such as the well-known OpenAI ChatGPT and Microsoft Copilot, have sparked significant societal, economic, and regulatory challenges. Indeed, while these technologies promise unprecedented productivity gains, they also raise several concerns, such as job loss and displacement, deepfakes, and intellectual property violations. The present article explores the current regulatory landscape of GenAI across the major global players, highlighting the divergent approaches adopted by the United States, United Kingdom, China, and the European Union. By drawing parallels with other complex global issues such as climate change and nuclear proliferation, this paper argues that the available traditional regulatory frameworks may be insufficient to address the unique challenges posed by GenAI. As a result, this article introduces a resilience-focused regulatory approach that emphasizes aspects such as adaptability, swift incident response, and recovery mechanisms to mitigate potential harm. By analyzing the existing regulations and suggesting potential future directions, the present article aims to contribute to the ongoing discourse on how to effectively govern GenAI technologies in a rapidly evolving regulatory landscape.
16 pages, 619 KiB  
Concept Paper
Artificial Intelligence on Food Vulnerability: Future Implications within a Framework of Opportunities and Challenges
by Diosey Ramon Lugo-Morin
Societies 2024, 14(7), 106; https://doi.org/10.3390/soc14070106 - 29 Jun 2024
Cited by 4 | Viewed by 3070
Abstract
This study explores the field of artificial intelligence (AI) through the lens of Stephen Hawking, who warned of its potential dangers. It aims to provide a comprehensive understanding of AI and its implications for food security using a qualitative approach and offering a contemporary perspective on the topic. The study explores the challenges and opportunities presented by AI in various fields with an emphasis on the global food reality. It also highlights the critical importance of striking a harmonious balance between technological progress and the preservation of local wisdom, cultural diversity, and environmental sustainability. In conclusion, the analysis argues that AI is a transformative force with the potential to address global food shortages and facilitate sustainable food production. However, it is not without significant risks that require rigorous scrutiny and ethical oversight.