Article

Efficiency and Uncertainty: Understanding Journalists’ Attitudes Toward AI Adoption in Greece

by Maria Matsiola 1,* and Zacharenia Pilitsidou 2
1 Department of Communication and Digital Media, University of Western Macedonia, 52100 Kastoria, Greece
2 School of Journalism and Mass Communications, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Journal. Media 2025, 6(4), 187; https://doi.org/10.3390/journalmedia6040187 (registering DOI)
Submission received: 31 August 2025 / Revised: 27 September 2025 / Accepted: 29 October 2025 / Published: 31 October 2025

Abstract

In recent years, the concept of artificial intelligence (AI) has garnered increasing scholarly and professional interest, particularly regarding its implementation across various domains, including journalism. As with any emerging technological paradigm, AI must be examined within its contextual framework to elucidate its potential advantages, challenges, and transformative implications. This study, situated within the theoretical lens of Actor–Network Theory, employs a mixed methods approach and, specifically, an explanatory sequential design to explore the integration of AI in contemporary Greek journalism. Primary data were collected through a structured questionnaire (N = 148) administered to professional journalists in Greece, followed by semi-structured interviews with a subset of participants (N = 7). The findings indicate that journalists perceive AI as a tool capable of enhancing work efficiency, minimizing human error, and facilitating the processing of unstructured data. However, respondents also expressed concerns that AI adoption is unlikely to lead to improved financial compensation and may contribute to job displacement within the sector. Additionally, participants emphasized the necessity of regular professional development initiatives, advocating for the organization of seminars on emerging technologies on a biannual or annual basis.

1. Introduction

Journalism practice has always been closely linked to the integration of new technological developments (Spyridou et al., 2013; Lewis et al., 2019; Wiard, 2019). The applications of artificial intelligence (AI) are no exception and are gradually finding their role in the field of journalism. Research is being conducted worldwide, looking for evidence of the transformation that journalism is expected to experience due to the impact of artificial intelligence in many different sectors (Diakopoulos, 2013; Diakopoulos, 2019; Stray, 2021; Túñez-López et al., 2021).
When the journalism profession is divided into three stages, newsgathering, news production, and news distribution, the changes introduced are sought across all of them and concern not exclusively technological factors but also sociological ones. As Lewis and Usher (2016, p. 553) define it, news innovation is “the reimagination of news, both its technological character and its normative function in society”. Research on the role of artificial intelligence technologies moves in different directions, but there is a common effort to balance traditional journalistic skills and the ethical foundations of the profession with an understanding of technological changes and their potential usefulness to the profession.
Algorithms play evolving institutional roles in contemporary media systems, serving both as a response to the explosion of available data and as a force driving media organizations to collect more information and exploit new possibilities (Napoli, 2014). Within this context, automation, data analysis, and news provision in new creative forms, using alternative storytelling techniques, can make a journalist’s work more effective and offer opportunities to reach a wider and/or specialized audience in a more understandable way (Broussard et al., 2019; Kotenidis & Veglis, 2021). However, the use of artificial intelligence techniques raises questions regarding the transparency of the methods used, especially regarding algorithmic decision-making and editorial integrity when creating content in relation to ethical principles and challenges (Dörr & Hollnbuchner, 2017; Noain-Sánchez, 2022). As journalism’s boundaries are reshaped by social, mobile, and interactive media, new actors and norms signal emerging definitions of news in a profession in transition (Lewis & Usher, 2016).
Furthermore, in addition to professionals, media organizations undergoing a period of economic difficulty are, in order to survive in a competitive market, seeking solutions to address the substantial demand for content across multiple formats, with AI technologies offering potential support, particularly through the automation of specific tasks (Min & Fink, 2021; Parratt Fernández et al., 2024). According to the Newman (2024) report, more than half of the publisher respondents (56%) stated that the most important use of technology is AI for back-end news automation. Moreover, news organizations increasingly view tasks like social media management and audience engagement as integral to news production and pay close attention to web analytics. In this context, new editorial roles that demand skills outside traditional journalism training are needed, while companies operating at the boundaries of journalism must also understand news production values in order to be trusted (Belair-Gagnon & Holton, 2018). Therefore, peripheral actors who are involved in journalistic tasks use the norms of their fields to satisfy the audience’s needs, while journalism remains at the center (Tandoc, 2019).
Although research on AI in journalism has expanded globally, most studies focus on technologically advanced or large-scale media systems. Far less is known about how AI is perceived and integrated in smaller, economically fragile, and trust-challenged markets such as Greece (Karadimitriou, 2020; Papathanassopoulos et al., 2021; Podara & Matsiola, 2023; Price et al., 2024; Vatikiotis et al., 2024). This gap is important because local conditions, such as economic recession, weak digital infrastructure, and limited institutional support, may significantly shape how journalists view the opportunities and risks of AI adoption.
This study addresses this gap by examining Greek journalists’ perceptions of AI in news production, using Actor–Network Theory (ANT) to analyze the interplay between human and non-human actors in this evolving media ecosystem. By focusing on Greece, this study contributes to understanding how AI adoption intersects with professional ethics, newsroom practices, and external pressures in contexts marked by economic and digital constraints. Based on a mixed methods design in which data were first collected from professional journalists through questionnaires (quantitative) and then elaborated through follow-up interviews (qualitative), this research aims to reveal Greek journalists’ opinions and apprehensions regarding the use of artificial intelligence.

1.1. Actor–Network Theory

Actor–Network Theory (ANT) explores how human and non-human entities, as heterogeneous elements, interact within a network, and it attributes roles to both in sociological analysis. Developed in the 1980s within the sociology of science and technology (Latour, 1987, 2005; Law, 1999), ANT studies situations of change and evolving practices (Wiard, 2019); rather than focusing on intention, it emphasizes influence and challenges broad concepts (Crawford, 2020).
Networks are central to Actor–Network Theory (ANT) because they represent the stable relationships or associations through which the world is constructed and organized. ANT provides a framework for exploring the material aspects of digital newswork while enabling the incorporation of technology into social analysis (Stalph, 2019). The journalistic process has been studied within this framework as a collaborative endeavor in which a diversity of agents, human actors and non-human actors—encompassing technologies, in general, and, more recently, artificial intelligence—work together to produce and disseminate news (Spyridou et al., 2013; Primo & Zago, 2014; Lewis & Westlund, 2015; Domingo & Wiard, 2016; Wu et al., 2018; Stalph, 2019; Ryfe, 2022; Nawararthne & Storni, 2023).
In this framework, algorithms and AI technologies, as non-human actors, function as mediators that change journalistic practices, leading to workflow optimization, and shape newsroom dynamics while redefining job conditions (Beckett, 2019). Meanwhile, journalists, as human actors, provide contextual insights, since they possess the judgment and ethical skills that are crucial in this profession. Newsrooms are thus hybrid sociotechnical networks in which human and non-human actors collaborate to co-produce news. Furthermore, actants outside newsrooms, such as companies operating at the boundaries of media organizations (Belair-Gagnon & Holton, 2018; Sirén-Heikel et al., 2023; Parratt Fernández et al., 2024), are equally significant in shaping the evolving media ecosystem. External actants such as technology providers, platform companies, and commercial imperatives extend the network beyond the newsroom. In this sense, news production emerges as a hybrid practice where professional judgment, ethical considerations, and algorithmic capabilities intersect.

1.2. Journalism and Artificial Intelligence

Artificial intelligence may be the most used term of recent years, connected with every aspect of daily life. Journalism and the media industry are no exception, since AI tools and applications are being employed throughout the news process, while, at the same time, concerns are raised regarding their use. C. G. Lindén (2020) characteristically argues that if earlier machines functioned as mediators while humans acted as communicators, in contemporary reality, the roles are partly reassigned. As a new form of technology, AI is being tested by media owners and professionals to determine its role and usability. Furthermore, scholars propose frameworks that align machine learning development with journalistic ethics, relying on the core principles of ethical journalism, accuracy, fairness, and transparency, by embedding accountability and fostering dialogue between technical and journalistic communities (Dierickx et al., 2024).
Artificial intelligence technologies can be used at all stages of the journalistic process: in the search for data, while writing and editing news stories, and during the dissemination phase. This intrusion has led to a discussion regarding the role of technology in journalism: whether it is “the aide to a human led endeavor” or the other way around (Zamith & Haim, 2020, p. 1). Diakopoulos (2019, p. 13), considering the consequences of the Panama Papers investigative story in his book Automating the News: How Algorithms Are Rewriting the Media, argues that a mantra in this case could be the following: “Automate what computers do best, let people do the rest.” Diakopoulos thus concludes that the synergy between human expertise and machine capabilities is expected to shape the ongoing transformation of newswork.
Thäsler-Kordonouri and Koliska (2025) examined three scenarios of AI integration in journalism, outlining their benefits, risks, and ethical implications: (a) the actor-centered scenario, excluding AI; (b) the actant-centered scenario, including AI; and (c) the hybrid scenario, with supervised mutual influence. They argue that over-reliance on automation may reduce transparency, accuracy, and audience trust while weakening journalists’ intuition and professional growth. Furthermore, they claim that as editorial tasks shift toward reviewing AI outputs, core values and socialization in journalism risk being undermined; thus, they stress the importance of maintaining strong human oversight in AI-assisted news production.
Recent research in the Basque Country among 504 journalists has examined their perceptions of AI’s impact on disinformation, reflecting growing concerns about generative AI in news production (Peña-Alonso et al., 2025). Nearly 90% of them believe that AI will substantially increase the risks of disinformation, particularly through challenges in detecting false content, deepfakes, and inaccurate data.
AI has been researched in a variety of applications for journalism. In the field of investigative reporting, Stray (2021) argues that AI can be used to extract information from diverse documents and to link records across databases, although constraints such as the uniqueness of journalistic problems, the lack of accessible or reusable training data, restricted access to relevant information, and the need for very high accuracy to avoid legal and ethical risks prohibit wider use. Broussard (2014), claiming that high-impact investigative stories often require long timelines that conflict with current market pressures, making efficiency a critical need in the newsroom, researched an AI-based system that showed how software tools can accelerate the investigative process, enabling the production of compelling, data-rich public affairs stories with meaningful social impact.
The report of the World Association of News Publishers (WAN-IFRA) explores the rise in automated news generation from structured data, showing how it supports rather than replaces journalists by handling repetitive, low-priority tasks (C.-G. Lindén et al., 2019). Drawing on five international case studies, it frames automation as an incremental extension of industrial processes, raising important questions about implementation choices, ethics, and transparency in newsrooms.
de-Lima-Santos and Ceron (2021) studied AI adoption in journalism across seven subfields, finding greater development in machine learning, computer vision, and planning/optimization, while other areas remain underutilized. Furthermore, they concluded that most projects rely on funding from major tech companies, thus limiting broader industry potential and underscoring the need for further research. One distinct sector that has been researched involves Natural Language Generation (NLG), a subfield of AI that creates natural-language outputs from structured and unstructured data, with the content constrained by predefined rules and technical limitations (Sirén-Heikel et al., 2023). Scholars argue that NLG works best in data-rich domains such as sports, elections, and financial reports (Dörr, 2016). However, an analysis of meta-journalistic discourse around ChatGPT’s launch revealed that journalists view generative AI as a greater threat than earlier automation, defending their authority by emphasizing both collective professionalism and individual voice (Van Dalen, 2024). Additionally, C.-G. Lindén et al. (2019) claim that NLG is not widely exploited in algorithmic journalism due to the complexity of the natural language used in journalistic settings.
There are tools used in newsrooms and tools used by journalists personally; furthermore, there are technology companies that use AI to provide automated news stories for media organizations (Parratt Fernández et al., 2024). Sirén-Heikel et al. (2023) argue that AI tools provide affordances for newsrooms, while technologists ease tensions with journalistic practices by framing AI-generated stories as non-journalistic. Moreover, these companies’ interactions with news organizations allow them to adopt elements of journalistic logic, enabling them to orient their work towards norms that are more profitable for the news industry.

1.3. AI Use in Greek Media

While international research on AI in journalism is expanding, studies on smaller and economically constrained media markets remain limited. Greece represents a particularly relevant case, given its prolonged economic crisis, fragile media ecosystem, and digital transition challenges (Podara & Matsiola, 2023). In the Greek media reality, Kostarella et al. (2025) studied the use of AI in local newsrooms, identifying constraints and exploring opportunities. The findings show that journalists have a limited awareness of AI’s potential to enhance daily work, underscoring the need to address risks and possibilities for regional media in the Mediterranean context. Furthermore, Palla and Kostarella (2025) examined how Greek local journalists perceive AI’s role in enhancing quality journalism amid economic and digital challenges, highlighting both optimism about its potential and concerns about its impact on ethics and values. Through interviews with media professionals, they emphasized that AI should be employed to strengthen, not undermine, the standards of accuracy, credibility, and integrity in journalism.
Kalfeli and Angeli (2025), based on 28 interviews with Greek journalists and academics, explored the perceptions of AI in journalism, focusing on its benefits, risks, and ethical dilemmas. They found that AI adoption in Greek media is still limited, lacking training or strategy, while concerns around bias, transparency, privacy, and copyright remain strong, heightened by the absence of regulation.
Pleios and Tastsoglou (2025) argue for the continued necessity of human oversight, particularly from professional journalists, highlighting six key challenges—data incompleteness, copyright concerns, opacity of sources, agenda-setting and framing, lack of critical perspective, and the relation to fake news—that currently constrain the effective deployment of AI-generated content in news production. They suggest that a comprehensive digital transition strategy is needed to preserve and democratize access to informational resources, potentially through UNESCO-led initiatives. Furthermore, they believe that human oversight remains essential to safeguard ethical standards, ensure the responsible use of AI in news production, and maintain transparency by clearly distinguishing AI-generated from human-generated content.
This study extends the existing scholarship on AI in Greek journalism by offering an empirical contribution that differs in both scope and method from prior works. For example, Kostarella et al. (2025) and Palla and Kostarella (2025) examined local journalists’ perceptions of AI, emphasizing optimism alongside ethical concerns, but their findings were based exclusively on journalists from local media. Kalfeli and Angeli (2025), through in-depth semi-structured interviews, provided rich insights regarding the extent of the use of AI tools by Greek journalists, the modification of work routines, and ethical considerations. Pleios and Tastsoglou (2025) offered a concise overview of the implications of integrating AI-generated content into news production, providing an outline of the most significant aspects of its utilization in journalistic practice. By contrast, our mixed methods design integrates a survey (N = 148) conducted throughout Greece, which systematically maps patterns of attitudes across a broader spectrum of professional journalists, with follow-up interviews (N = 7) that contextualize and explain these patterns in greater depth, trying to explore the reasoning behind them. This dual approach yields a more comprehensive understanding of how AI is perceived in practice, highlighting not only its potential benefits and risks but also concrete professional development needs that have received little attention in earlier research.

2. Materials and Methods

This study employed a mixed methods approach (Plano Clark, 2016), specifically adopting an explanatory sequential design (Ivankova et al., 2006). In this type of design, quantitative data collection and analysis are conducted first, followed by the collection of qualitative data to further explain or elaborate on the quantitative findings. The rationale for this approach lies in the capacity of quantitative data to offer a broad overview of the social phenomenon under investigation, while qualitative data provide deeper insight into the underlying meanings and experiences. In the present study, following the analysis of the quantitative results, semi-structured interviews were conducted to explore the emotions and perceptions of professional journalists regarding the integration of artificial intelligence into their daily professional routines.
Semi-structured interviews are primarily employed in qualitative research when the researcher seeks to balance structure with flexibility. They are particularly valuable in studies where an in-depth understanding of participants’ experiences, perspectives, or attitudes is required, without the constraints imposed by fully structured interviews (Clark et al., 2021; Creswell & Poth, 2016).

2.1. Quantitative Data Analysis

This research was implemented through a structured questionnaire-based survey. The design of the questionnaire was informed, but not confined, by two global surveys: The State of Data Journalism 2021 (Datajournalism.com, 2021, https://datajournalism.com/survey/2021/, accessed on 10 July 2023) and The State of Data Journalism 2022 (Datajournalism.com, 2022, https://datajournalism.com/survey/2022/, accessed on 10 July 2023), conducted by Datajournalism.com (https://datajournalism.com/), an initiative developed by the European Journalism Centre with the support of the Google News Initiative. Although these international surveys provided valuable frameworks, it is noteworthy that Greece was not among the participating countries, leaving a significant gap in the literature concerning the Greek journalistic landscape. To address this gap, the questionnaire was contextually adapted to reflect the specific characteristics of Greek journalism. The instrument was developed and distributed via Google Forms to ensure ease of access for all prospective participants. Data collection took place between September and November 2023, and the questionnaire was initially distributed through the journalists’ unions after clearance had been received. Because many Greek journalists are nowadays not enrolled in unions, which makes it difficult to estimate and reach the true population, the questionnaire was also sent to media outlets (newspapers, radio, TV, and web) to be distributed to their employees; however, as the researchers observed, the outlets were reluctant to forward it. To resolve this issue and achieve greater participation, the questionnaire was uploaded to the social media platforms of one of the researchers, who urged participants to freely forward it to more coworkers.
Therefore, homogeneous snowball sampling was performed in this investigation. This decision reflects the observed decline in survey response rates (especially among journalists) and the ability to collect data through social media, since it was difficult to approach individuals directly.
The questionnaire was structured into five (5) distinct sections. Section 1 sought informed consent from participants regarding the use and publication of the research findings, while Section 2 focused on assessing participants’ familiarity with key terms related to artificial intelligence, their self-reported skill levels, and their personal views concerning the potential benefits, challenges, and opportunities associated with the use of artificial intelligence in specific domains. Most of the items in this section employed a five-point Likert scale, measuring either the frequency of use or levels of agreement. Section 3 gathered information on participants’ professional status, and Section 4 included demographic questions.
The questionnaire is grounded in and draws upon Actor–Network Theory (ANT). Questions on skill levels, regarding areas such as journalism, data analysis, data visualization, statistics, machine learning, and data mining, relate to humans’ competencies, mapping the role of human actors in engaging with technological tools. Questions on AI use, regarding aspects such as faster processing, in-depth analysis, personalization, managing unstructured data, and sentiment analysis, relate to non-human actors whose employment can modify journalistic workflows, decision-making, and content production. Finally, questions on job loss, financial rewards, ethical issues, collaborations, audience engagement, and misinformation relate to network effects, concerning how human and non-human actors co-produce effects.
Data analysis was conducted using Jamovi statistical software (version 2.6.45) (Şahin & Aybek, 2019), a free and open-source platform designed to facilitate accessible and transparent statistical analysis. The analysis included descriptive statistics (Kaur et al., 2018) (such as means, standard deviations, and frequencies) to summarize the key trends and patterns within the dataset. In addition, Cohen’s d was used to measure the effect size between groups, providing an estimate of the magnitude of differences observed. Cohen’s d is a standardized measure that helps interpret practical significance, with values around 0.2 indicating small effects, around 0.5 medium effects, and 0.8 or above large effects (Gignac & Szodorai, 2016).
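Although the study’s analysis was run in Jamovi, the computation behind Cohen’s d is simple enough to illustrate directly. The following Python sketch (purely illustrative, not the authors’ code; the sample data and the `interpret` helper are invented for the example) computes d from two independent groups using the pooled standard deviation and applies the conventional benchmarks cited above:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (denominator n - 1)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def interpret(d):
    """Conventional benchmarks: ~0.2 small, ~0.5 medium, >=0.8 large."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"
```

For instance, two groups whose means differ by half a pooled standard deviation yield d = 0.5, a medium effect under these benchmarks.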
Prior to the distribution of the questionnaire, a pilot study (Van Teijlingen & Hundley, 2002) was conducted involving five professional journalists, each with more than twenty years of experience in the field. Their feedback offered valuable insights regarding the clarity, relevance, and structure of the questionnaire items. In each case, a member of the research team was present to identify and address any ambiguities or potential shortcomings through direct discussion. Based on the feedback obtained during this preliminary phase, the final version of the questionnaire was subsequently refined and finalized.

2.2. Qualitative Data Analysis

Participants for the qualitative follow-up phase were purposively sampled from the prior quantitative questionnaire study, in accordance with the explanatory sequential design methodology. Specifically, after the initial social media call for the quantitative research, a second invitation was issued seeking volunteers; thus, there is no exact figure for the number of professionals approached. Nevertheless, we ensured diversity in the sample by including participants from different media sectors and platforms. Accordingly, seven semi-structured interviews were conducted with professional journalists (five women and two men) who had already participated in the initial survey. The professionals were aged from 27 to 55 years old, and their educational level ranged from a university diploma to a PhD degree. They cover political, sports, and social–cultural issues, and among them is an editor-in-chief. They work in all forms of media, including TV, radio, newspapers, and websites, and all but one work in more than one medium. Prior to their participation in the second phase, and after being informed about the procedures and their right to withdraw at any point with their answers deleted, they provided their consent.
The aim was to further explore and contextualize the quantitative findings by gaining in-depth insights into participants’ personal experiences, perceptions, and attitudes. The interviews were transcribed verbatim and analyzed using thematic analysis (Vaismoradi et al., 2016), a method that allows for the systematic identification and interpretation of recurring patterns of meaning within qualitative data. The analytic process involved several stages. Initially, the researchers engaged in familiarization with the data through repeated readings of the transcripts. Data were coded using a combination of descriptive and empirically derived codes (Willig & Rogers, 2017). To ensure validity, coding for every research question was reviewed by a second researcher. A high degree of inter-coder agreement was achieved, strengthening the reliability of the findings (Castleberry & Nolen, 2018; Hemmler et al., 2022). Subsequently, line-by-line coding was applied (Williams & Moser, 2019) to capture key concepts, expressions, and patterns relevant to the research questions. These codes were then grouped into preliminary sub-themes, which were refined through iterative comparison and constant data referencing. The final stage involved organizing the sub-themes into six overarching themes, which encapsulated the most salient dimensions of the participants’ narratives. This multi-stage process facilitated a nuanced understanding of journalists’ views on the integration of artificial intelligence in their professional routines, revealing both practical implications and underlying emotional responses.
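The text reports a high degree of inter-coder agreement without specifying the statistic used. Cohen’s kappa is one common chance-corrected measure of agreement between two coders; the sketch below is purely illustrative (the category labels in the usage example are hypothetical and do not come from this study):

```python
def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: chance-corrected agreement between two coders.

    codes_a and codes_b are equal-length lists of the category labels
    each coder assigned to the same sequence of coded segments.
    """
    n = len(codes_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement if both coders labeled at random
    # according to their own marginal category frequencies
    categories = set(codes_a) | set(codes_b)
    expected = sum(
        (codes_a.count(c) / n) * (codes_b.count(c) / n) for c in categories
    )
    if expected == 1:  # both coders used a single identical category
        return 1.0
    return (observed - expected) / (1 - expected)
```

Perfect agreement yields kappa = 1.0, while agreement no better than chance yields 0; values above roughly 0.8 are typically read as strong agreement.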
To protect participants’ anonymity and confidentiality, all identifying information was removed from the dataset. Each participant was assigned a unique alphanumeric code consisting of the letter P followed by a number (e.g., P.1 refers to Participant 1). Only the research team had access to the code list linking identifiers to participants.

2.3. Research Questions

According to the principles of mixed methods research (Creswell et al., 2003) and the explanatory sequential design (Subedi, 2016), there are two main approaches to formulating research questions. The first involves developing a single, overarching research question that encompasses all qualitative and quantitative sub-questions. The second approach entails the separate formulation of research questions for the qualitative phase, the development of hypotheses for the quantitative phase, and, ultimately, a central research question that guides the overall mixed methods inquiry. In the present study, the second approach was adopted, as it aligned with the type of mixed methodology employed, where the two phases of this research were conducted sequentially rather than concurrently.
In parallel, in qualitative research, research questions may initially be broader and become more refined as the research process unfolds (Agee, 2009). More specifically, while the formulation of research questions is essential for conducting empirical investigation, the researcher retains the flexibility to revise and reshape these questions throughout the course of the study. This adaptive process allows the research to remain responsive to emerging data and insights.

2.4. Central Question

What are the perceptions and experiences of journalists regarding the integration of artificial intelligence (AI) in their profession?

2.5. Research Hypothesis

H1. 
Journalists’ self-reported skill levels will be significantly higher in traditional journalistic competencies than in data-centric domains.
H2. 
Journalists’ overall attitudes toward the use of artificial intelligence in journalism will be significantly positive.
H3. 
Journalists’ attitudes toward the combined use of artificial intelligence and big data in journalism will be significantly positive.
H4. 
Journalists with lower self-rated AI skills will express more negative attitudes towards the adoption of AI in journalism.

2.6. Research Questions in Qualitative Research

RQ1. 
What ethical concerns do journalists associate with the use of AI in newsrooms?
RQ2. 
What emotions do journalists experience in response to the introduction of artificial intelligence in the field of journalism?
RQ3. 
How do journalists define the distinct roles and responsibilities of AI versus human journalists?

3. Results

In this section, the findings of the quantitative and qualitative research are presented, beginning with the results of the quantitative phase, which was conducted first.

3.1. Quantitative Analysis

The sample of the questionnaire consisted of 148 journalists (53.4% male, 45.9% female). The majority held a university diploma (39.9%) or had post-secondary education (33.8%), and most reported working for a single organization (55.4%). The age distribution was relatively even across cohorts from 18 to 55 years old and above (Table 1) with most of the participants being from 45 to 54 years old (33.1%).
This study’s instrumentation incorporated scales derived from The State of Data Journalism surveys fielded in 2021 and 2022. Three primary constructs were measured. First, participants’ professional competence was assessed through two parallel nine-item indices. The first index gauged the self-reported skill level across fundamental domains, including traditional journalism, machine learning, statistics, and data mining, by asking, “How would you describe your skill level in each of the following areas?” Responses were captured on a five-point proficiency scale ranging from Beginner to Expert, with an additional option for “I don’t have this skill.” The second index measured “completed training” within these identical domains using a parallel structure and the same response anchors.
Second, attitudinal measures were evaluated using 16 items divided into two distinct subscales. Nine items probed the perceptions of the “positive impacts” of artificial intelligence in journalism (e.g., “will help reduce human error in the professional process” and “will help me personally to earn more money”), while a separate seven-item subscale assessed apprehensions regarding its “negative consequences” (e.g., “will lead to job losses”, “will raise ethical issues regarding personal data”, “will cause risks of misinformation due to algorithmic rather than journalistic choices”). Finally, a third construct measured attitudes toward technological integration by adapting the nine positive-impact items to specifically evaluate the combined use of artificial intelligence and big data. All attitudinal items were rated on a standard five-point Likert scale from “Strongly Disagree” to “Strongly Agree”. Once the questions were separated by concept, a statistical analysis of Cronbach’s alpha was performed.
Cronbach’s alpha (α) is a fundamental metric for assessing the internal consistency reliability of multi-item scales in quantitative research, with values ranging from 0 to 1. An alpha value of ≥0.70 is generally considered acceptable, indicating that the items within a scale cohesively measure the same underlying construct, while values below this threshold suggest poor item interrelatedness and potential unreliability. Reporting Cronbach’s alpha is critical for validating the robustness of composite scores used in subsequent analyses, as low reliability can undermine the validity of statistical conclusions. For this questionnaire, all scales yielded high Cronbach’s α values (Table 2).
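As an illustration of the calculation underlying Table 2, Cronbach’s α can be computed as α = (k / (k − 1)) × (1 − Σ item variances / variance of total scores), where k is the number of items. The following minimal Python sketch uses invented toy Likert responses, not the study’s dataset:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)                      # number of items in the scale
    n = len(items[0])                   # number of respondents
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Toy data: three Likert items answered by five respondents
items = [
    [4, 5, 3, 4, 5],   # item 1
    [4, 4, 3, 5, 5],   # item 2
    [3, 5, 2, 4, 4],   # item 3
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # well above the 0.70 acceptability threshold
```

Because the items in the toy data move together across respondents, the resulting α exceeds the conventional 0.70 threshold; poorly interrelated items would pull the ratio of item variances to total variance toward 1 and drive α down.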
Descriptive statistics for self-reported knowledge and skills revealed a notable pattern (Table 3). Participants reported high proficiency in traditional Journalism Skills (M = 4.23; SD = 0.71) and strong conceptual knowledge of artificial intelligence (M = 4.20; SD = 0.78). Knowledge of big data was also reasonably high (M = 3.64; SD = 0.85). In contrast, practical, technical skills were rated markedly lower: Data Analysis Skills were moderate (M = 3.12; SD = 0.92), and Machine Learning Skills were the lowest-rated competency (M = 2.54; SD = 1.01). This disparity points to a clear gap between conceptual understanding and practical application: while journalists are conceptually aware of the key technological trends transforming their field, a substantial skills gap exists in the practical application of data science and AI techniques.
To examine the attitudes of journalists towards the perceived effects of artificial intelligence, the researchers employed one-sample t-tests against a neutral midpoint value of 3 on the Likert scale (Table 4). This analytical choice allowed for the determination of whether the mean scores for each attitudinal construct significantly deviated from a position of theoretical ambivalence. The results revealed that journalists held statistically significant, mildly positive attitudes towards the perceived benefits of AI (M = 3.39; SD = 0.81) and its combination with big data (M = 3.54; SD = 0.79). Conversely, and with a larger effect size, respondents also expressed significant agreement with statements regarding the perceived negative effects of AI (M = 3.60; SD = 0.76). The large Cohen’s d values, particularly for the negative effects scale (d = 0.83), indicate that these departures from neutrality were not only statistically significant but also substantively meaningful. This pattern of findings suggests a nuanced and dual-faceted attitude among journalists, who simultaneously acknowledge both the promising applications and the significant risks associated with the integration of AI into their profession.
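To make the procedure concrete, the t statistic and Cohen’s d for a one-sample test against the neutral midpoint can be recovered from summary statistics alone: t = (M − 3) / (SD / √n) and d = (M − 3) / SD. The Python sketch below applies these formulas to the reported values for the negative-effects scale (M = 3.60, SD = 0.76, n = 148); the results approximate the published figures, with small discrepancies attributable to rounding in the reported means and standard deviations:

```python
import math

def one_sample_t_from_summary(mean, sd, n, mu0=3.0):
    """One-sample t statistic and Cohen's d against a fixed midpoint mu0."""
    t = (mean - mu0) / (sd / math.sqrt(n))   # standard error in the denominator
    d = (mean - mu0) / sd                    # standardized deviation from mu0
    return t, d

# Reported summary statistics for the "negative effects" scale
t, d = one_sample_t_from_summary(mean=3.60, sd=0.76, n=148)
print(round(t, 2), round(d, 2))
```

Both statistics divide the same deviation from the midpoint by different denominators: the standard error (for significance) versus the raw standard deviation (for effect size), which is why a large sample can make t very large while d stays moderate.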
A paired-samples t-test was also conducted to compare journalists’ self-rated technical skill level with their negative attitudes towards AI, regarding aspects such as job losses, negative changes in the practice of the profession, ethical issues regarding personal data, risks of misinformation due to algorithmic rather than journalistic choices, and alterations of journalist–audience communication in negative ways. The analysis revealed a statistically significant disparity between these two variables, t(147) = 9.06, p < 0.001. The mean score for negative attitudes towards AI (M = 3.60; SD = 0.73) was substantially higher than the mean for the self-rated skill level (M = 2.81; SD = 0.79). The magnitude of this difference was large (Cohen’s d = 0.74), indicating a robust and meaningful effect.
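The paired-samples computation can be sketched as follows. Unlike the one-sample case, it operates on per-respondent difference scores, so raw paired observations are needed; the scores below are invented toy data, not the study’s dataset, and the resulting statistics are illustrative only:

```python
import math

def paired_t(x, y):
    """Paired-samples t-test: returns (t, df, Cohen's d) for x - y."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # sample standard deviation of the per-respondent differences
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1, mean_d / sd_d   # Cohen's d for paired data: mean diff / SD of diffs

# Toy paired scores: negative-attitude rating vs. self-rated skill, per respondent
negative = [3.8, 3.5, 4.0, 3.2, 3.9, 3.6]
skill    = [2.9, 2.5, 3.1, 2.8, 3.0, 2.6]
t, df, d = paired_t(negative, skill)
```

The key design point is that pairing removes between-respondent variability: the test asks whether each individual’s attitude score systematically exceeds that same individual’s skill score, rather than comparing two independent group means.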
This finding suggests a critical dissonance within the journalistic cohort. Journalists report a pronounced level of concern about the potential negative consequences of AI, such as job displacement, ethical issues, or a loss of professional control, while simultaneously rating their own competencies in relevant technical areas like data analysis and machine learning as comparatively low. This misalignment can be interpreted through the lens of perceived threat versus perceived preparedness. The high level of concern may stem from a feeling of being unprepared for a technological shift that is perceived as disruptive and inevitable. The significant skill gap identified here may not only hinder the adoption of beneficial AI tools but also exacerbate anxieties about the future of the profession, as journalists might feel they lack the agency or expertise to navigate the changing landscape effectively. This result underscores a pressing need for targeted educational initiatives and training programs designed not only to upskill journalists but also to actively manage the apprehensions that accompany technological transformation.
The significant negative attitudes towards AI, juxtaposed with low self-rated skills, create a climate of uncertainty within the profession. This dissonance likely fuels the demand for targeted educational support, a need explicitly voiced by the journalists themselves and also documented in other research (Sarrionandia et al., 2025). Their preference for biannual seminars points to a desire for continuous learning rather than one-off interventions, reflecting an understanding that adapting to AI is an ongoing process (Table 5). Addressing this expressed need is not merely a logistical step but a crucial intervention to build confidence, reduce perceived threats, and empower journalists to engage with AI tools effectively and critically.

3.2. Qualitative Analysis

3.2.1. Theme 1: AI as a Tool for Journalistic Enhancement

According to the journalists interviewed in this study, the integration of artificial intelligence (AI) into their profession has brought measurable improvements in both efficiency and news production capabilities. Participants consistently described AI as a critical tool that enables the real-time processing and dissemination of information—capabilities they consider essential in today’s fast-paced digital news environment. Many emphasized how AI accelerates their news production workflows while simultaneously enhancing data analysis capabilities, allowing them to manage large datasets more effectively and deliver more precise, timely reporting. Several interviewees specifically noted AI’s supportive role in journalistic workflows, citing its applications in automated image gathering, transcription services, and audience-tailored content delivery as key factors in creating more personalized and accessible news experiences for their audiences.
P.1: “Artificial intelligence is an important tool in the journalist’s arsenal”.
P.3: “The use of artificial intelligence offers advantages such as speeding up news production, adapting content to platform requirements, enhancing data analysis tools, detecting fake news, assessing the reliability of sources, and identifying valuable content by providing information cross-checking tools”.

3.2.2. Theme 2: Ethical and Quality Concerns

The integration of AI in journalism raises significant ethical and quality concerns, as identified by research participants. A primary issue involves the ethical implications of AI-generated content, particularly regarding algorithmic objectivity and user privacy. Participants expressed apprehension about the potential for AI to facilitate the spread of fake news and compromise the authenticity of journalistic content, which could erode public trust. Additionally, concerns were raised about algorithmic bias reinforcing polarization or inadvertently promoting misinformation through popularity-based content selection.
P.1: “The advancement of artificial intelligence has given rise to ethical concerns, particularly regarding the production of content that, while factually inaccurate, possesses a convincing appearance of authenticity”.
P.4: “The ethical issues include the objectivity of algorithms, the management of misinformation, and the potential exploitation of artificial intelligence for political or commercial manipulation”.

3.2.3. Theme 3: Threats and Job Security

The increasing integration of AI technologies in newsrooms has sparked significant concerns among journalists regarding occupational stability. Multiple studies document pervasive anxieties about job displacement, particularly for roles involving routine news production tasks that are increasingly susceptible to automation. While AI demonstrates remarkable capabilities in data processing and content generation, journalists consistently emphasize its fundamental limitations in replicating human judgment, ethical reasoning, and creative storytelling—core competencies that define quality journalism. Perhaps most critically, participants warn of the potential erosion of the human dimension in news production, where algorithmic efficiency may come at the expense of the nuanced understanding, emotional intelligence, and creative problem-solving that journalists bring to their work. These findings suggest that the journalism profession faces not merely technological disruption but a fundamental renegotiation of its value proposition in the digital age.
P.2: “A fundamental characteristic of journalists is their critical thinking, adaptability, and perceptiveness. These qualities originate exclusively from the human mind, therefore, beyond technical considerations, the human intellect will always remain indispensable in reporting”.
P.4: “The use of artificial intelligence may give rise to concerns regarding job reductions, particularly in sectors involving the automation of content production”.
P.6: “Indeed, I foresee that the use of artificial intelligence will reduce employment opportunities, particularly for individuals without specialized skills.”

3.2.4. Theme 4: Journalists’ Emotions About AI

The integration of AI in journalism evokes complex emotional responses among media professionals, reflecting both optimism and apprehension. Many journalists express enthusiasm about AI’s potential to streamline workflows and generate time savings, with some reporting genuine excitement about its capabilities. However, this optimism is tempered by persistent fears about job security and concerns over growing dependence on algorithmic systems. A strong undercurrent of caution emerges from reports of AI errors and the recognized need for vigilant human oversight. Notably, a significant contingent views AI as a collaborative tool rather than a threat, maintaining optimism about its potential to augment rather than replace human journalism when properly implemented. This emotional duality suggests that newsroom AI adoption is as much a psychological adaptation as it is a technological one.
P.5: “Mixed feelings, encompassing both excitement and concern”.
P.3: “The application of artificial intelligence in a professional context primarily provides me with convenience and a sense of relief”.

3.2.5. Theme 5: Human vs. AI Roles in Journalism

The discourse surrounding AI in journalism consistently reaffirms the indispensable role of human professionals in the news ecosystem. The participants in this study underscore that human intelligence remains irreplaceable for core journalistic functions, particularly in complex decision-making and contextual analysis. Journalists emphasize that quintessentially human qualities, including critical thinking, emotional intelligence, and ethical judgment, constitute the foundation of quality journalism and cannot be replicated by algorithmic systems. The prevailing professional consensus positions AI as a supportive tool rather than a replacement, advocating for technological systems that enhance rather than supplant human journalists’ capabilities. This paradigm suggests an emerging division of labor where AI handles computational tasks, while humans focus on higher-order journalistic responsibilities that require creativity, moral reasoning, and nuanced understanding.
P.3: “Artificial intelligence represents a pivotal instrument for journalists, enhancing their workflow and offering substantial supplementary support, while not supplanting their distinctive narrative voice, critical judgment, or interpretative perspective on issues.”

3.2.6. Theme 6: Journalists’ Perceptions of Audience Shifts and AI’s Role

Participants identified a fundamental shift in news consumption, noting that younger audiences are increasingly skeptical of legacy media and prefer to obtain news from social platforms. They perceive this shift as being driven by a demand for concise, visually engaging content. In response, journalists view AI-powered tools primarily as a necessary adaptation to engage these audiences on their preferred terms, focusing on personalized content delivery to recapture attention.
P.1: “Young people are generally negative towards the media and journalists, so they get their news from social media”.
P.2: “Short and concise videos serve as a major attraction for younger generations”.
P.4: “The application of artificial intelligence facilitates a responsive and personalized news experience, which can be strategically aligned with the consumption habits of younger audiences. Algorithmic systems can support the creation of content that reflects user interests through innovative and interactive formats, thereby promoting novel pathways for news engagement.”

4. Discussion

AI technologies, like any other form of technology, cannot operate without the intervention of humans. Journalistic work is not an exception, and it is our firm belief that through the association between human and non-human actants, a solid network is created. We agree with Primo and Zago (2014, p. 49) that “humans and non-humans constitute a hybrid collective” that contributes to journalistic processes. Although journalists seem to consider technology in terms of competitiveness (Wu et al., 2018), within the sociotechnical ecology that is created, both journalists and algorithms hold unavoidable roles (Diakopoulos, 2019) and co-create to provide news in the sense that they are involved at diverse stages of the production of news narratives. ANT, through studying “who and what is involved and how entities connect” (Wiard, 2019, p. 1), provides a theoretical lens to understand journalism as a networked practice.
This study explored how journalists perceive and experience the integration of artificial intelligence (AI) in their profession through a mixed methods design. Beyond the participants’ daily routine practices, the authors aimed to interpret the notion of journalism itself through the prism of diverse and heterogeneous AI technologies, considering the ethical dilemmas that emerge from professional values and the personal concerns caused by uncertainty.
The quantitative results revealed a clear skills gap: while journalists report high proficiency in traditional competencies and conceptual knowledge of AI, their technical expertise in areas such as machine learning and data analysis remains limited (H1). This aligns with Kostarella et al. (2025), who argue that although Greece is not currently at the forefront of AI adoption and development in newsrooms, practitioners acknowledge its transformative potential for journalistic routines and content. The interviews nuance this finding, showing that journalists experience this gap not only as a practical limitation but also as a source of anxiety about job security and professional autonomy (RQ1), consistent with international findings showing that perceived unpreparedness fuels apprehension toward AI adoption (Fernández-Sánchez et al., 2024). Over time, however, these journalists came to recognize the integration of AI not as a displacement of their work but as a transformation of professional practices, noting the need for education and training in AI.
This disparity correlates strongly with heightened negative attitudes toward AI (H4), suggesting that the perceived lack of preparedness fuels anxieties about disruption, job loss, and the loss of professional autonomy. At the same time, journalists recognize the potential benefits of AI, particularly when combined with big data, to enhance efficiency, improve content personalization, and support fact-checking processes (H2, H3). The participants articulate a consistent distinction between AI’s computational strengths and the irreplaceable human qualities, including critical thinking, ethical judgment, and narrative creativity, that define journalism as a public-serving profession (RQ3). AI is embraced as a collaborative partner, with enthusiasm expressed about its potential to streamline workflows and generate time savings. As Schapals and Porlezza (2020) similarly observe, journalists, rather than viewing automation as a threat, perceive it as relieving them of routine, monotonous tasks, thereby enabling greater focus on more meaningful, in-depth stories that they considered resistant to automation.
On the other hand, as the interviews revealed, AI’s growing role in journalism raises ethical and quality concerns, including risks of misinformation, algorithmic bias, and compromised authenticity, which may undermine public trust in news (RQ2), a finding also present in the research of Peña-Alonso et al. (2025). However, journalists also emphasized media literacy as a key dimension of AI integration, especially when it comes to effectively reaching digital-native consumers who favor personalized content, in order to sustain informed civic engagement amid information fragmentation.
News production consists of dynamic assemblages involving multiple actors, not all of whom are journalists. ANT helps to understand the interactions between actors when they collaborate in solving problems (Ryfe, 2022). In addition, there are tasks that are skills-based and more amenable to automation, and tasks that are knowledge-based that allow journalists to work more quickly; thus, advances will work in close alignment with core human tasks (Diakopoulos, 2019). Taken together, the research findings underscore that AI in journalism is not merely a technical innovation but a socio-professional transformation. Journalists’ mixed perceptions reflect both optimism about AI’s supportive potential and apprehension about its unintended consequences. Journalists, having navigated the challenges of digital news adoption, can similarly influence the tensions arising from algorithmic automation in newsrooms, which exist in a “grey area” between external pressures and internal autonomy (Danzon-Chambaud & Cornia, 2023).
Importantly, this study highlights the need for ongoing professional development. Training initiatives, rather than being generic, can narrow the technical skills gap while also mitigating fears of obsolescence. They should be directly responsive to journalists’ reported gaps and concerns, such as those related to data analysis and machine learning in our research. The urgent necessity for continuous, structured training in new media practices tailored to journalists’ needs in today’s platform-driven media landscape is also supported by Zervakaki et al. (2025). Furthermore, training in AI literacy could help in understanding and appreciating its normative dimension (Deuze & Beckett, 2022), thus also clarifying the ethical decision-making in AI use in addition to technical understanding (Sarrionandia et al., 2025). By providing journalists with both technical skills and ethical frameworks, newsrooms can mitigate risks, improve content verification, and enhance confidence in AI-assisted workflows (Peña-Alonso et al., 2025; Wu, 2025).
Journalism differs substantially from content creation due to the values connected to the profession and its responsibility toward the public sphere. This, however, does not change the fact that in the current digital world the media environment has been reshaped to meet individualized audience needs, following business and market logic and demanding new technological skillsets. The challenge is not whether AI will replace journalists but how the profession will renegotiate roles between human and non-human actors.
As artificial intelligence technologies become increasingly integrated into journalism practice, influencing the profession in various ways, they stimulate new approaches for evaluating their interdependency with human actors (Zamith & Haim, 2020). Journalists in this evolving environment must cultivate new skills to understand and manage these changes while also collaborating with boundary professionals in what has been described as a “fluidly changing journalism profession” (Belair-Gagnon & Holton, 2018).
From an Actor–Network Theory perspective, AI remains a “matter of concern” (Venturini, 2010): its role in journalism is not yet stabilized, and its integration generates both opportunities and controversies. This research examines how AI technologies and the professionals working with them integrate into journalism, highlighting potential pathways for introducing new rules and practices. Such approaches aim to ensure that these technologies and their associated sociocultural influences are not simply absorbed into traditional roles and processes in ways that normalize existing structures, but instead, through parametrization, open possibilities for fundamental transformation while fully respecting the values of the profession. The role of education here is crucial, and an approach to teaching AI in journalism could involve conceptual courses that focus on AI’s strengths, weaknesses, processes, ethical issues, and applications, without requiring coding (Broussard et al., 2019). Ultimately, sustaining journalism’s democratic function in this context depends on renegotiating roles between human and non-human actors while empowering journalists with knowledge, ethical frameworks, and institutional support to shape how these technologies are deployed.
This study offered an exploratory view of how Greek journalists perceive and experience AI, highlighting both enthusiasm for its potential and ambivalence regarding its risks. The main limitations of this research are the local scope of the sample and the small number of participants, along with the reliance on self-reported measures, which restrict the generalizability of the findings. Future research involving boundary professionals could provide valuable insights into potential forms of collaboration aimed at improving and enriching news production. In addition, studies focusing on audiences could shed light on how they perceive the use of AI technologies in journalistic practice.

Author Contributions

Conceptualization, M.M.; methodology, Z.P. and M.M.; software, Z.P.; validation, Z.P.; formal analysis, Z.P.; investigation, M.M.; data curation, Z.P.; writing—original draft preparation, M.M. and Z.P.; writing—review and editing, M.M. and Z.P.; visualization, Z.P.; supervision, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of The University of Western Macedonia (protocol code 14/2024 and 4/10/2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are available upon request.

Acknowledgments

The authors would like to thank all the participants in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
NLG	Natural Language Generation
WAN-IFRA	World Association of News Publishers

References

  1. Agee, J. (2009). Developing qualitative research questions: A reflective process. International Journal of Qualitative Studies in Education, 22(4), 431–447.
  2. Beckett, C. (2019, November 18). New powers, new responsibilities: A global survey of journalism and artificial intelligence. Polis, London School of Economics and Political Science. Available online: https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/ (accessed on 20 July 2025).
  3. Belair-Gagnon, V., & Holton, A. E. (2018). Boundary work, interloper media, and analytics in newsrooms. Digital Journalism, 6(4), 492–508.
  4. Broussard, M. (2014). Artificial intelligence for investigative reporting: Using an expert system to enhance journalists’ ability to discover original public affairs stories. Digital Journalism, 3(6), 814–831.
  5. Broussard, M., Diakopoulos, N., Guzman, A. L., Abebe, R., Dupagne, M., & Chuan, C. H. (2019). Artificial intelligence and journalism. Journalism & Mass Communication Quarterly, 96(3), 673–695.
  6. Castleberry, A., & Nolen, A. (2018). Thematic analysis of qualitative research data: Is it as easy as it sounds? Currents in Pharmacy Teaching and Learning, 10(6), 807–815.
  7. Clark, T., Foster, L., Bryman, A., & Sloan, L. (2021). Bryman’s social research methods. Oxford University Press.
  8. Crawford, T. (2020, September 28). Actor-network theory. In Oxford Research Encyclopedia of Literature. Available online: https://oxfordre.com/literature/view/10.1093/acrefore/9780190201098.001.0001/acrefore-9780190201098-e-965 (accessed on 30 July 2025).
  9. Creswell, J. W., Plano Clark, V. L., Gutmann, M. L., & Hanson, W. E. (2003). Advanced mixed methods research designs. In Handbook of mixed methods in social and behavioral research (pp. 209–240).
  10. Creswell, J. W., & Poth, C. N. (2016). Qualitative inquiry and research design: Choosing among five approaches. Sage Publications.
  11. Danzon-Chambaud, S., & Cornia, A. (2023). Changing or reinforcing the “rules of the game”: A field theory perspective on the impacts of automated journalism on media practitioners. Journalism Practice, 17(2), 174–188.
  12. Datajournalism.com. (2021). The state of data journalism 2021. Available online: https://datajournalism.com/survey/2021/ (accessed on 10 July 2023).
  13. Datajournalism.com. (2022). The state of data journalism 2022. Available online: https://datajournalism.com/survey/2022/ (accessed on 10 July 2023).
  14. de-Lima-Santos, M. F., & Ceron, W. (2021). Artificial intelligence in news media: Current perceptions and future outlook. Journalism and Media, 3(1), 13–26.
  15. Deuze, M., & Beckett, C. (2022). Imagination, algorithms and news: Developing AI literacy for journalism. Digital Journalism, 10(10), 1913–1918.
  16. Diakopoulos, N. (2013, December). Algorithmic accountability reporting: On the investigation of black boxes. Tow Center for Digital Journalism. Available online: https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2 (accessed on 20 July 2025).
  17. Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press.
  18. Dierickx, L., Opdahl, A. L., Khan, S. A., Lindén, C. G., & Guerrero Rojas, D. C. (2024). A data-centric approach for ethical and trustworthy AI in journalism. Ethics and Information Technology, 26(4), 64.
  19. Domingo, D., & Wiard, V. (2016). News networks. In The SAGE handbook of digital journalism (pp. 397–409). SAGE Publications Ltd.
  20. Dörr, K. N. (2016). Mapping the field of algorithmic journalism. Digital Journalism, 4(6), 700–722.
  21. Dörr, K. N., & Hollnbuchner, K. (2017). Ethical challenges of algorithmic journalism. Digital Journalism, 5(4), 404–419.
  22. Fernández-Sánchez, A., Lorenzo-Castiñeiras, J. J., & Sánchez-Bello, A. (2024). Navigating the future of pedagogy: The integration of AI tools in developing educational assessment rubrics. European Journal of Education, 60(1), e12826.
  23. Gignac, G. E., & Szodorai, E. T. (2016). Effect size guidelines for individual differences researchers. Personality and Individual Differences, 102, 74–78.
  24. Hemmler, V. L., Kenney, A. W., Langley, S. D., Callahan, C. M., Gubbins, E. J., & Holder, S. (2022). Beyond a coefficient: An interactive process for achieving inter-rater consistency in qualitative coding. Qualitative Research, 22(2), 194–219.
  25. Ivankova, N. V., Creswell, J. W., & Stick, S. L. (2006). Using mixed-methods sequential explanatory design: From theory to practice. Field Methods, 18(1), 3–20.
  26. Kalfeli, P., & Angeli, C. (2025). The intersection of AI, ethics, and journalism: Greek journalists’ and academics’ perspectives. Societies, 15(2), 22.
  27. Karadimitriou, A. (2020). Journalistic professionalism in Greece: Between chronic and acute crises. In The Emerald handbook of digital media in Greece (pp. 159–178). Emerald Publishing Limited.
  28. Kaur, P., Stoltzfus, J., & Yellapu, V. (2018). Descriptive statistics. International Journal of Academic Medicine, 4(1), 60.
  29. Kostarella, I., Saridou, T., Dimoulas, C., & Veglis, A. (2025). Can artificial intelligence (AI) spring an oasis to the local news deserts? Journalism Practice, 1–21.
  30. Kotenidis, E., & Veglis, A. (2021). Algorithmic journalism—Current applications and future perspectives. Journalism and Media, 2(2), 244–257.
  31. Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.
  32. Latour, B. (2005). Reassembling the social: An introduction to actor-network theory. Oxford University Press.
  33. Law, J. (1999). After ANT: Topology, naming and complexity. The Sociological Review, 47(S1), 1–14.
  34. Lewis, S. C., Guzman, A. L., & Schmidt, T. R. (2019). Automation, journalism, and human–machine communication: Rethinking roles and relationships of humans and machines in news. Digital Journalism, 7(4), 409–427.
  35. Lewis, S. C., & Usher, N. (2016). Trading zones, boundary objects, and the pursuit of news innovation: A case study of journalists and programmers. Convergence, 22(5), 543–560.
  36. Lewis, S. C., & Westlund, O. (2015). Actors, actants, audiences, and activities in cross-media news work: A matrix and a research agenda. Digital Journalism, 3(1), 19–37.
  37. Lindén, C. G. (2020). What makes a reporter human? Questions de Communication, 37(1), 337–351.
  38. Lindén, C.-G., Tuulonen, H., Bäck, A., Diakopoulos, N., Granroth-Wilding, M., Haapanen, L., Leppänen, L., Melin, M., Moring, T., Munezero, M., Sirén-Heikel, S., Södergård, C., & Toivonen, H. (2019). News automation: The rewards, risks and realities of “machine journalism”. Available online: https://cris.vtt.fi/en/publications/news-automation-the-rewards-risks-and-realities-of-machine-journl/ (accessed on 10 June 2025).
  39. Min, S. J., & Fink, K. (2021). Keeping up with the technologies: Distressed journalistic labor in the pursuit of “shiny” technologies. Journalism Studies, 22(14), 1987–2004.
  40. Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340–360.
  41. Nawararthne, D., & Storni, C. (2023). Black-boxing journalistic chains, an actor-network theory inquiry into journalistic truth. Journalism Studies, 24(13), 1629–1650.
  42. Newman, N. (2024). Journalism, media, and technology trends and predictions 2024. The Reuters Institute for the Study of Journalism. Available online: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2024-01/Newman%20-%20Trends%20and%20Predictions%202024%20FINAL.pdf (accessed on 10 June 2025).
  22. Fernández-Sánchez, A., Lorenzo-Castiñeiras, J. J., & Sánchez-Bello, A. (2024). Navigating the future of pedagogy: The integration of AI tools in developing educational assessment rubrics. European Journal of Education, 60(1), e12826. [Google Scholar] [CrossRef]
  23. Gignac, G. E., & Szodorai, E. T. (2016). Effect size guidelines for individual differences researchers. Personality and Individual Differences, 102, 74–78. [Google Scholar] [CrossRef]
  24. Hemmler, V. L., Kenney, A. W., Langley, S. D., Callahan, C. M., Gubbins, E. J., & Holder, S. (2022). Beyond a coefficient: An interactive process for achieving inter-rater consistency in qualitative coding. Qualitative Research, 22(2), 194–219. [Google Scholar] [CrossRef]
  25. Ivankova, N. V., Creswell, J. W., & Stick, S. L. (2006). Using mixed-methods sequential explanatory design: From theory to practice. Field Methods, 18(1), 3–20. [Google Scholar] [CrossRef]
  26. Kalfeli, P., & Angeli, C. (2025). The intersection of ai, ethics, and journalism: Greek journalists’ and academics’ perspectives. Societies, 15(2), 22. [Google Scholar] [CrossRef]
  27. Karadimitriou, A. (2020). Journalistic professionalism in Greece: Between chronic and acute crises. In The Emerald handbook of digital media in Greece (pp. 159–178). Emerald Publishing Limited. [Google Scholar] [CrossRef]
  28. Kaur, P., Stoltzfus, J., & Yellapu, V. (2018). Descriptive statistics. International Journal of Academic Medicine, 4(1), 60. [Google Scholar] [CrossRef]
  29. Kostarella, I., Saridou, T., Dimoulas, C., & Veglis, A. (2025). Can Artificial Intelligence (AI) Spring an Oasis to the Local News Deserts? Journalism Practice, 1–21. [Google Scholar] [CrossRef]
  30. Kotenidis, E., & Veglis, A. (2021). Algorithmic journalism—Current applications and future perspectives. Journalism and Media, 2(2), 244–257. [Google Scholar] [CrossRef]
  31. Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press. [Google Scholar]
  32. Latour, B. (2005). Reassembling the social: An introduction to actor-network theory. Oxford University Press. [Google Scholar]
  33. Law, J. (1999). After ANT: Topology, naming and complexity. The Sociological Review, 47(S1), 1–14. [Google Scholar] [CrossRef]
  34. Lewis, S. C., Guzman, A. L., & Schmidt, T. R. (2019). Automation, journalism, and human–machine communication: Rethinking roles and relationships of humans and machines in news. Digital Journalism, 7(4), 409–427. [Google Scholar] [CrossRef]
  35. Lewis, S. C., & Usher, N. (2016). Trading zones, boundary objects, and the pursuit of news innovation: A case study of journalists and programmers. Convergence, 22(5), 543–560. [Google Scholar] [CrossRef]
  36. Lewis, S. C., & Westlund, O. (2015). Actors, actants, audiences, and activities in cross-media news work: A matrix and a research agenda. Digital Journalism, 3(1), 19–37. [Google Scholar] [CrossRef]
  37. Lindén, C. G. (2020). What makes a reporter human? Questions de Communication, 37(1), 337–351. [Google Scholar] [CrossRef]
  38. Lindén, C.-G., Tuulonen, H., Bäck, A., Diakopoulos, N., Granroth-Wilding, M., Haapanen, L., Leppänen, L., Melin, M., Moring, T., Munezero, M., Sirén-Heikel, S., Södergård, C., & Toivonen, H. (2019). News automation: The rewards, risks and realities of “machine journalism”. Available online: https://cris.vtt.fi/en/publications/news-automation-the-rewards-risks-and-realities-of-machine-journl/ (accessed on 10 June 2025).
  39. Min, S. J., & Fink, K. (2021). Keeping up with the technologies: Distressed journalistic labor in the pursuit of “shiny” technologies. Journalism Studies, 22(14), 1987–2004. [Google Scholar] [CrossRef]
  40. Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340–360. [Google Scholar] [CrossRef]
  41. Nawararthne, D., & Storni, C. (2023). Black-boxing journalistic chains, an actor-network theory inquiry into journalistic truth. Journalism Studies, 24(13), 1629–1650. [Google Scholar] [CrossRef]
  42. Newman, N. (2024). Journalism, media, and technology trends and predictions 2024. The Reuters Institute for the Study of Journalism. Available online: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2024-01/Newman%20-%20Trends%20and%20Predictions%202024%20FINAL.pdf (accessed on 10 June 2025).
  43. Noain-Sánchez, A. (2022). Addressing the impact of artificial intelligence on journalism: The perception of experts, journalists and academics. Communication & Society, 35(3), 105–121. [Google Scholar] [CrossRef]
  44. Palla, Z., & Kostarella, I. (2025). Journalists’ perspectives on the role of artificial intelligence in enhancing quality journalism in Greek local media. Societies, 15(4), 89. [Google Scholar] [CrossRef]
  45. Papathanassopoulos, S., Karadimitriou, A., Kostopoulos, C., & Archontaki, I. (2021). Greece: Media concentration and independent journalism between austerity and digital disruption. In J. Trappel, & T. Tomaz (Eds.), The media for democracy monitor 2021: How leading news media survive digital transformation (pp. 177–230). Nordicom. [Google Scholar]
  46. Parratt Fernández, S., Rodríguez Pallares, M., & Pérez Serrano, M. J. (2024). Artificial intelligence in journalism: An automated news provider. Index Comunicación, 14(1), 183–205. [Google Scholar] [CrossRef]
  47. Peña-Alonso, U., Peña-Fernández, S., & Meso-Ayerdi, K. (2025). Journalists’ perceptions of artificial intelligence and disinformation risks. Journalism and Media, 6(3), 133. [Google Scholar] [CrossRef]
  48. Plano Clark, V. L. (2016). Mixed methods research. The Journal of Positive Psychology, 12(3), 305–306. [Google Scholar] [CrossRef]
  49. Pleios, G., & Tastsoglou, M. (2025). AI and the news: Challenges arisen from the adoption of ai in news production. Postmodernism Problems, 15(1), 3–24. [Google Scholar] [CrossRef]
  50. Podara, A., & Matsiola, M. (2023). How the Greek television landscape changed during the financial crisis. Journal of Media Business Studies, 20(3), 284–301. [Google Scholar] [CrossRef]
  51. Price, L. T., Clark, M., Papadopoulou, L., & Maniou, T. A. (2024). Southern European press challenges in a time of crisis: A cross-national study of Bulgaria, Cyprus, Greece and Malta. Journalism, 25(11), 2420–2439. [Google Scholar] [CrossRef]
  52. Primo, A., & Zago, G. (2014). Who and what do journalism? An actor-network perspective. Digital Journalism, 3(1), 38–52. [Google Scholar] [CrossRef]
  53. Ryfe, D. (2022). Actor-network theory and digital journalism. Digital Journalism, 10(2), 267–283. [Google Scholar] [CrossRef]
  54. Sarrionandia, B., Peña-Fernández, S., Ángel Pérez-Dasilva, J., & Larrondo-Ureta, A. (2025). Artificial intelligence training in media: Addressing technical and ethical challenges for journalists and media professionals. Frontiers in Communication, 10, 1537918. [Google Scholar] [CrossRef]
  55. Schapals, A. K., & Porlezza, C. (2020). Assistance or resistance? Evaluating the intersection of automated journalism and journalistic role conceptions. Media and Communication, 8(3), 16–26. [Google Scholar] [CrossRef]
  56. Sirén-Heikel, S., Kjellman, M., & Lindén, C. G. (2023). At the crossroads of logics: Automating newswork with artificial intelligence—(Re) defining journalistic logics from the perspective of technologists. Journal of the Association for Information Science and Technology, 74(3), 354–366. [Google Scholar] [CrossRef]
  57. Spyridou, L. P., Matsiola, M., Veglis, A., Kalliris, G., & Dimoulas, C. (2013). Journalism in a state of flux: Journalists as agents of technology innovation and emerging news practices. International Communication Gazette, 75(1), 76–98. [Google Scholar] [CrossRef]
  58. Stalph, F. (2019). Hybrids, materiality, and black boxes: Concepts of actor-network theory in data journalism research. Sociology Compass, 13(11), e12738. [Google Scholar] [CrossRef]
  59. Stray, J. (2021). Making artificial intelligence work for investigative journalism. Digital Journalism, 7(8), 1076–1097. [Google Scholar] [CrossRef]
  60. Subedi, D. (2016). Explanatory sequential mixed method design as the third research community of knowledge claim. American Journal of Educational Research, 4(7), 570–577. [Google Scholar] [CrossRef]
  61. Şahin, M., & Aybek, E. (2019). Jamovi: An easy to use statistical software for the social scientists. International Journal of Assessment Tools in Education, 6(4), 670–692. [Google Scholar] [CrossRef]
  62. Tandoc, E. C., Jr. (2019). Journalism at the periphery. Media and Communication, 7(4), 138–143. [Google Scholar] [CrossRef]
  63. Thäsler-Kordonouri, S., & Koliska, M. (2025). Journalistic agency and power in the era of artificial intelligence. Journalism Practice, 19(10), 2189–2208. [Google Scholar] [CrossRef]
  64. Túñez-López, J. M., Fieiras-Ceide, C., & Vaz-Álvarez, M. (2021). Impact of artificial intelligence on journalism: Transformations in the company, products, contents and professional profile. Communication & Society, 34(1), 177–193. [Google Scholar] [CrossRef]
  65. Vaismoradi, M., Jones, J., Turunen, H., & Snelgrove, S. (2016). Theme development in qualitative content analysis and thematic analysis. Journal of Nursing Education and Practice, 6(5), 100–110. [Google Scholar] [CrossRef]
  66. Van Dalen, A. (2024). Revisiting the algorithms behind the headlines. How journalists respond to professional competition of generative AI. Journalism Practice, 1–18. [Google Scholar] [CrossRef]
  67. Van Teijlingen, E., & Hundley, V. (2002). The importance of pilot studies. Nursing Standard, 16(40), 33. Available online: https://www.proquest.com/scholarly-journals/importance-pilot-studies/docview/219814873/se-2 (accessed on 15 June 2025).
  68. Vatikiotis, P., Maniou, T. A., & Spyridou, P. (2024). Towards the individuated journalistic worker in pandemic times: Reflections from Greece and Cyprus. Journalism, 25(11), 2320–2338. [Google Scholar] [CrossRef]
  69. Venturini, T. (2010). Diving in magma: How to explore controversies with actor-network theory. Public Understanding of Science, 19(3), 258–273. [Google Scholar] [CrossRef]
  70. Wiard, V. (2019, May 23). Actor-network theory and journalism. Oxford Research Encyclopedia of Communication. Available online: https://oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-774 (accessed on 19 August 2025).
  71. Williams, M., & Moser, T. (2019). The art of coding and thematic exploration in qualitative research. International Management Review, 15(1), 45–55. [Google Scholar]
  72. Willig, C., & Rogers, W. S. (Eds.). (2017). The SAGE handbook of qualitative research in psychology. Sage Publications. [Google Scholar]
  73. Wu, S. (2025). What “digital literacies” must journalists have? Unpacking how journalists define and practice news literacy, data literacy, and algorithmic and AI literacy in the digital age. Journalism Studies, 1–18. [Google Scholar] [CrossRef]
  74. Wu, S., Tandoc, E. C., Jr., & Salmon, C. T. (2018). Journalism reconfigured: Assessing human–machine relations and the autonomous power of automation in news production. Journalism Studies, 20(10), 1440–1457. [Google Scholar] [CrossRef]
  75. Zamith, R., & Haim, M. (2020). Algorithmic actants in practice, theory, and method. Media and Communication, 8(3), 1–4. [Google Scholar] [CrossRef]
  76. Zervakaki, V., Papathanassopoulos, S., & Karadimitriou, A. (2025). Re-training Greek journalists on new media practices: Expecting re-employment, yet falling short. Journalism. [Google Scholar] [CrossRef]
Table 1. Research demographics.

Gender            Percentage    Age            Percentage
Male              53.4%         18–34          20.3%
Female            45.9%         35–44          23.6%
Not identified    0.7%          45–54          33.1%
                                55 and more    23%

Educational level          Percentage    Working Organizations   Percentage
PhD                        6%            None                    10.1%
Master’s degree            14.2%         One                     55.4%
University diploma         39.9%         Two                     20.9%
Post-secondary education   33.8%         More                    13.5%
High school                6.1%
Table 2. Reliability analysis: Cronbach’s α scores.

Scale                               Cronbach’s α
Skill Level                         0.897
Education Level                     0.861
Perceived Positive Effects of AI    0.890
AI and Big Data Combination         0.820
Perceived Negative Effects of AI    0.861
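The coefficients in Table 2 summarize the internal consistency of each multi-item scale. As a point of reference, Cronbach’s α is computed as α = k/(k − 1) · (1 − Σσ²ᵢ / σ²ₜ), where k is the number of items, σ²ᵢ the variance of each item, and σ²ₜ the variance of the summed scores. A minimal sketch of this formula on synthetic data (the item matrix below is purely illustrative and is not the study’s data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative case: three perfectly correlated items yield alpha = 1.0
scores = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [5, 5, 5]], dtype=float)
print(round(cronbach_alpha(scores), 3))  # → 1.0
```

Values of α above 0.80, as reported for all five scales, are conventionally read as good internal consistency.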
Table 3. Descriptive statistics for knowledge and skills.

Variable                  Mean    SD
Knowledge of AI           4.20    0.78
Knowledge of Big Data     3.64    0.85
Journalism Skills         4.23    0.71
Data Analysis Skills      3.12    0.92
Machine Learning Skills   2.54    1.01

Note: All variables were measured on a 5-point Likert scale.
Table 4. One-sample t-tests for attitudinal measures.

Scale                               Mean (SD)     t (df)         Cohen’s d
Perceived Positive Effects of AI    3.39 (0.81)   5.57 (147) *   0.458
AI and Big Data Combination         3.54 (0.79)   8.09 (147) *   0.665
Perceived Negative Effects of AI    3.60 (0.76)   10.1 (147) *   0.827

* All three scale means differ significantly from the test value (p < 0.05).
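For a one-sample t-test, Cohen’s d and the t statistic are linked by the identity t = d · √n, since d = (M − μ₀)/SD and t = (M − μ₀)/(SD/√n). Assuming the reported d values in Table 4 are one-sample effect sizes of this form, the t-values can be cross-checked directly (the test value μ₀ itself cancels out of the identity):

```python
import math

n = 148  # sample size reported in the study

# (scale, Cohen's d, reported t) taken from Table 4
rows = [
    ("Perceived Positive Effects of AI", 0.458, 5.57),
    ("AI and Big Data Combination",      0.665, 8.09),
    ("Perceived Negative Effects of AI", 0.827, 10.1),
]

for name, d, t_reported in rows:
    # One-sample identity: t = d * sqrt(n)
    t_recomputed = d * math.sqrt(n)
    print(f"{name}: reported t = {t_reported}, recomputed t = {t_recomputed:.2f}")
```

The recomputed values match the reported t statistics to within rounding, which is consistent with the table’s internal arithmetic.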
Table 5. Descriptive statistics for seminars and training programs.

Frequency of Conducted Seminars   Counts   % of Total
Every six months                  60       40.5%
Every year                        70       47.3%
Every three years                 5        3.4%
More than three years             2        1.4%
No need                           11       7.4%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
