Review

Artificial Intelligence in Qualitative Research: Beyond Outsourcing Data Analysis to the Machine

Department of Psychology, Panteion University, 17671 Athens, Greece
Psychol. Int. 2025, 7(3), 78; https://doi.org/10.3390/psycholint7030078
Submission received: 21 July 2025 / Revised: 15 August 2025 / Accepted: 2 September 2025 / Published: 7 September 2025
(This article belongs to the Section Psychometrics and Educational Measurement)

Abstract

This article examines the integration of artificial intelligence (AI) into qualitative psychological research, focusing specifically on AI-assisted data analysis and its epistemological and ethical implications. While recent publications highlight AI’s potential to support analysis, such approaches risk undermining the reflexive, situated, and culturally sensitive foundations of qualitative inquiry. Drawing on relational and social constructionist epistemologies, as well as examining risks inherent in AI technologies, this work critiques the superficial outsourcing of analytical and interpretive processes to AI models. This trend reflects a broader tendency to regard AI as a neutral and objective research tool, rather than as an active participant whose outputs are shaped by, and in turn shape, the social, cultural, and technological contexts in which it operates. An alternative framework is proposed for integrating AI into qualitative inquiry, particularly in psychological research, where data are often sensitive, situated, and ethically complex. A list of best practices is also included and discussed. Key ethical concerns, such as data privacy, related algorithmic affordances, and the need for comprehensive informed consent, are examined. The article concludes with a call to nurture a qualitative research culture that embraces relational and reflective practices alongside a critical and informed use of AI in research.

1. Introduction

In recent years, the rapid rise of generative artificial intelligence (AI) has catalyzed intense debate within the qualitative research community. Large Language Models (LLMs) such as ChatGPT have been promoted for their capacity to accelerate coding and thematic analysis, yet their integration into interpretive work raises critical epistemological, methodological, and ethical questions. Much of the emerging literature focuses on practical demonstrations of AI-assisted coding or on efficiency gains, often framed within a small q, more positivist, orientation to qualitative analysis (Pownall, 2024). Far less attention has been paid to the implications of AI for Big Q traditions, such as interpretive, social constructionist, and critical approaches in which meaning is contextually co-constructed, and where analytical rigor depends on reflexivity, relationality, and situated interpretation (Kidder & Fine, 1987).
Following Snyder (2019) and Grant and Booth (2009), this work is positioned as a conceptual–methodological discussion and state-of-the-art synthesis. This format is suited to emerging research areas where rapid developments and shifting conceptual boundaries render an authoritative systematic review impractical and potentially premature. Drawing on recent literature, epistemological perspectives, and critical commentary, it aims to map an emerging research territory: the integration of AI into qualitative data analysis, with a particular focus on psychological research where data are often sensitive, situated, and ethically complex. This work adopts a critical–reflexive stance informed by relational and social constructionist epistemologies. It treats AI not as a neutral instrument but as a socio-technical actor embedded in wider cultural, institutional, and political dynamics. The analysis is both descriptive, identifying patterns in how AI is currently imagined and used in qualitative research, and normative, proposing principles and practices for aligning AI adoption with the values and commitments of interpretive inquiry. In this way, this paper functions as a methodological provocation and a call for epistemologically informed, value-driven integration of AI into qualitative research.
The article’s contributions are twofold. First, it proposes an alternative framework for engaging AI in qualitative inquiry that resists extractive logics and instead cultivates relational, reflexive, and abductive uses of AI as a heuristic partner. Second, it offers a set of best-practice guidelines for qualitative researchers navigating the opportunities and risks of AI integration, with particular attention to ethical and epistemic integrity in psychologically sensitive contexts.
To orient the reader, this paper is organized as follows. Section 2 examines the current trend toward outsourcing thematic data analysis to AI, outlining the methodological and practical challenges this raises. Section 3 situates these developments within broader epistemological traditions, contrasting extractive and relational logics of inquiry. Section 4 discusses the opacity of AI outputs and the importance of preserving embedded histories in qualitative analysis. Section 5 considers the role of human interpretation against algorithmic “perfectionism” during data analysis, also highlighting the importance of data collection/production. Section 6 addresses ethical concerns, especially informed consent in psychological research with AI. Section 7 presents a curated list of best practices for reflexive, value-driven AI use. Section 8 discusses the limitations of this work. The paper concludes with reflections (Section 9) on the relational ethos of qualitative inquiry and the choices that will shape its future in the era of AI.

2. Outsourcing Thematic Analysis: Methodological and Practical Challenges

Thematic analysis is a widely used method for analyzing qualitative empirical data, allowing researchers to identify, organize, and interpret patterns, or themes, within a given data corpus (Aronson, 1995; Braun & Clarke, 2006; Lochmiller, 2021; Morgan, 2022). Rooted in reflexive and context-sensitive epistemologies, it emphasizes interpretive engagement rather than mechanical coding (Braun et al., 2019). The growing use of AI to facilitate aspects of thematic analysis raises questions about how such tools align with these foundations, particularly regarding the preservation of nuance, reflexivity, and researcher subjectivity. Historically, this process has relied exclusively on deep human engagement and iterative intellectual effort. The process of inductive thematic analysis typically begins with familiarization, where the researcher immerses themselves in the data, reading and re-reading transcripts or texts to gain a deep understanding. Next, initial codes are generated by systematically tagging data segments that appear meaningful, significant, or interesting to the researcher (Chenail, 2012). These codes are revised, and similar ones are grouped together to form wider categories, or themes, at a higher level of abstraction. Cycles of reviewing and refining the themes are repeated to ensure they “accurately” reflect the data. Some themes may be merged, split, or discarded as necessary. The final stage involves reporting the findings by selecting characteristic quotations from the raw empirical data (usually interviews or focus group transcripts, or other textual sources) to illustrate each theme, along with a detailed narrative that interprets and explains the significance of the theme in relation to the research question (Lochmiller, 2021). This process is quite labor-intensive, requiring researchers to navigate the uncertainty and ambiguity of identifying patterns of meaning within vast amounts of textual data, deciding where to focus their attention and which codes to assign to which segments. Empirical data are often messy, filled with overlapping, culturally dependent, and context-specific meanings. The rigor of the process is grounded solely in the researcher’s situated, and inevitably subjective, interpretation and reflexive engagement.
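Although the interpretive work itself is irreducibly human, the products of this workflow (coded segments grouped into themes) can be documented in simple structures. The following Python sketch is purely illustrative, with invented codes, themes, and quotes, and says nothing about how the interpretive judgments are made:

```python
# Purely illustrative record of thematic-analysis products: coded data
# segments grouped into a higher-level theme. All identifiers, codes,
# and quotes are invented; the structure documents outcomes of human
# interpretive judgment, it does not automate that judgment.
from dataclasses import dataclass, field

@dataclass
class CodedSegment:
    transcript_id: str  # which interview the excerpt comes from
    excerpt: str        # the raw data segment
    code: str           # researcher-assigned label

@dataclass
class Theme:
    name: str
    definition: str
    segments: list[CodedSegment] = field(default_factory=list)

segments = [
    CodedSegment("P01", "I never know if the result is really mine.", "ownership_doubt"),
    CodedSegment("P02", "It is fast, but I re-read everything anyway.", "verification_labor"),
]

# Grouping similar codes into a wider theme is a reflexive, iterative
# decision; the data structure merely records where that decision landed.
theme = Theme(
    name="Ambivalent trust in automation",
    definition="Participants oscillate between relying on and double-checking tools.",
    segments=segments,
)
```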
Today, there is a noticeable surge in articles discussing how artificial intelligence can facilitate qualitative inquiry, and especially the coding and thematic analysis of large, or even smaller, corpora of qualitative empirical data, making the process less labor-intensive, more automatic, and possibly more “neutral”, and thus perceived as more objective and trustworthy (Bryda & Sadowski, 2024; Christou, 2023, 2024; Chubb, 2023; De Paoli, 2024; Hitch, 2024; Kasperiuniene & Mazeikiene, 2024; Perkins & Roe, 2024). However, such claims of neutrality are problematic, as they obscure the ways AI systems embed the biases and assumptions present in their training data and design. Far from being objective observers, AI models are active mediators whose outputs reflect particular social, cultural, and technological contexts (McQuillan, 2022). Therefore, treating their analyses as neutral risks reinforcing dominant perspectives and marginalizing less common or context-specific meanings. Moreover, this notion of neutrality is particularly problematic within constructivist and second-order cybernetics qualitative research epistemologies (Brailas, 2025; Gergen, 2009), where the researcher is understood as an active participant in the unfolding of events and in meaning-making. In these traditions, the researcher’s engagement, shaped by their cultural, social, and relational positioning, is not treated as a source of bias to be eliminated but as a valuable resource for generating rich, situated understandings (Camargo-Borges & McNamee, 2022). By contrast, framing AI as a “neutral” analyst risks devaluing the interpretive, contextually embedded contributions that such epistemologies regard as central to the research process.
Paulus et al. (2025) reviewed related published articles and identified several dominant narratives regarding the emerging construction of the role of AI in qualitative research. Pattyn (2024) found that popular generative AI tools such as ChatGPT and Gemini completed categorization tasks with roughly a quarter of the effort and fifteen times the throughput of human coders. These tools also showed, according to the same study, higher inter-coder reliability, with Cohen’s Kappa scores exceeding those of both expert and non-expert human coder conditions. While these figures are drawn from a single study and should not be taken as universally applicable, they are noteworthy for illustrating a broader trend: the growing emphasis on productivity, speed, and measurable performance in qualitative research. Such metrics, while appealing, risk prioritizing efficiency and output quantity over the depth, reflexivity, and contextual richness that qualitative inquiry has sought to preserve until now.
While the recent interest in using AI for thematic data analysis reflects a pragmatic turn toward speed and efficiency, this trajectory risks reducing qualitative inquiry to a process of confirming what we already know. Generative AI models, such as ChatGPT, are trained to optimize for statistical likelihood, not for conceptual novelty. More specifically, LLMs generate each word in a sequence by selecting the most probable continuation based on patterns learned from vast training datasets (McQuillan, 2022). This probabilistic approach tends to reproduce dominant or frequently occurring expressions and associations, which can limit the emergence of less common, unexpected, or conceptually innovative themes in qualitative analysis. The goal of LLMs is to reproduce what is expected. As such, they often act less like researchers and more like model students, echoing the most probable response drawn from a corpus of prior texts (Dröge, 2025; Wolf, 2025). This behavior has been referred to as the “proving the obvious” problem (Dröge, 2025). When AI produces thematic summaries that mirror textbook definitions or predictable insights, for example, it may give the illusion of analytical depth (Dröge, 2025). In reality, it reflects common-sense generalizations already embedded in its training data corpus, and often smooths over complexity to produce central tendencies (Nicmanis & Spurrier, 2025). This outcome should not surprise us. LLMs, in their current state of development, are machines of mimicry. They try to provide the most expected answer. They do not seek anomaly, contradiction, or the disruptive moment that sparks fresh thinking.
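A deliberately crude toy, not a real language model, can make the structural point visible: when generation always selects the most probable continuation, infrequent (and potentially more interesting) continuations never surface. The probability table below is invented:

```python
# Toy next-token table standing in for an LLM's learned distribution.
# Probabilities are invented; the point is structural: greedy,
# likelihood-maximizing decoding always emits the dominant continuation.
toy_distribution = {
    ("participants", "felt"): {"supported": 0.52, "anxious": 0.31, "unmoored": 0.04},
    ("felt", "supported"): {"by": 0.71, "despite": 0.12, "until": 0.05},
}

def greedy_next(context: tuple[str, str]) -> str:
    # Pick the single most probable continuation, mirroring how
    # likelihood-oriented generation reproduces the expected answer.
    candidates = toy_distribution[context]
    return max(candidates, key=candidates.get)

print(greedy_next(("participants", "felt")))  # -> "supported"; "unmoored" never wins
```

(Real LLMs sample from such distributions with various strategies rather than always taking the maximum, but the bias toward high-probability continuations is the same tendency described above.)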
In qualitative research, the purpose is not to mirror common sense but to trouble it. The richness of qualitative empirical data lies in what resists categorization: the awkward quote, the emotional turn, the silence that follows a sensitive question (Dröge, 2025). These moments invite abductive reasoning (Bignami et al., 2023). They shift our attention from what is typical to what is strange, from pattern recognition to meaning reconstruction. On the other hand, with current generative AI algorithms, irregular, ambiguous, or contextually unusual data points tend to be treated as statistical outliers, which the model either omits, paraphrases into more “normal” language, or frames in ways consistent with common patterns (McQuillan, 2022). Used passively, AI systems tend to flatten the edges. They privilege what is most often said rather than what matters most (Dröge, 2025; McQuillan, 2022).
But this limitation also presents an opportunity. Rather than asking AI what the data says, researchers might instead ask: what does the AI expect the data to say, and how do the actual accounts diverge from that expectation? This two-step prompting, drawn from grounded theory’s technique of contrasting comparisons, leverages the LLM’s predictive bias as a foil for discovery (Dröge, 2025). By juxtaposing expected insights with surprising deviations, the AI becomes a heuristic device for identifying novelty. It no longer acts as an authoritative analyst but as a conversational partner in the pursuit of the unexpected (Brailas, 2024; Dröge, 2025).
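As a minimal sketch of this two-step prompting, the following assumes a placeholder `ask_llm` callable standing in for whatever model interface the researcher actually uses; the prompt wordings are illustrative, not prescriptive:

```python
# Two-step "expectation vs. divergence" prompting: first elicit the
# model's prior expectations, then use them as a foil for spotting
# where the actual accounts depart from the expected.
def expectation_vs_divergence(ask_llm, research_topic: str, transcript: str) -> dict[str, str]:
    # Step 1: the model's expectations *before* it sees any data.
    expected = ask_llm(
        f"Without seeing any data, what themes would you expect in interviews about {research_topic}?"
    )
    # Step 2: where do the actual accounts diverge from those expectations?
    divergence = ask_llm(
        "Here are your expected themes:\n" + expected
        + "\n\nHere is an actual transcript:\n" + transcript
        + "\n\nWhere does this account diverge from, complicate, or contradict those expectations?"
    )
    # Both outputs are heuristic material for the researcher, not findings.
    return {"expected": expected, "divergence": divergence}
```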
However, to realize this potential, researchers must remain deeply engaged with their data. Delegating interpretation to the model risks outsourcing not just labor, but also epistemic responsibility (Von Foerster, 2003). AI can assist with organizing, comparing, and even surfacing contradictions, but it cannot substitute for the reflexive work of meaning-making. The value of qualitative research lies in its attention to context, ambiguity, and the subjective lived experience. These are not statistical artifacts. They require presence. Thus, the challenge is not simply to integrate AI into qualitative workflows, but to do so in a way that resists epistemological complacency. This requires crafting prompts that frame curiosity rather than closure, and remaining alert to the moment when the machine tells us what we already know. In those moments, the question becomes: what is being left unsaid, and why?
The tendency of AI systems to reproduce patterns and confirm expectations reflects a broader tension between rule-following and creativity in knowledge production. Dyson (2006) argued that scientific progress depends on those who resist the comfort of established rules and instead embrace the uncertainty of exploration. Similarly, qualitative researchers who rely too heavily on AI’s probabilistic outputs risk relinquishing the improvisational and critical capacities central to their craft. For example, because LLMs are trained to predict the most statistically probable next word, they are inherently predisposed to reproduce familiar expressions and conventional associations rather than generate unexpected connections. As Dröge (2025) vividly demonstrates through specific examples, this default toward the predictable can narrow the interpretive space, making it less likely for genuinely novel insights to emerge, precisely the kind of insight that creative inquiry in qualitative research often seeks. While structure and method remain indispensable, they are not sufficient (Dyson, 2006). Innovation arises when researchers interrogate prevailing categories, question dominant narratives, and attend to the anomalies in their data, tasks that require more than algorithmic compliance.
From a dialectical perspective, qualitative analysis is not merely about mapping existing patterns but about engaging with the generative contradictions that drive transformation (Dafermos, 2018, 2020). The inherent tendency of generative AI algorithms to produce the most anticipated and common answers risks overlooking the critical contradictions that are fundamental to knowledge development. Researchers, therefore, might design prompts or analyses that deliberately seek out internal tensions, anomalies, and unresolved contradictions, treating these not as unwanted noise but as sites of insight and potential synthesis.

3. Resisting Extractive Logics: Toward an Alternative Engagement with AI

Qualitative research encompasses two broad traditions, which Kidder and Fine (1987) term the small q and the Big Q. The small q refers to the use of qualitative methods within predominantly positivist or post-positivist frameworks, emphasizing structure, measurement, and objectivity, often in the search for patterns in large datasets using approaches similar to content analysis or more positivist forms of thematic analysis (Braun & Clarke, 2019). In contrast, the Big Q refers to qualitative research grounded in interpretivist, constructivist, or critical paradigms, which prioritize subjectivity, context, and reflexivity. Big Q research employs flexible, exploratory methods that foreground the researcher’s interpretive role and treat knowledge as situated and co-constructed (Braun & Clarke, 2019; Kidder & Fine, 1987).
Much of the current discourse around AI assumes an extractive paradigm: people treat generative AI algorithms as neutral instruments designed to efficiently process and summarize data. This orientation reflects a broader logic of modernity, one that prioritizes speed, efficiency, and predictability over relational richness and ambiguity (Capra & Luisi, 2014). As Andreotti (2024) observes, such logics “erode the potential for wonder, replacing it with the need for answers” (p. 6). When researchers approach AI primarily as a vending machine for solutions, they risk replicating the same transactional dynamics that qualitative inquiry, at least in the Big Q tradition, often seeks to critique.
Recent empirical work supports the need to resist extractive logics in AI use. Marshall and Naff (2024) found that while qualitative researchers are generally open to using AI for transcription, they remain hesitant to apply it for full coding, theme development, or writing. Many perceive such uses as undermining the interpretive and relational commitments of qualitative inquiry, raising concerns about over-simplification of the analytical process. For instance, using AI to generate codebooks for deeply interpretive work may create a mismatch between method and paradigm. This results in what Braun and Clarke (2021) describe as “confused q” analysis, mixing positivist and non-positivist assumptions without theoretical justification. Nicmanis and Spurrier (2025) stress that qualitative research must align its conceptual framework, methodology, and methods. AI tools should be integrated only after clarifying how they support or constrain the production of knowledge within a given research tradition.
These mismatches often stem from unexamined epistemological assumptions embedded in AI tools. As Lyons (2007) outlines, qualitative approaches in psychology span a continuum, from naïve realism to radical constructionism, each with different ontological and methodological expectations. For instance, grounded theory, especially in its earlier formulations, assumes that meaning can be “discovered” in the data, aligning with a more realist stance. In contrast, discourse analysis adopts a radical constructionist view, seeing language as constructing rather than reflecting reality. AI systems like LLMs, which generate statistically probable responses, often align with naïve realist logics: they treat patterns as discoverable and meaning as stable. Integrating such tools into contextual or radical constructionist traditions without adaptation risks epistemological incoherence. Researchers must therefore be explicit about the philosophical assumptions guiding their analytical practices and ensure that the use of AI aligns with the epistemic commitments of the chosen qualitative method.
Qualitative research, especially in the Big Q tradition, has long emphasized the importance of relationality: an openness to context, ambiguity, and co-construction of meaning. When interactions with AI become purely extractive, this relational field contracts. For example, a researcher might upload interview transcripts into an AI tool and request a set of concise themes without engaging with the data themselves or reflecting on the contextual and relational dynamics behind participants’ words. In this purely extractive approach, the AI becomes a mechanism for harvesting decontextualized outputs, with minimal human interpretive involvement. Andreotti (2024) warns that the pursuit of optimization, designing prompts to maximize engagement or producing “efficient” thematic summaries, reduces relationships to transactions, mirroring the way modernity treats both people and the Earth as resources to be mined. This pattern undermines the epistemic foundations of qualitative inquiry, which value subjectivity, situatedness, and reflexivity (Kidder & Fine, 1987).
To counter this trend, researchers can reimagine their engagement with AI through what Andreotti (2024) calls relational prompt engineering. This practice resists the urge to extract and instead treats AI interactions as invitations to co-create meaning. In this sense, AI is not a detached instrument but an intra-active participant in the co-construction of meaning, as Barad (2006) argues. The researcher–AI–data triad does not merely interact but mutually constitutes itself through situated entanglement. This perspective challenges the notion of AI as an external, objective tool and instead positions it as part of the emergent relational confluence (Gergen, 2009) that qualitative inquiry inhabits. Relational prompt engineering can be strengthened through prompt science (Shah, 2025), a human-in-the-loop process that approaches prompt creation much like qualitative codebook development. In this approach, researchers collaboratively define criteria and refine prompts through structured dialogue, ensuring rigor, reducing bias, and maintaining transparency. This might involve crafting prompts that invite ambiguity and exploring unexpected connections in the data (Andreotti, 2024; Dröge, 2025). For example, rather than asking the model simply to identify and list dominant themes, researchers might ask how the model’s interpretation diverges from their own or what contradictions and silences it notices in the dataset. This aligns with qualitative traditions of abductive reasoning and critical reflexivity within a social constructionist epistemology.
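A minimal sketch of such a human-in-the-loop refinement cycle follows; `run_prompt` and `team_reviews` are placeholders for the model interface and for the team's structured discussion, respectively, and the loop structure, not any particular implementation, is the point:

```python
# Prompt refinement treated like codebook development: trial a prompt,
# discuss its outputs as a team, fold the agreed revision back in, and
# repeat until the group approves. Revisions are dialogic, not automatic.
def refine_prompt(run_prompt, team_reviews, prompt: str, sample: str, max_rounds: int = 5) -> str:
    # run_prompt: callable (prompt, sample) -> model output
    # team_reviews: callable (output) -> (approved: bool, revision_note: str)
    for _ in range(max_rounds):
        output = run_prompt(prompt, sample)
        approved, revision_note = team_reviews(output)
        if approved:
            return prompt
        prompt = f"{prompt}\n(Revised per team note: {revision_note})"
    return prompt  # best draft after the allotted rounds
```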
In this direction, the growing integration of AI into qualitative research should be understood as part of a broader trajectory in which complex technosocial systems evolve through feedback and improvisation (Capra & Luisi, 2014). Qualitative researchers might approach AI as one more coevolving actor in a dynamic system where the questioning itself serves as a generative act. Each question a system asks of itself opens new layers of functional information, catalyzing further complexity and emergent meaning (Wong et al., 2023). In qualitative inquiry, researchers who engage AI not merely as a tool for answering pre-defined questions but as a partner in generating novel questions can catalyze the emergence of unanticipated insights (Brailas, 2024). Many studies have explored the potential of artificial intelligence chatbots as virtual interlocutors with which humans can engage in reflective dialogue (Adesso, 2023; Brailas, 2024; Raile, 2024; Sirisathitkul, 2023). In this way, LLMs can serve not only as “neutral” coding tools but as reflexive collaborators. In Big Q research, AI need not replace human interpretation. Instead, it can augment reflexive practices by offering pattern-based suggestions that prompt critical thinking. Hitch (2024) and Nicmanis and Spurrier (2025) describe using ChatGPT to engage in analytic dialogue, using AI not to determine categories, but to uncover ambiguities, contradictions, or overlooked meanings in data. This reframes AI as a dialogic relational partner that enriches the analytical process rather than an automated engine of truth.

4. Complexity, Embedded Histories, and the Opaqueness of AI Outputs

The discourse on AI risk often focuses on decisive, high-magnitude events, the so-called “takeover” scenarios, in which a misaligned superintelligence abruptly ends human life on the planet. Yet such a framing risks neglecting a more insidious threat: the gradual erosion of systemic resilience through the accumulation of smaller, interrelated disruptions. Kasirzadeh (2025) contrasts these pathways, highlighting how existential risk can emerge not only through decisive failure but also through the compounded effects of social, political, and economic harms that undermine critical infrastructures over time. Qualitative researchers working with AI are situated within, and contribute to, these broader dynamics. Each design choice, each methodological shortcut, and each overreliance on opaque algorithmic systems can weaken the normative and epistemic foundations of scholarship, reinforcing positivist extractive logics. These seemingly minor compromises can interact, creating positive feedback loops that degrade trust, transparency, and equity in knowledge production. As Kasirzadeh (2025) argues, such accumulative risks resemble climate change: incremental and diffuse, yet no less catastrophic when thresholds are crossed.
Assembly theory emphasizes that complex entities are more than mere aggregations of parts; they carry within them the memory of the paths that brought them into being (Sharma et al., 2023). Each object’s complexity reflects not only its present configuration but also the improbable sequence of selections and contingencies that made it possible. In qualitative inquiry, this insight resonates strongly. A narrative, a theme, or a pattern in qualitative data emerges not spontaneously but through the entangled histories of participants’ experiences, social contexts, and the researcher’s interpretive choices. To treat a theme as if it were an isolated fact, as something that simply is, rather than something that became, risks severing it from the very process that gives it meaning.
The integration of LLMs into qualitative research sharpens this concern. While LLMs can rapidly generate plausible themes and summaries, their outputs are opaque: the pathways by which an LLM arrives at a given thematic suggestion remain hidden (McQuillan, 2022). Despite the serious ongoing efforts to improve explainability and reduce algorithmic opacity (Bilal et al., 2025; Lindsey et al., 2025), such methods remain limited in their ability to account for the full complexity of generative models in applied qualitative contexts. This creates a paradox. If one accepts an LLM’s output at face value, the embedded history of its becoming is unknowable. The researcher cannot discern whether the suggested theme arises from the participants’ narratives, from statistical artifacts of the training corpus, or from patterns introduced during prompt engineering (McQuillan, 2022). In effect, the LLM’s contribution lacks the reflexive accountability that qualitative research demands. As a result, researchers who adopt LLMs without attending to this opacity risk importing into their analyses a history they cannot interrogate, one that may be misaligned with the data’s own embedded histories of meaning. Instead, qualitative researchers should remain attentive to the becoming of their findings. The themes proposed by an LLM cannot be treated as autonomous truths but must be situated within the relational, historical pathways of the data, the participants, and the researcher’s interpretive process. Recognizing and preserving this embedded history is central to the integrity of qualitative inquiry, and it requires that AI be used not as an unquestioned authority but as a provisional interlocutor within a transparent and reflexive analytic practice.
Table 1 offers a concise overview of the major qualitative data analysis approaches, summarizing their key epistemological stances along with perceived advantages and disadvantages. It compares human-only analysis, AI-assisted analysis within the small q tradition, and AI-assisted analysis within the Big Q tradition. The perceived advantages and disadvantages are contingent upon the epistemological stance of each researcher.

5. The Human Element Against Algorithmic Perfectionism in Data Analysis and Production

In qualitative research, the historical and cultural context, the situated richness of the data, and the subjective interpretations a researcher brings to the process are essential, as they align with the foundational epistemologies of the field (Creswell & Poth, 2025). Often grounded in social constructionism, this paradigm views reality as co-constructed through human interaction and language, requiring sensitivity to the cultural–historical and contextual nuances in which social phenomena occur (Gergen, 2009; Shotter, 2012). Second-order cybernetics further highlights the dynamic, reciprocal relationships between researcher and participants, where psychological phenomena emerge through specific intra-actions with the research apparatus and knowledge is co-created rather than simply discovered (Brailas & Papachristopoulos, 2023; Heron & Reason, 2008; Sandle et al., 2024).
Marshall and Naff (2024) report that researchers view the removal of the human element as the most significant risk of AI integration in qualitative research. Second-order cybernetics and transformative action theories further stress that researchers themselves are active participants in the system they are studying, influencing and being influenced by the context (Von Foerster, 1984, 2003). It is about how to “act as participants in the lives of others in order to understand them” (Brinkmann, 2024, p. 15). This embeddedness allows researchers to capture not just data, but the lived experiences, power dynamics, and historical conditions that shape the subjects’ perspectives and actions. Without this interpretive lens, qualitative research risks becoming a detached, decontextualized exercise, unable to fully engage with the complexities of human experience and the socio-cultural environments that give those experiences meaning. Thus, the human researcher’s subjective viewpoint is not a limitation but a crucial asset that ensures qualitative research remains faithful to its theoretical foundations of understanding reality as complex, situated, and contextually contingent.
A parallel methodological approach that prioritizes the human and collective element is Nomadic Thematic Analysis (Brailas & Papachristopoulos, 2023), which reconceptualizes data analysis as a community-based, dialogic, and emergent process. Rather than collecting data to later be analyzed in isolation, whether by humans or algorithms, this approach embeds analysis within the very fabric of collective interaction. Participants move through a staged process: from individual reflection, to dyadic sharing, to small group synthesis, and finally to full-group co-construction of themes. This gradual scaffolding does not simply aggregate perspectives; it transforms them through structured relational dynamics. What emerges is not just a set of themes, but an evolving research collective, a rhizomatic research assemblage, whose analytical capacity is inherently social, embodied, and situated. By integrating these insights into AI-assisted qualitative research, we may begin to treat both AI and human contributions as entangled parts of an emergent system of inquiry (Brailas, 2024), rather than discrete sources of meaning. Here, AI might support, but not substitute, these community processes, perhaps by identifying divergence between expected and emergent group themes, or suggesting unexpected connections. Crucially, it is not the AI’s efficiency but the community’s reflexivity that anchors analytical depth.
The human researcher’s fundamental and critical role in qualitative research begins well before the data analysis phase. Data collection, or rather data production within constructivist epistemologies (Mason, 2017), is a deeply relational process in qualitative research, where meaning emerges through the dynamic interplay between researcher and participant. Therefore, no level of analytical sophistication can compensate for superficial data collection. Although this may not seem directly related to the data analysis phase, which in a linear conception of the research process typically follows the data collection phase, it is arguably the stage that matters most. The rapidly expanding literature on AI-assisted qualitative data analysis may unintentionally overshadow the importance of producing high-quality qualitative data. Generative AI algorithms appear promising, with an implicit mandate: “Feed me textual data, regardless of volume, and I will make, or help you make, sense of it; I will recognize patterns in the corpus.” This mentality risks obscuring the value of qualitative inquiry as a transformative and generative tradition in which participants and researchers co-create meaning, construct realities, and (re)imagine alternative futures. The quality of any subsequent thematic data analysis, whether only human-led or AI-assisted, ultimately depends on the depth and richness of the data corpus.
A final way to reinforce the human element is to make data analysis a collective, rather than an isolated, process. To deepen both the credibility and richness of qualitative analysis, researchers should consider forming or facilitating research collectives during the analytic process, rather than analyzing data in isolation. This practice enhances credibility through triangulation, peer debriefing, and reflexivity, all core components of trustworthiness (Ahmed, 2024). Importantly, it also helps mitigate the epistemic flattening often caused by generative AI’s tendency to replicate dominant narratives. Rather than relying on AI to produce themes independently, researchers can engage it as a dialogic prompt or heuristic device, surfacing contradictions or proposing alternative framings that the collective can then integrate, extend, or contest (Brailas & Papachristopoulos, 2023). In this way, AI becomes a mediating research tool rather than an outsourcing device, while the community itself becomes the method: a dynamic assemblage that ensures analytical rigor through shared interpretation and reflexive dialogue.

6. Ethical Considerations, Informed Consent, and Qualitative Research in Psychology

The integration of generative AI into qualitative data analysis introduces critical ethical imperatives, particularly concerning data privacy, ownership, algorithmic transparency, and the fundamental principles of informed consent. Researchers must recognize that submitting sensitive interview transcripts to commercial AI models, even with assurances of non-training use, raises fundamental questions about data control and potential re-identification risks (Nguyen-Trung, 2025). For example, a recent study (Rocher et al., 2019) estimated that as few as 15 demographic attributes are sufficient to correctly re-identify over 99% of individuals in a given dataset, highlighting that even seemingly anonymized qualitative data, which inherently contains many contextual details (Lamb et al., 2024), can pose a significant risk when exposed to sophisticated algorithmic analysis.
The inherent opacity of many AI algorithms further complicates accountability, making it challenging to fully ascertain how participant data is processed or if unintended biases are introduced. This demands a proactive and explicit approach to ethical governance, moving beyond mere regulatory compliance to cultivate genuine respect for participant data and the integrity of the research process.
Therefore, securing comprehensive informed consent from participants is paramount, explicitly detailing the use of AI in data analysis. This includes clearly outlining the specific AI tools that will process their data, the stages of analysis where AI will be employed, and the measures taken to protect privacy, such as anonymizing data prior to submission (Nguyen-Trung, 2025). This level of detail is crucial because, unlike quantitative data, qualitative data, particularly interview transcripts, often contains uniquely personal accounts that are difficult to fully anonymize (Lamb et al., 2024). Context, phrasing, language use, or specific personal situations can inadvertently identify participants, even when overt identifiers are removed. Therefore, researchers must anticipate and address these identification risks in their consent processes and ethics applications.
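As a purely illustrative first pass, overt identifiers can be scrubbed before any transcript leaves the researcher's control; as stressed above, no script can remove contextual identifiers (phrasing, situations, language use), so such scrubbing is a floor, not a guarantee, and cannot replace case-by-case ethical judgment:

```python
import re

# Minimal first-pass scrub of overt identifiers (emails, phone numbers)
# before a transcript is submitted anywhere. Patterns are illustrative
# and deliberately modest: contextual details can still identify
# participants, so this supplements, never replaces, human review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub_overt_identifiers(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub_overt_identifiers("Call me on +30 210 123 4567 or ana@example.org."))
# -> "Call me on [PHONE REDACTED] or [EMAIL REDACTED]."
```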
Furthermore, the evolving nature of legal and political contexts presents additional ethical challenges for qualitative data sharing and AI integration. Transcripts about contentious issues, while legal at the time of interview, could become problematic if the legal or social landscape shifts (Lamb et al., 2024). The long-term implications of AI processing on the potential for re-identification or the unintended exposure of sensitive information must be thoroughly assessed and communicated to participants, ensuring their ongoing trust and autonomy.
Beyond privacy, there are epistemic and political implications tied to representation. As Lyons (2007) discusses, qualitative researchers in psychology often struggle with the question of how to authentically and ethically represent participants, particularly those from marginalized or vulnerable groups. When generative AI is introduced into the analysis process, especially when trained on dominant discourses, it risks perpetuating stereotypical narratives or muting the voices of those constructed as the “Other”. This introduces a new layer of “epistemic misrepresentation” where participant accounts may be subtly reinterpreted through algorithmic logics not grounded in the original context. Researchers must ask: whose voice is amplified, whose is diminished, and what assumptions does the AI bring to bear?
Using established commercial or open-source Qualitative Data Analysis (QDA) software with generative AI capabilities does not, in itself, resolve ethical dilemmas. Researchers must recognize that the use of reputable software platforms does not absolve them of their individual ethical responsibilities. While such software might provide an appearance of legitimacy and ethical soundness, it does not fundamentally alter the underlying issues of submitting sensitive data to opaque algorithms, the potential for re-identification, or algorithmic bias. Researchers might find it easier to state in consent forms that “specialized research software with AI capabilities” will analyze data, potentially downplaying the specific implications of generative AI use. This approach, however, risks obscuring the true nature of data processing from participants and the research community. The qualitative research community faces an ongoing challenge to develop ethical guidelines that genuinely address these complexities, rather than merely relying on the supposed authority of software platforms.
Qualitative researchers also need to develop specific AI research literacy. If reality emerges from our relationships, then treating data as a mere commodity, processed by an opaque system, severs the very threads of that relationality. It becomes a form of violence against the integrity of the system. To say “the machine analyzed it”, without understanding how the machine analyzed it, is to abdicate responsibility (Von Foerster, 2003). And to ask for consent without truly informing, without the researcher themselves understanding the complexities of the machine’s inner workings, is a deception. Ultimately, the ethical use of AI in qualitative research hinges on a commitment to participant well-being, honoring the researcher–participant relationship, and the researcher’s reflexive, responsible stewardship of the interpretive process.
The issue of maintaining privacy and avoiding implicit profiling or re-identification is especially critical in psychological qualitative research on sensitive topics and with vulnerable populations. For example, if participants in a study are refugee women with histories of domestic violence, currently residing in a shelter, it raises serious ethical concerns to upload their interview transcripts, potentially containing deeply personal narratives and trauma disclosures, into ChatGPT or even into specialized QDA software with AI capabilities. Even when informed consent has been formally obtained, it is worth asking whether both participants and researchers truly understood the actual capabilities of generative AI algorithms and the potential risks associated with their use. In such cases, a more ethically sound alternative might be to process only selected parts of the transcripts, those that do not contain potential indirect identifying information, through AI tools. Conversely, in less sensitive studies, such as those exploring young adults’ experiences of romantic or sexual relationships, indirect profiling may pose minimal risk, depending always on the specific research design and data collected. Thus, with highly sensitive qualitative psychological data, the use of AI in analysis should be either avoided altogether or approached with extreme caution. This remains a moving landscape: AI ethics, governance protocols, terms of use, and the development of custom or privacy-enhanced QDA adaptations of generative AI are all in flux. The critical takeaway is the need for continuously evolving technological literacy among qualitative researchers, particularly those working in psychological contexts, regarding AI’s special affordances and limitations.

7. Compiling a List of Best Practices for Qualitative Inquiry in the Era of Generative AI

The integration of AI into qualitative research raises important methodological, epistemological, and ethical questions. While AI offers efficiency and scalability, its use risks flattening the interpretive and relational dimensions of qualitative inquiry if treated as a neutral, extractive tool. Researchers must therefore engage AI critically, recognizing it as an actor embedded in complex techno-social systems and power hierarchies, rather than as a compliant instrument. Based on the preceding discussion, the following good practices are intended to support qualitative researchers, especially within the Big Q tradition, in incorporating AI into reflective thematic analysis in ways that preserve the situated, reflexive, and relational commitments of qualitative research.

7.1. Adjust the Process of Receiving Informed Consent Accordingly

Develop and implement an enhanced informed consent process that explicitly details the use of AI in data analysis, including the specific models, data handling protocols, and potential risks to privacy and confidentiality. Researchers must cultivate a deep literacy regarding AI’s capabilities, limitations, and ethical implications to effectively communicate these aspects to participants and ensure genuinely informed consent. This proactive ethical stance is crucial for maintaining trust and integrity in AI-assisted qualitative research.

7.2. Do Not Forget to Prioritize Data Production/Collection

Recognize that qualitative data collection is a deeply relational process where meaning emerges through the dynamic interplay between the researcher and participant. No level of analytical sophistication can compensate for superficial data collection, and the quality of any subsequent analysis ultimately depends on the depth and richness of the data corpus.

7.3. Use AI to Enhance and Enrich the Analytical Process, Not to Outsource It

Treat AI as a heuristic device to surface patterns and contradictions, rather than a replacement for interpretive labor. Use AI-generated outputs as provisional points of comparison with your manual analysis. Critically interrogate where the AI’s suggestions align with, diverge from, or contradict your own findings. Remember that qualitative analysis is an embodied and situated practice that AI cannot replicate, and resist accepting its outputs as neutral or authoritative.

7.4. Craft Relational, Reflexive, and Abductive Prompts

Design prompts that help AI surface ambiguity, novelty, and silences in the data, rather than simply confirming what is statistically probable. For example (a minimal sketch applying these prompts follows the list):
  • “Identify contradictions, tensions, or silences in the following transcripts.”
  • “What themes would you expect in this data, and where does the data depart from those expectations?”
  • “Suggest alternative interpretations that might explain the data in less obvious ways.”
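As a minimal sketch, such prompts can be run side by side against the same material so their outputs serve as provisional comparison points rather than verdicts; `ask_llm` is a placeholder for whichever model interface is in use:

```python
# Run each probe against the same transcript and keep the outputs
# side by side for the researcher (or research collective) to interrogate.
PROBES = [
    "Identify contradictions, tensions, or silences in the following transcripts.",
    "What themes would you expect in this data, and where does the data depart from those expectations?",
    "Suggest alternative interpretations that might explain the data in less obvious ways.",
]

def probe_transcript(ask_llm, transcript: str) -> dict[str, str]:
    # Outputs are heuristic material, not findings.
    return {probe: ask_llm(f"{probe}\n\n{transcript}") for probe in PROBES}
```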

7.5. Maintain Transparency and Reflexivity

Maintain transparency by documenting how AI was used, including prompt design, generated outputs, and the interpretive decisions that followed. For best practice, consider keeping a detailed prompt log to include in appendices or supplementary materials. This supports reflexivity and enables others to evaluate the epistemic and ethical integrity of the analysis.
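One deliberately simple format for such a log is a JSON Lines file in which each entry pairs a prompt and its output with the interpretive decision that followed; the field names below are illustrative assumptions, not a standard:

```python
import datetime
import json

# Append one auditable entry per AI interaction: prompt, output, and the
# researcher's interpretive decision, timestamped for the analytic trail.
def log_ai_interaction(path: str, prompt: str, output: str, decision: str) -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "model_output": output,
        # e.g., "rejected: flattens P03's ambivalence into a single theme"
        "researcher_decision": decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```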

7.6. Resist Productivity Pressures

Do not allow institutional demands for speed or volume to justify superficial engagement with qualitative inquiry. Avoid outsourcing interpretive labor to AI solely for the sake of efficiency, as this risks undermining the foundational principles of qualitative inquiry. In an era defined by technological acceleration, where faster is better and more is better, the culture of qualitative research should offer a counterpoint. It should advance an epistemological and ethical stance attuned to the realities of living systems.

7.7. Make Data Analysis a Group Process

Consider forming research collectives to analyze data rather than working in isolation. This practice enhances credibility through triangulation and peer debriefing, while also mitigating the epistemic flattening that can occur when relying on AI. Engaging AI as a dialogic tool within a community of researchers ensures analytical rigor through shared interpretation and reflexive dialogue.
By adopting the above practices, qualitative researchers can approach AI not as a replacement for human analysis but as a dialogical partner, a reflective surface against which to test, challenge, and deepen interpretation.

To provide a richer context for these practices, it is critical to situate them within the interpretive traditions of qualitative psychology and consider how AI’s role intersects with approaches such as Interpretative Phenomenological Analysis, Grounded Theory, and Discourse Analysis. As Lyons (2007) notes, different qualitative traditions ascribe distinct roles to the researcher in the production of knowledge. In Interpretative Phenomenological Analysis, for instance, the researcher performs a double hermeneutic: making sense of the participant’s sense-making (Smith & Osborn, 2007). AI’s intervention in this space must be carefully managed, lest it flatten the richness of interpretive engagement into superficial theme extraction. In contrast, Grounded Theory, particularly in its classic, more realist form, positions the researcher as a systematic discoverer of theory grounded solely in data. Even here, as Charmaz (2006) reminds us, the role of researcher subjectivity has been re-theorized as inherently constructive. Discourse Analysis adds another layer of complexity: it positions the researcher not simply as an interpreter of participants’ accounts but as an active co-author of the very realities those accounts construct. In this view, meaning is not merely uncovered or organized but produced through the interplay of participant narratives, social discourses, and the researcher’s own positioning (Georgaca & Avdi, 2011; Tseliou, 2020).

In all these approaches, introducing AI alters the epistemic landscape. Researchers must reflexively consider whether they are merely delegating cognitive labor to the machine or indirectly transforming their epistemological stance.
In line with recent critiques of qualitative education (Gough & Lyons, 2016), it is vital that AI is not introduced into qualitative research training as just another analytical ‘tool’. To avoid what Brinkmann (2015) called the “McDonaldization” of qualitative methods, AI must be situated within a broader educational framework that emphasizes epistemology, reflexivity, and methodological integrity. Rather than being trained to use AI software procedurally, students and early career researchers should be supported in developing critical awareness of how AI operates, what assumptions it encodes, and how it aligns, or conflicts, with their chosen research paradigm. Just as qualitative researchers are trained to “think methodologically”, AI must be approached not as an add-on, but as a component that reshapes the research ecology itself.

8. Limitations

A central limitation of this work is that it seeks to map and critically examine a research terrain that is still in the process of emerging. Generative AI, and LLMs in particular, remain in their developmental infancy, and the human–AI research ecology is evolving rapidly in ways that are difficult to predict. While this article draws on the available literature and early empirical explorations, it is necessarily speculative: emerging technologies, public policies, the strategic decisions of the large technology companies that drive AI development, the responses of research communities, and a multitude of other yet-unknown parameters will all co-shape the forms and functions AI will assume in qualitative inquiry. Any attempt to chart this territory risks being provisional, describing a landscape that may soon look very different. Furthermore, AI in qualitative research is not solely a question of data analysis. It has the potential to transform the entire research ecology, including methodological designs, epistemological commitments, ethical frameworks, and everyday research practices. This article focuses primarily on thematic data analysis, but the broader implications for qualitative research methods, teaching, and epistemic cultures warrant deeper exploration.
There is also an inherent tension in writing about AI’s role in qualitative research at this moment: such work functions as much as a call for action and a manifesto for a reflexive, value-driven research culture as it does a traditional literature review. The arguments presented here are grounded in current examples but are also informed by anticipatory thinking, an effort to shape the direction of AI adoption in ways that align with qualitative research’s relational and contextual ethos. As such, the analysis inevitably reflects the author’s positionality and epistemological stance. Other scholars, working within different research paradigms, may reach different conclusions or emphasize other aspects of AI integration.
Finally, while qualitative research can and should document and interrogate these ongoing transformations, it also participates in shaping them. This recursive involvement means that any account of AI in qualitative inquiry is both descriptive and performative, contributing to the very shifts it seeks to understand. The reflections offered here should therefore be read not as definitive conclusions, but as situated interventions in a moving, unsettled landscape, an invitation for ongoing, collective reflexivity as qualitative researchers navigate the evolving possibilities and inherent tensions of AI integration.

9. Concluding Thoughts

This paper brings together a relational–constructionist epistemology, a critique of extractive AI logics, and a concrete best-practices framework for integrating AI into qualitative psychological research without eroding its interpretive and ethical foundations. It seeks to contribute to the existing literature by offering an integrative framework that combines epistemological critique, ethical guidance, and concrete methodological strategies for AI-assisted qualitative data analysis. While much of the current work tends to focus either on the technical affordances of AI tools or on broad epistemological reflections, this paper aims to bridge these domains by grounding theoretical arguments in operationalizable best practices. In doing so, it moves beyond the efficiency-centered narratives prevalent in recent publications and frames AI not as a neutral mechanism for automating data analysis, but as a relationally entangled actor whose use should be critically and reflexively negotiated within qualitative traditions. Building on this foundation, and drawing from Gergen’s (2009, 2017) relational epistemology, qualitative research in psychology can be understood not merely as a method of data collection and analysis, but as a transformative relational practice that constitutes the very selves it studies. From this perspective, the meaning of psychological phenomena emerges not from bounded, pre-existing individuals but from the ongoing interplay of relationships in which both researchers and participants are embedded. This challenges the traditional individualist ontology that underpins much of psychological science and cautions against treating participants’ accounts as isolated “data points” to be extracted and analyzed (Gergen, 2009). Instead, the research encounter itself becomes a site of mutual constitution and potential transformation, where participants’ voices are co-constructed through dialogue and reflexive engagement. AI systems, when used uncritically, risk reinforcing the reductive, individualist, and extractive logic Gergen (2009) critiques by flattening participants’ narratives into decontextualized themes. Thus, the integration of AI into qualitative psychological research must preserve the relational ethos of inquiry, treating both the data and the analytical process as situated within a web of meaning-making relationships that shape, and are shaped by, the research process. This relational commitment aligns with the ethical and epistemological foundations of Big Q qualitative psychology, facilitating not just understanding but transformation through relational responsibility.
However, in the modern academic reality that is often dominated by the publish or perish dogma (Chambers, 2017), and in an era of fast-paced life where more is better and faster is better (Capra & Luisi, 2014), I am uncertain about the ultimate impact of generative AI algorithms on qualitative research. AI assistance, if unexamined, could become replacement dressed up in efficiency. Many qualitative scholars may succumb to the temptation of increasing their productivity by publishing more studies, closer to the small q tradition, despite the potential pitfalls. The future is not given (Prigogine, 2003). I see a risk that qualitative analysis methods may shift toward a more quantitative positivist approach by relying more on AI algorithms and in this way losing all the relational, situated, and cultural richness inherent in the qualitative research tradition, at least the Big Q. Moreover, artificial intelligence algorithms are often opaque in their operations (McQuillan, 2022), leading to a kind of algorithmic determinism and invisible epistemic violence: something is taken to be so simply because the algorithm says so, and is therefore accepted without question.
The art of qualitative research lies in the researcher’s engagement, and their judgment, creativity, and willingness to grapple with complexity. The machine, with its immense capacity for computation, can present the raw material, the initial patterns. But the act of meaning-making, the weaving of these patterns into a coherent, situated narrative, remains a uniquely human endeavor. The machine becomes a tool for extending our perception, a lens through which we might see patterns we otherwise miss, but never a substitute for the mind that interprets, questions, and ultimately, knows. Within a social constructionist epistemology, knowledge development is a relational conversation, not a monologue. And in that conversation, the human must always be an active and informed participant in the ongoing dialogue, guiding it toward insightful and novel understandings. In this direction, future inquiry might trace how Big Q traditions adapt when entangled with AI in psychologically sensitive settings, experiment with relational prompt strategies that keep participant voices present, and design ethical and pedagogical frameworks that nurture both technological literacy and reflexive integrity within qualitative research training.
Rather than outsourcing or substituting for human action and agency, artificial intelligence models can be used to enhance and enrich the research process as a whole. This, however, depends on the choices we make. Whether we continue to cultivate qualitative inquiry as a deeply relational and transformative practice, augmented by AI-generated suggestions in the spirit of the Big Q tradition, or shift toward a more positivist approach, characterized by automated, ostensibly objective yet more opaque data analysis in the small q tradition, will be determined by the research culture we foster in both the teaching and practice of qualitative methods. At its core, qualitative inquiry is relational and transformative at every stage: from the co-creation of data with participants, grounded in trust and mutual meaning-making, to the reflexive interpretation of that data during analysis. As Bateson (1972) reminds us, learning and change often emerge through paradoxical tensions within systems, where contradictory messages or double binds catalyze new forms of understanding. In much the same way, AI-assisted qualitative inquiry should not be seen only as a means of resolving complexity, but rather as a generative space where contradictions between algorithmic predictions and human interpretation create the very conditions for deeper reflexivity and transformative insight. This perspective invites qualitative researchers to embrace what Bateson called the pattern that connects, recognizing that meaning is not found by eliminating contradictions but by inhabiting and exploring the relational paradoxes that animate human and technological entanglements throughout the research process. The future is not given:
“In the center of Fedora, that gray stone metropolis, stands a metal building with a crystal globe in every room. Looking into each globe, you see a blue city, the model of a different Fedora. These are the forms the city could have taken if, for one reason or another, it had not become what we see today.” (Calvino, 1974)

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
QDA	Qualitative Data Analysis
AI	Artificial Intelligence
LLMs	Large Language Models

References

  1. Adesso, G. (2023). Towards the ultimate brain: Exploring scientific discovery with ChatGPT AI. AI Magazine, 44(3), 328–342. [Google Scholar] [CrossRef]
  2. Ahmed, S. K. (2024). The pillars of trustworthiness in qualitative research. Journal of Medicine, Surgery, and Public Health, 2, 100051. [Google Scholar] [CrossRef]
  3. Andreotti, V. D. O. (2024). Burnout from humans: A little book about AI that is not really about AI. Gesturing Towards Decolonial Futures. Available online: https://burnoutfromhumans.net (accessed on 1 May 2025).
  4. Aronson, J. (1995). A pragmatic view of thematic analysis. The Qualitative Report, 2(1), 1–3. [Google Scholar] [CrossRef]
  5. Barad, K. (2006). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press. [Google Scholar] [CrossRef]
  6. Bateson, G. (1972). Steps to an ecology of mind. University of Chicago Press. [Google Scholar]
  7. Bignami, E. G., Vittori, A., Lanza, R., Compagnone, C., Cascella, M., & Bellini, V. (2023). The clinical researcher journey in the artificial intelligence era: The PAC-MAN’s challenge. Healthcare, 11(7), 975. [Google Scholar] [CrossRef]
  8. Bilal, A., Ebert, D., & Lin, B. (2025). LLMs for explainable AI: A comprehensive survey (version 1). arXiv. [Google Scholar] [CrossRef]
  9. Brailas, A. (2024). Postdigital duoethnography: An inquiry into human-artificial intelligence synergies. Postdigital Science and Education, 6(2), 486–515. [Google Scholar] [CrossRef]
  10. Brailas, A. (2025). Replication crisis in psychology, second-order cybernetics, and transactional causality: From experimental psychology to applied psychological practice. Integrative Psychological and Behavioral Science, 59(1), 14. [Google Scholar] [CrossRef]
  11. Brailas, A., & Papachristopoulos, K. (2023). Systems thinking, rhizomes, and community-based qualitative research: An introduction to nomadic thematic analysis. In E. Tseliou, C. Demuth, E. Georgaca, & B. Gough (Eds.), The routledge international handbook of innovative qualitative psychological research (1st ed., pp. 304–319). Routledge. [Google Scholar] [CrossRef]
  12. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. [Google Scholar] [CrossRef]
  13. Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597. [Google Scholar] [CrossRef]
  14. Braun, V., & Clarke, V. (2021). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18(3), 328–352. [Google Scholar] [CrossRef]
  15. Braun, V., Clarke, V., Hayfield, N., & Terry, G. (2019). Thematic analysis. In P. Liamputtong (Ed.), Handbook of research methods in health social sciences (pp. 843–860). Springer Singapore. [Google Scholar] [CrossRef]
  16. Brinkmann, S. (2015). Perils and potentials in qualitative psychology. Integrative Psychological and Behavioral Science, 49(2), 162–173. [Google Scholar] [CrossRef]
  17. Brinkmann, S. (2024). Persons in a posthuman world. Qualitative Research in Psychology, 22(3), 596–612. [Google Scholar] [CrossRef]
  18. Bryda, G., & Sadowski, D. (2024). From words to themes: AI-powered qualitative data coding and analysis. In J. Ribeiro, C. Brandão, M. Ntsobi, J. Kasperiuniene, & A. P. Costa (Eds.), Computer supported qualitative research (Vol. 1061, pp. 309–345). Springer Nature Switzerland. [Google Scholar] [CrossRef]
  19. Calvino, I. (1974). Invisible cities. Harcourt Brace & Company. [Google Scholar]
  20. Camargo-Borges, C., & McNamee, S. (2022). Design thinking & social construction: A practical guide to innovation in research. BIS. [Google Scholar]
  21. Capra, F., & Luisi, P. L. (2014). The systems view of life: A unifying vision. Cambridge University Press. [Google Scholar]
  22. Chambers, C. (2017). The seven deadly sins of psychology: A manifesto for reforming the culture of scientific practice. Princeton University Press. [Google Scholar]
  23. Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage Publications. [Google Scholar]
  24. Chenail, R. J. (2012). Conducting qualitative data analysis: Reading line-by-line, but analyzing by meaningful qualitative units. The Qualitative Report, 17(1), 266–269. [Google Scholar] [CrossRef]
  25. Christou, P. (2023). The use of artificial intelligence (AI) in qualitative research for theory development. The Qualitative Report, 28(9), 2739–2755. [Google Scholar] [CrossRef]
  26. Christou, P. (2024). Thematic analysis through artificial intelligence (AI). The Qualitative Report, 29(2), 560–576. [Google Scholar] [CrossRef]
  27. Chubb, L. A. (2023). Me and the machines: Possibilities and pitfalls of using artificial intelligence for qualitative data analysis. International Journal of Qualitative Methods, 22, 16094069231193593. [Google Scholar] [CrossRef]
  28. Creswell, J. W., & Poth, C. N. (2025). Qualitative inquiry & research design: Choosing among five approaches (5th ed.). Sage. [Google Scholar]
  29. Dafermos, M. (2018). Rethinking cultural-historical theory: A dialectical perspective to Vygotsky (Vol. 4). Springer Singapore. [Google Scholar] [CrossRef]
  30. Dafermos, M. (2020). Reconstructing the fundamental ideas of Vygotsky’s theory in the contemporary social and scientific context. In A. Tanzi Neto, F. C. Liberali, & M. Dafermos (Eds.), Revisiting Vygotsky for social change: Bringing together theory and practice. Peter Lang Publishing, Inc. [Google Scholar]
  31. De Paoli, S. (2024). Performing an inductive thematic analysis of semi-structured interviews with a large language model: An exploration and provocation on the limits of the approach. Social Science Computer Review, 42(4), 997–1019. [Google Scholar] [CrossRef]
  32. Dröge, K. (2025, May 27). Why AI has a “proving the obvious” problem, and what we can do about it. CAQDAS networking project blog. Available online: https://blogs.surrey.ac.uk/caqdas/2025/05/27/why-ai-has-a-proving-the-obvious-problem-and-what-we-can-do-about-it (accessed on 5 June 2025).
  33. Dyson, F. J. (2006). The scientist as rebel. New York Review Books. [Google Scholar]
  34. Georgaca, E., & Avdi, E. (2011). Discourse analysis. In D. Harper, & A. R. Thompson (Eds.), Qualitative research methods in mental health and psychotherapy (1st ed., pp. 147–161). Wiley. [Google Scholar] [CrossRef]
  35. Gergen, K. J. (2009). Relational being: Beyond self and community. Oxford University Press. [Google Scholar]
  36. Gergen, K. J. (2017). Human essence: Toward a relational reconstruction. In M. Van Zomeren, & J. F. Dovidio (Eds.), The oxford handbook of the human essence (pp. 247–260). Oxford University Press. [Google Scholar] [CrossRef]
  37. Gough, B., & Lyons, A. (2016). The future of qualitative research in psychology: Accentuating the positive. Integrative Psychological and Behavioral Science, 50(2), 234–243. [Google Scholar] [CrossRef]
  38. Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108. [Google Scholar] [CrossRef]
  39. Heron, J., & Reason, P. (2008). Extending epistemology within a co-operative inquiry. In P. Reason, & H. Bradbury (Eds.), Handbook of action research: Participative inquiry and practice (2nd ed., pp. 366–380). SAGE Publications. [Google Scholar]
  40. Hitch, D. (2024). Artificial intelligence augmented qualitative analysis: The way of the future? Qualitative Health Research, 34(7), 595–606. [Google Scholar] [CrossRef]
  41. Kasirzadeh, A. (2025). Two types of AI existential risk: Decisive and accumulative. Philosophical Studies, 182, 1975–2003. [Google Scholar] [CrossRef]
  42. Kasperiuniene, J., & Mazeikiene, N. (2024). AI-enhanced qualitative research: Insights from Adele Clarke’s situational analysis of TED talks. The Qualitative Report, 29(9), 2502–2526. [Google Scholar] [CrossRef]
  43. Kidder, L. H., & Fine, M. (1987). Qualitative and quantitative methods: When stories converge. New Directions for Program Evaluation, 1987(35), 57–75. [Google Scholar] [CrossRef]
  44. Lamb, D., Russell, A., Morant, N., & Stevenson, F. (2024). The challenges of open data sharing for qualitative researchers. Journal of Health Psychology, 29(7), 659–664. [Google Scholar] [CrossRef] [PubMed]
  45. Lindsey, J., Gurnee, W., Ameisen, E., Chen, B., Pearce, A., Turner, N. L., Citro, C., Abrahams, D., Carter, S., Hosmer, B., Marcus, J., Sklar, M., Templeton, A., Bricken, T., McDougall, C., Cunningham, H., Henighan, T., Jermyn, A., Jones, A., … Batson, J. (2025). On the biology of a large language model. Transformer Circuits. Available online: https://transformer-circuits.pub/2025/attribution-graphs/biology.html (accessed on 1 May 2025).
  46. Lochmiller, C. (2021). Conducting thematic analysis with qualitative data. The Qualitative Report, 26(6), 2029–2044. [Google Scholar] [CrossRef]
  47. Lyons, E. (2007). Analysing qualitative data: Comparative reflections. In E. Lyons, & A. Coyle (Eds.), Analysing qualitative data in psychology (pp. 158–174). SAGE Publications, Ltd. [Google Scholar] [CrossRef]
  48. Marshall, D. T., & Naff, D. B. (2024). The ethics of using artificial intelligence in qualitative research. Journal of Empirical Research on Human Research Ethics, 19(3), 92–102. [Google Scholar] [CrossRef]
  49. Mason, J. (2017). Qualitative researching (3rd ed.). SAGE Publications. [Google Scholar]
  50. McQuillan, D. (2022). Resisting AI: An anti-fascist approach to artificial intelligence. Bristol University Press. [Google Scholar]
  51. Morgan, H. (2022). Understanding thematic analysis and the debates involving its use. The Qualitative Report, 27(10), 2079–2090. [Google Scholar] [CrossRef]
  52. Nguyen-Trung, K. (2025). ChatGPT in thematic analysis: Can AI become a research assistant in qualitative research? Quality & Quantity. Advance online publication. [Google Scholar] [CrossRef]
  53. Nicmanis, M., & Spurrier, H. (2025). Getting started with artificial intelligence assisted qualitative analysis: An introductory guide to qualitative research approaches with exploratory examples from reflexive content analysis. International Journal of Qualitative Methods, 24, 16094069251354863. [Google Scholar] [CrossRef]
  54. Pattyn, F. (2024). The value of generative AI for qualitative research: A pilot study. Journal of Data Science and Intelligent Systems, 3(3), 184–191. [Google Scholar] [CrossRef]
  55. Paulus, T., Lester, J. N., & Davis, C. The construction of the role of AI in qualitative data analysis in the social sciences. AI & SOCIETY. Advance online publication. [CrossRef]
  56. Perkins, M., & Roe, J. (2024). The use of Generative AI in qualitative analysis: Inductive thematic analysis with ChatGPT. Journal of Applied Learning & Teaching, 7(1), 390–395. [Google Scholar] [CrossRef]
  57. Pownall, M. (2024). Is replication possible in qualitative research? A response to Makel et al. (2022). Educational Research and Evaluation, 29(1–2), 104–110. [Google Scholar] [CrossRef]
  58. Prigogine, I. (2003). Is future given? World Scientific. [Google Scholar]
  59. Raile, P. (2024). The usefulness of ChatGPT for psychotherapists and patients. Humanities and Social Sciences Communications, 11(1), 47. [Google Scholar] [CrossRef]
  60. Rocher, L., Hendrickx, J. M., & De Montjoye, Y.-A. (2019). Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications, 10(1), 3069. [Google Scholar] [CrossRef] [PubMed]
  61. Sandle, R., Gough, B., Day, K., & Muskett, T. (2024). Making the universe together: Baradian inspirations for the future of qualitative psychology. Qualitative Research in Psychology, 22(3), 680–707. [Google Scholar] [CrossRef]
  62. Shah, C. (2025). From prompt engineering to prompt science with humans in the loop. Communications of the ACM, 68(6), 54–61. [Google Scholar] [CrossRef]
  63. Sharma, A., Czégel, D., Lachmann, M., Kempes, C. P., Walker, S. I., & Cronin, L. (2023). Assembly theory explains and quantifies selection and evolution. Nature, 622(7982), 321–328. [Google Scholar] [CrossRef]
  64. Shotter, J. (2012). Gergen, confluence, and his turbulent, relational ontology: The constitution of our forms of life within ceaseless, unrepeatable, intermingling movements. Psychological Studies, 57(2), 134–141. [Google Scholar] [CrossRef]
  65. Sirisathitkul, C. (2023). Slow writing with ChatGPT: Turning the hype into a right way forward. Postdigital Science and Education, 6(2), 431–438. [Google Scholar] [CrossRef]
  66. Smith, J. A., & Osborn, M. (2007). Pain as an assault on the self: An interpretative phenomenological analysis of the psychological impact of chronic benign low back pain. Psychology & Health, 22(5), 517–534. [Google Scholar] [CrossRef]
  67. Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104, 333–339. [Google Scholar] [CrossRef]
  68. Tseliou, E. (2020). Discourse analysis and systemic family therapy research: The methodological contribution of discursive psychology. In M. Ochs, M. Borcsa, & J. Schweitzer (Eds.), Systemic research in individual, couple, and family therapy and counseling (pp. 125–141). Springer International Publishing. [Google Scholar] [CrossRef]
  69. Von Foerster, H. (1984). On constructing a reality. In P. Watzlawick (Ed.), The invented reality: How do we know what we believe we know?: Contributions to constructivism (1st ed., pp. 41–61). Norton. [Google Scholar]
  70. Von Foerster, H. (2003). Ethics and second-order cybernetics. In H. Von Foerster (Ed.), Understanding understanding (pp. 287–304). Springer New York. [Google Scholar] [CrossRef]
  71. Wolf, T. (2025). The Einstein AI model. Available online: https://thomwolf.io/blog/scientific-ai.html (accessed on 5 August 2025).
  72. Wong, M. L., Cleland, C. E., Arend, D., Bartlett, S., Cleaves, H. J., Demarest, H., Prabhu, A., Lunine, J. I., & Hazen, R. M. (2023). On the roles of function and selection in evolving systems. Proceedings of the National Academy of Sciences of the United States of America, 120(43), e2310223120. [Google Scholar] [CrossRef] [PubMed]
Table 1. Main Epistemological Stances and Perceived Pros/Cons of Different Approaches to Qualitative Data Analysis.
Approach: Human-performed data analysis
Implied epistemological stance: From positivist to constructionist
Perceived advantages *: Deep reflexivity; rich contextual interpretation; preservation of researcher–participant relationality
Perceived disadvantages *: Labor-intensive; time-consuming; potential for researcher bias (in positivist view); lower replicability across analysts

Approach: AI-assisted, in the small q tradition
Implied epistemological stance: Positivist/post-positivist
Perceived advantages *: Speed and efficiency; handles large data volumes; high inter-coder reliability; perceived objectivity
Perceived disadvantages *: Risk of epistemic flattening; reproduces dominant narratives; neglect of contextual nuance; potential data privacy breaches

Approach: AI-assisted, in the Big Q tradition
Implied epistemological stance: Relational/social constructionist
Perceived advantages *: Heuristic partner for abductive reasoning; can help surface contradictions and unexpected patterns; reflexive dialogic engagement with data
Perceived disadvantages *: Requires high AI literacy; algorithmic opacity; risk of unintentionally shifting the research toward small q and extractive logics

* The perceived advantages and disadvantages are subjective. They are contingent upon, and vary according to, the epistemological stance of each researcher.
