Societies

12 December 2025

Algorithms in Scientific Work: A Qualitative Study of University Research Processes Between Engagement and Critical Reflection

Department of Political and Social Studies, University of Salerno, 84084 Fisciano, Italy
This article belongs to the Special Issue Algorithm Awareness: Opportunities, Challenges and Impacts on Society

Abstract

This study examines the role of algorithms—particularly artificial intelligence—in scientific research processes and how automation intersects with expert knowledge and the autonomy of the researcher. Drawing on 25 qualitative interviews with Italian university scholars in the social sciences and humanities, the research explores how academics either incorporate or resist AI at various stages of their scientific work, the strategies they employ to manage the relationship between professional expertise and algorithmic systems, and the forms of trust, caution or scepticism that characterise these interactions. The findings reveal diverse patterns of use, non-use and critical engagement, ranging from instrumental and efficiency-oriented adoption to dialogical experimentation, and from identity-based resistance to systemic reflexivity regarding the institutional implications of AI. The study also underlines the need to examine the characteristics of disciplinary scientific cultures thoroughly, while highlighting the importance of promoting algorithmic awareness to support scientific rigour in the digital age.

1. Introduction

Recent developments in Artificial Intelligence (AI), particularly generative models such as large language models (LLMs), have led to their increasingly widespread use in scientific research.
Far from being considered mere technical tools, these technologies exert a significant impact across multiple dimensions, shaping practices of scientific knowledge production, the ways in which research is carried out and how it is regulated and, at a deeper level, the value systems and professional identity of those who conduct research. This paper explores how scientific work is being reconfigured through the introduction of AI, paying particular attention to the practices, meanings and forms of reflexivity that shape the relationship between researchers and algorithmic systems. It examines how university scholars either integrate or resist using AI at different stages of research, how they negotiate the boundary between expert knowledge and algorithmic automation, and how potential forms of trust, distrust and critical awareness of these tools orient their decisions. The study also investigates how academics define the autonomy and identity of scientific work within a context marked by increasing levels of automation. Additionally, a cross-cutting analytical dimension concerns algorithmic awareness, understood as the set of cognitive, behavioural, reflective, and affective dimensions [1] through which scholars recognise, evaluate and orient the role of algorithms within their research practices.
More specifically, the study is based on qualitative empirical research focusing on a group of Italian academics working in the social sciences and humanities, whose experiences provide insight into how AI is incorporated into research cultures and practices.
The work is situated within a broader body of international studies dedicated to the spread of AI in the context of university research [2,3,4,5].
In the Italian context, sociological attention has previously focused mainly on students and their learning processes, while the transformations in research practices and the scientific activity of researchers have only recently become the subject of institutional regulation and reflection. Several universities (including Bologna, Trento and Siena) have developed policies and guidelines for the ethical and responsible use of AI for the entire academic community. Furthermore, in 2025, the CRUI established a framework agreement enabling associated Italian universities to access ChatGPT Edu—providing access to the latest model available at the time (e.g., GPT-4o)—on favourable terms for students, teachers and researchers, thereby encouraging its use in both teaching and research.
Despite these developments, there are still a limited number of studies focusing on the everyday practices of researchers, the ways in which they organise and orient their cognitive work, forms of reflexivity and resistance and their relationships of trust towards the algorithmic systems underlying AI tools.
The article is organised into the following sections: the next section outlines the theoretical framework, focusing on the intersection between artificial intelligence and scientific practices, the configuration of expert knowledge, forms of trust and uncertainty, representations of researcher identity and the role of algorithmic awareness in mediating these interactions.
The qualitative methodological approach is then described, detailing the development of the interview outline, its administration, transcription, analysis and interpretation of the results obtained. This is followed by a presentation of the main findings: AI and Research Approaches; Practices of Non-Use and Forms of Resistance; Configurations of Trust; Expert Judgement and Algorithmic Mediation; and Professional Identity and Autonomy in Scientific Research. The paper concludes by synthesising the results into a set of main approaches that reflect the prevailing ways in which researchers relate to AI.

2. Artificial Intelligence, Expert Knowledge and the Practices of Scientific Work

Artificial intelligence (AI), through its diverse tools and approaches, is increasingly being employed in research activities and scientific knowledge production processes [6]. Many studies highlight how the adoption of AI influences various stages of academic work—whether on a theoretical, methodological or operational level [7]—with applications that differ considerably across disciplines [8]. Even within the humanities and social sciences, which have traditionally been less reliant on technology than the hard sciences, AI is now widely utilised [9]: from bibliographic research to information synthesis, from editing to idea generation, from data cleaning to the creation of synthetic data, and from analysis and trend identification to interpretation and the preparation of formal documents or presentations. Recent research suggests [10] that while AI is considered particularly effective for repetitive tasks, it is now also playing a growing role in activities designed by researchers themselves, including those requiring significant levels of interpretative and conceptual input.
The so-called “tricks of the trade” referenced by Becker [11] thus gradually seem to be redefined through interaction with automated generative systems. Scientific apprenticeship and the collaborative nature of research are also evolving, as activities are increasingly being developed “with” and “through” automated tools. As a result, on the one hand, certain research tasks are being automated with greater frequency; on the other, researchers now need to acquire new skills, including the ability to guide, supervise and “teach” AI tools, for example, by guiding them in the interpretation of outputs. At the same time, concerns are emerging regarding the risk of obtaining false information, discriminatory results [12,13] or results that may be oversimplified or lack the analytical depth that underpins scientific reasoning.
Within this context, the relationship between automated systems and expert knowledge in scientific research becomes particularly complex, unfolding along multiple dimensions.
This article examines the relationship between artificial intelligence and scientific research, starting from the conceptual foundations of three principal pillars: expertise, defined as the set of theoretical and methodological competencies that constitute expert knowledge, together with algorithmic awareness, which encompasses the various dimensions involved in interacting with algorithms and AI systems; trust, understood as a way of managing uncertainty towards algorithmic systems; and the identity work of the researcher, based on the analysis of the practices through which scholars determine what can and cannot be delegated to automation whilst preserving the boundaries of their professional role. These three axes guide both the construction of the empirical design and the interpretation of the results.
A central aspect concerns the reconfiguration of expertise, which has been reshaped in response to changes in the relationship between science and technology [14,15]. In the case of AI, those who interact with generative models need to be able to interface with algorithmic systems, recognise their biases and limitations and formulate appropriate prompts. Thus, contemporary expertise increasingly incorporates skills related to algorithmic awareness, as these skills directly influence scientific practices and outputs. Algorithmic awareness refers to the extent to which individuals recognise the presence of algorithms within digital applications and understand their primary functions [16].
Recent studies on this subject view it as a phenomenon that can be divided into four main dimensions: cognitive, behavioural, reflective and affective [1,17]. Summarising the conceptualisation provided by Felaco [1,17], it is possible to consider how each dimension influences research practices.
According to Felaco [1], the cognitive dimension concerns the recognition of algorithms and an understanding of how they work. In the scientific context, this means acknowledging that many activities are mediated by algorithmic infrastructures—for instance, searching for academic articles on digital platforms that operate according to specific indexing criteria.
The behavioural dimension is evident in everyday practices, reflecting how users engage with algorithms: shaping content feeds, instructing algorithms, or attempting to influence the outputs. For researchers, this may also involve training models, verifying results, and both using and designing prompts.
The reflective dimension, which concerns the critical evaluation of the processes, outputs and effects of algorithms [18], plays a central role in scientific activities, where researchers are able to raise critical issues about the validity and quality of the knowledge obtained as well as the processes underlying its production.
The affective dimension emphasises the emotional reactions generated by interaction with algorithms: surprise, enthusiasm, trust, frustration, or suspicion, based on either accurate or misleading results. The emotions of researchers can therefore have an impact on their decisions on whether or not to adopt AI.
In this context, trust in algorithmic systems plays a central role. It can serve to reduce complexity [19] and, within the field of AI, may simplify cognitive tasks, but it also introduces new forms of uncertainty. Drawing on Giddens’ [20] perspective regarding the relationship between trust and abstract systems, AI can be seen as an abstract system in which individuals place their trust, even though they are unable to verify it directly. Recent analyses build on this view: for example, Kopsaj [21] argues that trust is not solely concerned with the technical quality of outputs, but also with the social, political and cognitive structure embedded within algorithms and the institutions that govern them.
The researcher’s trust in AI systems can therefore be articulated on different levels: trust placed in the direct interaction with AI, grounded in familiarity and personal experience [22]; and a more general trust, linked to the broader institutional, regulatory and ethical frameworks that produce and regulate AI. This distinction can also be linked to classic reflections on trust. Indeed, it can take both a generalised form and a more situated, interpersonal one [23]. Although referring to different contexts, a similar dynamic can be adopted here as researchers can combine a form of trust grounded in their concrete, hands-on experience with AI with a broader trust—or mistrust—directed toward the systems, actors and institutional logics that govern these technologies.
In this vein, Li et al. [24] propose a three-dimensional model in which trust derives from the interplay between factors related to the subject placing trust (based on their skills and experience) and the subject receiving trust (based on transparency, fairness and security), as well as the context (characterised by specific policies, norms and culture). Jung et al. [5] also highlight how the use of AI promotes increased trust, demonstrating that familiarity and involvement are associated with a greater perception of reliability.
When new practices and technical agents become part of a scientist’s routine, the identity of the researcher can be redefined. One case in point is academic writing, on which opinions diverge. Masters [25] argues, for example, that academic writing is the most mechanical aspect of scientific work and that AI could take over this role, thereby freeing researchers for more creative activities. Ellaway [26], in contrast, contends that writing is an intrinsic component of the cognitive process: arguing, analysing and interpreting are not functions that can be delegated without the risk of losing autonomy and skills.
These tensions illustrate how the partial delegation of interpretative skills can challenge the boundaries of expertise and the identity representation of the researcher [27]. According to Selenko et al. [28], AI can integrate, replace or generate new functions, creating identity threats when it intervenes in activities perceived as central, or opening up room for renegotiation when it amplifies or transforms existing tasks. In their study, Thominet et al. [29] identify four different roles that researchers may assume in their interactions with AI: the manager, who carefully supervises the model’s work, verifying its reliability; the teacher, who instructs the model on theories and methods, guiding its behaviour; the colleague, who engages in dialogue with AI as an equal, comparing interpretations and hypotheses; and finally, the advocate who uses AI to enhance the experience of users or participants, acting as a spokesperson for their needs. Similarly, Costa et al. [4] demonstrate how AI can function as a co-researcher, influencing the very meaning of “doing research”: activities such as coding, data reviewing or preliminary text production are progressively shared or delegated to generative models, thus shaping the researcher’s perception of their professional role. Based on these considerations regarding the complex relationship between the various elements linking expert knowledge and algorithmic knowledge, empirical research was designed and conducted to investigate how these dynamics operate within contemporary research practices.

3. Methodology

This section outlines the design of the empirical research, which focused on analysing the relationship between artificial intelligence and contemporary scientific practices in academia.
Specifically, 25 qualitative interviews were conducted with Italian academics in the social sciences and humanities. The interviewees included scholars from a range of disciplines, including sociology, social sciences methodology and social research techniques, social statistics, business organisation, history, anthropology, linguistics and philosophy. The decision to focus on a plurality of social science disciplines—spanning interpretive, theoretically oriented fields as well as more analytically driven ones—reflects the need to understand how the introduction of artificial intelligence intersects with different scientific approaches and processes, from the centrality of interpretation and the situated construction of meaning to traditions grounded in formalisation and measurement. At the same time, these disciplines have been characterised by varying degrees of familiarity with technology, allowing us to observe both points of contact and differences in methods of investigation and modes of scientific knowledge production.
The selected group was diverse in terms of gender (13 men and 12 women), age (lower age range: 8 participants aged up to 42; middle age range: 9 participants aged 43–51; upper age range: 8 participants over 52) and academic position (early-career: 8 researchers/junior researchers; mid-career: 9 associate professors; senior: 8 full professors). This diversity made it possible to explore whether the relationship with algorithms and AI systems varies depending on the nature of the disciplinary field and the scientific cultures of reference, as well as to identify potential differences linked to other participant characteristics. In particular, age and academic status emerge as important factors for understanding the possible relationship between career trajectories and AI. The choice of a qualitative interview was driven by the need to highlight the centrality of the interviewee’s point of view and to reconstruct, through the dialogical approach inherent to this technique, the universe of meanings, practices and reflections that guide their daily actions [30]. Specifically, this technique, which is centred on interpersonal communication dynamics, allows for in-depth exploration of the subjective dimension, revealing the thoughts, beliefs, values and self-interpretations of researchers, while also activating a process of reflexivity that enables interviewees to reconsider and reflect on their own experiences [31].
The cognitive objectives of the research were operationalised into a series of dimensions that shaped the interview outline: experiences of use, practices of non-use, level of trust, expert judgement and algorithmic mediation, the researcher’s identity representation and the future prospects of AI in scientific research. Specifically, initial questions explored how AI and algorithmic tools were incorporated into different phases of scientific work. Particular attention was paid to the perception of strengths and limitations, in order to understand how the interviewees evaluated AI tools and approaches and to discern varying levels of algorithmic awareness within their research practices.
An additional aspect explored in the study concerned practices of non-use and resistance to AI, in order to identify the main reasons that lead some researchers to limit or avoid its use. A central part of the interview focused on analysing the researcher’s levels of trust in AI, their views on the degree of reliability attributed to algorithmic tools, the range of verification methods they employed and the elements of uncertainty associated with the models used. In this context, the relationship between automation and expert knowledge was also analysed, taking into account how scientific activities are organised and managed. These issues enabled an analysis of the level of intellectual autonomy that researchers maintain within a broader system increasingly characterised by interdependence between human and technological components.
A significant dimension of the study concerned the impact of AI on how participants define the autonomy and scope of scientific work, particularly in terms of how they articulate the identity of scientific practice and their self-perception as researchers. Finally, the interviewees were invited to reflect on the future of the relationship between research practices and AI.
The interviews were conducted and recorded, then transcribed and analysed using qualitative research procedures [31,32]. In particular, a coding framework was developed that allowed the material to be grouped into thematic classes and interpretative cores, understood as sets of textual elements that are interconnected through semantic and lexical relationships of proximity and contrast [32]. This analytical process provided a nuanced and comprehensive picture of the ways in which AI is intertwined with scientific research practices and the role of the researcher, identifying recurring patterns as well as elements of differentiation.

4. AI and Research Approaches

The first part of the interview focused on the researchers’ experiences with using algorithms and artificial intelligence in scientific research. All the interviewees stated that they had interacted with such systems, either directly or indirectly, and had integrated them, albeit in different ways, into their scientific work practices. Specifically, an analysis of the responses highlighted three main approaches in the way AI is used, suggesting that there are different ways of conceptualising the role of automation within scientific activities. These approaches should be understood as types of AI usage rather than a rigid classification applying to a particular group of scholars. Looking at the results, it is clear that the boundaries between these approaches are fluid and that individual researchers sometimes combine elements from multiple perspectives depending on the task at hand, the disciplinary characteristics of their field, and other aspects such as their career paths and experiences.
The first approach can be described as one in which artificial intelligence is employed as a technical and operational support integrated into routine academic tasks. This use appears most commonly among academics in the methodology of social research, statistics and business organisation, fields that have traditionally been oriented towards more procedural and data-intensive work. In particular, respondents report using both generalist software—such as ChatGPT, Gemini or WenQ—for writing, translation, preparation of teaching materials and summarising texts, as well as specialist applications to manage bibliographies and automated source searches. With regard to the latter, given the exponential growth of content driven by digitalisation and the expansion of scientific production, this group of researchers believes that AI supports them in the content selection phase, which, according to the interviewees, would be far more laborious to carry out manually. In this case, AI serves as a system for mitigating the growing complexity of scientific knowledge, which is constantly and rapidly expanding. On the other hand, very few respondents reported using AI for data analysis, owing to a high level of mistrust in the results obtained. Some interviewees also added that these processes are based on algorithmic criteria whose underlying procedure is unknown to the user. This form of use aligns with literature highlighting AI as a rationalisation resource that promotes efficiency while leaving interpretative authority with the researcher [2,10]—a perspective that is consistent with broader reflections on AI as an augmentation rather than a substitution tool in professional contexts [33].
This type of use is accompanied by a certain degree of reflexivity: interviewees recognise the potential of the tools to increase efficiency in their work, but also the need to maintain a constant level of control. As one scholar points out: “The first rule of thumb is that you always have to check step by step what artificial intelligence produces as output”. One researcher in the field of statistics confirms: “I use it for revision or formatting in LaTeX, but it remains a technical support: methodological control cannot be delegated”. In these cases, AI is functional to efficiency and productivity, but does not change the structure of the research. Referring back to the typology of Thominet et al. [29], the researchers in this approach appear to assume the role of a manager, as they are careful to control and supervise the activities carried out by AI.
These findings suggest that in this orientation AI serves as an optimiser of procedures within existing cognitive workflows [34]. Hence, researchers integrate AI as an operational support, combining their disciplinary skills and algorithmic awareness to exploit its potential. Expertise is not replaced, but supplemented with new resources to manage the increasing complexity, time, and workload involved in academic scientific research.
A second approach, mainly adopted by scholars in the humanities and historical fields, demonstrates a more creative and personalised use of AI within research practices. One interviewee, who carries out research into contemporary history, states that he has developed a personalised prompt when using ChatGPT in order to ensure that his interactions follow criteria consistent with historical writing and reasoning, calibrating the tone, style and disciplinary references he obtains. This prompt reads: “From now on, you must think like a historian… analyse my assumptions, provide counterarguments and test my logic”. In this scenario, AI acts as a disciplined interlocutor and cognitive tool that facilitates the testing of reasoning, albeit under strict human guidance. As he then specifies: “I don’t approach artificial intelligence to be replaced by it, but to be able to control it”. This finding supports the idea that the researcher can train the models to closely match disciplinary reasoning patterns.
Considering the “behavioural” and “reflective” dimensions of algorithmic awareness [1,17], it would seem that respondents adopting this approach not only interact with the model, adapting it to their own purposes, but also critically evaluate the quality of the interaction, regulating the output in line with the cognitive aims of their research. Furthermore, the role of the researcher would seem to fall within that of the “teacher” according to the typology of Thominet et al. [29], who guides and trains AI to come up with counterproposals, assumptions and results.
This orientation suggests a way of conducting research in which interpretation remains at the core of the researcher’s expertise; in this case, AI takes on a cognitive stimulus function with which to compare hypotheses, categories and narratives. Integrated into reasoning processes, it remains subordinate to human interpretative work. It is important to note that these approaches do not correspond rigidly to definitive categories of researchers: there are certain interviewees, such as methodologists, who combine operational uses with scepticism towards data analysis, or others, such as historians, who display creative yet cautious positions.
A third approach is seen in those who maintain a critical distance from the use of AI. This stance is prevalent among sociologists with theoretical and critical orientations, as well as philosophers and some linguists, who use AI only for low-risk tasks (e.g., text revision), expressing doubts about its ability to support scientific reasoning. As one sociologist reports: “It doesn’t come naturally to me to use it… I consider scientific work to be something artisanal”. Another declares: “I find it hard to believe that artificial intelligence can produce anything that is not external to me”. In the field of linguistics, use is selective: “I only use these systems to rephrase in English… I don’t trust them for scientific analysis”. In this case, scientific activity, in line with recent literature on epistemic risks and standardisation [25,26], is claimed to be a tailor-made and localised practice, which requires slower reflection times, greater forms of in-depth analysis and a theoretical sensitivity that cannot be automated. The choice to make a highly marginal use of AI represents a form of protection for interpretative competence; from this point of view, technology is excluded from the processes of meaning construction. This aspect is in line with broader criticisms that highlight how algorithmic systems can encourage cognitive simplification and reduce opportunities for independent interpretation [35].
On the subject of age differences and academic role, it becomes apparent that younger researchers tend to establish a more pragmatic, functional relationship with AI, oriented towards efficiency and time management (such as writing, translation, and repetitive activities), while maintaining strong human control. Among academics of intermediate age and role, a more reflective approach prevails, in which AI can act as a disciplined interlocutor without replacing the researcher’s judgement. Among full professors, on the other hand, two distinct orientations emerge: on the one hand, a strong experimental curiosity, and on the other, maintaining greater critical distance to preserve the autonomy, originality and quality of traditional scientific knowledge.
Based on these different experiences, the strengths and weaknesses of each were explored from the interviewees’ perspectives. The respondents reveal a plurality of positions that revolve around two polar opposites: the first in which AI is considered useful for optimising and expanding research practices, and the second which views the use of AI as a source of uncertainty and risk.
Among the main advantages and in line with previous studies on the subject [2,10] that emphasise the role of AI as a tool for promoting efficiency and the rationalisation of work processes, the interviewees recognise the speed with which certain activities can be completed, especially those that are less conceptual and more repetitive, in addition to the ease of receiving linguistic and communicative support. One scholar reported using ChatGPT or Copilot “to speed up certain stages of writing and research”, while another one explained that they use AI “for proofreading or formatting in LaTeX, but only as technical support”. Others value AI’s ability to facilitate content conversion, stating: “I used WenQ… to turn an article into a presentation”. The advantages mentioned can clearly be understood in light of the growing need to save time [6] in practical operations; however, they do not replace activities with more theoretical content [34].
Some interviewees also emphasised its ability to generate insights for further exploration, while no mention was made of its interpretative potential.
Alongside the advantages, several weaknesses emerge. The first concerns reliability: all interviewees recognise the risk of errors, invention and opacity in algorithmic processes. A second element regards the superficiality of the content generated, as one participant highlights: “With artificial intelligence, you’ll probably do more research, but of poor quality”. One interviewee points out that AI tends to produce overly simplistic summaries: “It gave me too concise a summary… I went back and did it myself”, highlighting the risk of a progressive loss of intellectual autonomy that can occur even unconsciously. This concern aligns with Ellaway’s [26] reflections and the need not to delegate to AI the conceptual and interpretative activities that are central to the development of scientific knowledge.
A final aspect concerns the relational dimension, which many interviewees considered crucial. One researcher points out that AI cannot replace the scientific exchange that takes place in interpersonal modes: “I strongly believe in exchange… in science, physical networks work… that’s where scientific curiosity is born”. For example, according to the respondents, the community dimension which usually characterises academic practices such as conferences, seminars, and informal peer exchanges remains essential. These contexts, grounded in human interaction, allow for the circulation of ideas, the negotiation of meaning, and the emergence of original research questions in ways that AI cannot replicate.

5. Practices of Non-Use and Forms of Resistance

After analysing the main experiences amongst researchers who make use of AI, this section investigates the practices and reasons behind non-use and resistance to the forms of technological automation that characterise AI. The analysis examines the ways in which some interviewees adopt a more cautious attitude or a very limited use of artificial intelligence and algorithmic processes in their research work. The findings illustrate that lower adoption does not necessarily correspond to a rejection of the technology, but is more likely to be linked to forms of critical reflection on the consequences of automation in terms of scientific knowledge, methods and approaches used. Furthermore, it emerges that forms of resistance are associated with specific levels of algorithmic awareness, which activate its different dimensions, as conceptualised by Felaco [1,17]. The reasons expressed are based on a range of considerations: fear of losing control or originality in scientific thinking; doubt about the validity of the results generated by algorithms; the issue of data ownership; and more generally, awareness of the limitations of the models available. These practices can be broadly grouped into the following main forms of resistance.
For a number of interviewees, especially sociologists, philosophers and scholars with a theoretical critical orientation, a form of epistemic–interpretative and, in some cases, socio-institutional resistance represents a means to preserve the intellectual autonomy and plurality of perspectives that underpin the construction of scientific knowledge. These researchers naturally make minor use of AI. The findings from the interviews highlight two particularly important aspects here. First, some researchers raised concerns about the long-term risks of conformity and methodological standardisation that automation may generate. As one respondent clearly states: “There is a negative aspect that could flatten research, because methodologically all research could become the same”.
Second, limiting the use of AI is described as a way to defend the slowness, depth, and complexity required by scientific reasoning, a position echoed by those who see deliberate restraint as a safeguard for intellectual autonomy.
These concerns are also linked to a broader, systemic issue: the growing standardisation of scientific work and the neoliberal logics of academic evaluation, which the massive adoption of algorithms and metrics could reinforce. As one interviewee explicitly argues:
It is not resistance to artificial intelligence, but resistance to this neoliberal academia that is proliferating. If appropriation is passive, we risk suffering its effects; if, on the other hand, it is active and critical, then academia can become a founding nucleus of a more equitable society.
In line with this position are those who believe they are aware of the non-neutrality of the algorithms underlying AI, which are constructed on the basis of interests, values and approaches that are external to the world of scientific research [36].
In this case, the reflective dimension of algorithmic awareness emerges, concerning the ability to recognise bias and to evaluate processes and outputs in light of the non-neutrality and socio-institutional implications of algorithmic systems. For a number of interviewees—particularly those investigating linguistics and social research methods—a more procedural form of reluctance prevails, rooted in expert knowledge of the limitations of models and a sense of scientific responsibility. One interviewee expresses mistrust based on specific technical expertise, reporting that he avoids using tools for scientific or predictive analysis because “they do not guarantee replicability or transparency of datasets”. For these researchers, the partial non-use of AI is linked to issues regarding the validity, reliability and verifiability of the results obtained. In this case, the interviewees demonstrate a high level of algorithmic awareness, primarily converging in the cognitive and reflective dimensions, which is linked to an in-depth understanding of the technical limitations of AI.
At times the fear of using AI also concerns more practical aspects, as shown by the statement of one researcher who explains: “I am afraid that the whole ethical process behind this writing could lead to it being identified as a product that is not mine (…) and then I could have problems with the journal, which might not publish it because the product appears to have been copied”. These concerns underscore the importance of human control and editorial responsibility in scientific processes [3,37] and also highlight the affective dimension of algorithmic awareness, which recalls the role of emotion as a factor in apprehension and suspicion in interactions with algorithms.
Finally, a last, more heterogeneous approach, shared by historians, cultural sociologists and anthropologists, shows more sporadic and variable forms of resistance. One history professor, despite having created a customised prompt, emphasises the interpretative limitations of AI and the need to “train it in the language of the discipline.” This perspective stems from scepticism towards the automation of complex interpretative processes, although this aspect can be partially bridged through the personalised and highly guided use of AI. These practices demonstrate a level of algorithmic awareness that can be traced back above all to the behavioural dimension, in which the user develops strategies for controlling, adapting or mitigating the algorithm.
Overall, practices of non-use and selective use do not appear to be simple resistance to AI, but rather serve as expressions through which researchers activate forms of protection for the cognitive essence of their work. Hence, the desire to reduce the use of AI can be interpreted as a means of preserving the interpretative autonomy of expert knowledge. Although the culture of the scientific discipline involved appears to be an element determining resistance practices, certain patterns also emerge when considering the profiles of the various respondents. Resistance is expressed mainly, though not exclusively, by senior academics, probably since they have been socialised into more traditional research cultures, where the interpretative dimension of scientific work is not mediated by technology. Furthermore, as a result of their established status, they seem less exposed to the acceleration and performance pressures that increasingly characterise contemporary academic careers, so they feel less urgency to resort to such systems. Their cautious stance, therefore, reflects both a different career path and a conception of scientific work grounded in slower, more interpretive and artisanal forms of scientific knowledge production.

6. Configurations of Trust

The central part of the interview aimed to explore the interviewees’ level of trust in algorithmic systems, as well as possible reasons behind uncertainty and mistrust in their relationship with AI. The respondents describe a complex relationship in which trust is at times granted, but sometimes revoked or remodelled depending on the tasks to be performed and the circumstances of use.
One point of view, particularly prevalent in scientific disciplines oriented towards data management (e.g., statistics, social research methodology, business organisation), is that of AI conceived as a technical support restricted to carrying out only well-defined tasks. In these contexts, a pragmatic form of trust emerges: interviewees trust the technology because it speeds up repetitive activities such as translation, linguistic verification, or bibliography preparation. In these uses, AI is not attributed any autonomous interpretative role, nor is it considered to possess scientific competence; rather, it functions as a tool that optimises routine processes while leaving expertise and judgement firmly in human hands. In this perspective, technology can automate certain steps, but human control remains essential. One researcher, for example, states they use ChatGPT to streamline certain stages of writing, but points out that they subsequently review “every line”. In contrast, for one social statistics scholar, trust is accompanied by the possibility of verifying the accuracy of the results. It is, in essence, a mode of trust that leaves interpretative authority entirely with the researcher and reaffirms the indispensability of human validation. However, this picture changes when AI is used for more analytical activities, such as data analysis: here, a greater sense of caution prevails, often linked to experiences of misleading results, as well as a lack of clear information on training data and the extreme variability of the results sometimes produced by such systems.
A lower level of trust is linked to greater technical knowledge of how these systems work, the models’ limitations and the opaque mechanisms that generate their outputs. For example, one linguistics professor notes that the probabilistic, non-replicable and non-verifiable nature of AI makes it unsuitable for specialised tasks: “hallucinations are not bugs but a feature of these systems and therefore cannot be eliminated… AI is not a calculator, while I can delegate to a calculator, I cannot delegate to these systems: they do not guarantee the same result”.
A second approach, mainly found in researchers from anthropology, history and sociology of cultural processes, perceives the relationship with AI as one based on experimentation, experience gained over time and a more dialogical approach aimed at testing hypotheses and conceptual ideas. In this case the use of AI activates a form of moderate trust, regulated by a strong sense of control and the centrality of human thought.
Finally, there is an orientation, shared especially by critical sociologists and philosophers, characterised by generalised mistrust and based on a broader systemic dimension linked to the socio-institutional and governance context in which these tools operate. In particular, some respondents believe there is a need to be cautious regarding non-transparent models and approaches based on interests outside the scientific community, such as those of the commercial corporations which, through the management and control of the digital platforms on which AI tools operate, end up influencing the processes of scientific knowledge production. However, as one researcher states: “the power of the scholar is increased because they have to be even better at understanding who is behind it… who is giving you the information?”. For these interviewees, the sense of mistrust is likely connected to a form of cognitive and reflective awareness of algorithmic processes, rooted in the understanding of the non-neutrality of algorithms and the biases and discrimination they can produce. This gives rise to a type of trust that goes beyond the technical functioning of AI to encompass the wider institutional and governance spheres that are responsible for designing and controlling AI systems. In other words, the main concern is not the output of AI models themselves, but the people who determine their characteristics and modes of operation.
Overall, trust configurations seem to be partly associated with disciplinary background and corresponding modes of AI tool use. More specifically, levels of trust vary depending on the type of tasks delegated to it, the degree of technical understanding of the tool and the level of algorithmic awareness that characterises each group. Moreover, although prevailing trends emerge in the configurations of trust, some scholars, such as sociologists, show a certain internal mobility between the more interpretative and critical positions. Some express a dialogical and controlled trust typical of interpretative approaches, while others seem to show more radical forms of caution, closer to critical and systemic perspectives. These positions are in line with the not always linear relationship between disciplinary approaches and AI.
With regard to other variables, unlike resistance practices, age and academic seniority do not generate clearly fixed or defined patterns. Mistrust appears to be slightly more pronounced among older scholars with higher status, while younger researchers are generally more open to the operational uses of AI. Even amongst this group, however, a certain level of caution and limited trust emerges.

7. Expert Judgement and Algorithmic Mediation

The dimension of trust is closely linked to the relationship between algorithmic automation processes and expert knowledge, as well as to the transformations, perceived or feared, that automation could introduce into the production of scientific knowledge. For this reason, the interviews also explored how scholars conceive the relationship between expert judgement and algorithmic judgement and how AI interacts with their intellectual autonomy.
The first position that emerges shows a clear distinction between expert knowledge and algorithmic output. For most respondents, the subjectivity of the researcher and their methodological and substantive preparation, as well as their ability to interpret, attribute meaning and construct connections, is an irreplaceable element and one that must remain at the centre of the research process. This point of view is typical among scholars in historical and interpretative disciplines, who consider expertise as irreplaceable and emphasise that human interpretation must remain primary, while AI can only operate within boundaries defined by the researcher. This asymmetry between human and machine roles is evident in the metaphors used by the interviewees. As one researcher states: “It is a semi-finished product. What artificial intelligence provides is, and cannot be anything other than, a semi-finished product. Then the task of rebuilding the entire castle remains ours”. Other respondents reinforce this stance by describing interaction with AI as a highly supervised and researcher-driven process.
One historian describes a mode of interaction in which human interpretation remains primary: “I have always interacted in such a way that it was not artificial intelligence that prepared the texts for me, but that I prepared the texts so that I could subject artificial intelligence to understanding what I was looking for, what I wanted, what I was dealing with, helping me to analyse my texts”.
This historian further elaborates on the implications for intellectual autonomy, responding: “I asked myself the question: am I giving up a share of my intellectual sovereignty? I value my personal originality, so I will never write anything dictated by AI. I use a customised prompt because I want AI to give me another point of view. Then I check it and study it. AI does not prepare texts for me: I prepare texts to submit to AI, to see how it interprets them”. This group also includes a social research methodologist who states that, “I am not afraid of artificial intelligence; I give it the rules”, highlighting that even outside the fields of history and interpretation, some scholars share the belief that AI should remain strictly subordinate to human supervision.
A second perspective, shared mainly by scholars in social research methodology, statistics and economics, interprets AI as a tool that can strengthen expert judgement rather than undermine it.
From their point of view, intellectual autonomy is not perceived as being under threat; on the contrary, it is perhaps strengthened through control of the activities sometimes entrusted to AI. As one researcher argues, AI tools “are used to do things faster that used to take time, but scientific work is something else”. For this group, it appears that awareness of AI’s limitations actually promotes the ability to exercise control.
Alongside these positions, a third emerges expressing more profound concerns. This position, held by theoretically oriented sociologists, philosophers and a few historians, illustrates a fear of losing cognitive sovereignty, interpretative depth and a plurality of perspectives. Some sociologists fear that the widespread adoption of AI could lead to a flattening of scientific production, a reduction in interpretative plurality, or even a form of cognitive dependence: “The risk is that we neglect to train our minds and cognitive abilities”.
A similar concern is raised by another researcher, who highlights the superficiality of AI-generated content: “The problem is that these systems flatten everything; they work at a very superficial level. If you ask for a bibliographic search or a survey on a topic, the answer is always too general. For specialist work, they become almost useless”.
In summary, several different approaches emerge, each with its own distinctive nuances. Notably, the distinction between the first and third approaches is that the former views AI as a tool that is subordinate to the researcher’s interpretative control, whereas the latter considers AI to be a potential cause of cognitive flattening and, consequently, aims to create distance in order to safeguard the researcher’s cognitive autonomy. Disciplinary background is an important factor in defining these differences, but not in a rigid or deterministic way. Some historians, for example, while adopting the first approach that emphasises the centrality of human interpretation, also demonstrate a heightened awareness of critical issues associated with the third approach, particularly the risks of simplification induced by AI.
With regard to differences in age and academic role, the interviews reveal only limited variation in the way the relationship between expert knowledge and algorithmic knowledge is conceptualised. Younger researchers tend to express greater openness, especially with regard to more mechanical activities, whereas full professors in particular tend to adopt a more protective stance, based on the defence of intellectual sovereignty and established practices of knowledge production. Despite these different perspectives, all interviewees agree that expert judgement and conceptual interpretation must remain central to research, even when AI is used as a support.

8. Professional Identity and Autonomy in Scientific Research

A further aim of the study was to investigate whether and how the adoption of algorithmic tools influences the ways researchers define the autonomy and boundaries of scientific work, in terms of identity representation. The interviewees’ opinions on this matter differ. An initial analysis of the orientations of the researchers interviewed found that a number of them believe that AI does not substantially alter their identity. For these individuals, AI expands certain operational functions rather than changing fundamental aspects of their roles. This position, shared primarily by scholars in social research methodology and statistics, recognises the instrumental usefulness of AI, but also acknowledges clear limits between the contributions provided by AI and what cannot be delegated. Their statements illustrate this viewpoint: “It does not change your identity: you are still the one who makes all the important choices”; and “The research you can do involves getting your hands dirty, doing field research, observation and ethnography”. For this group, the researcher’s identity remains centred on their competence, physical presence during empirical investigation, and active participation in knowledge production. Thus, AI is assigned an operational support role in some activities, without altering the core expertise of the researcher’s role. Some interviewees also describe AI as a potential tool for reducing the bureaucratic burden increasingly associated with academic work. From this perspective, AI can be used defensively to carry out routine or purely administrative tasks, thereby freeing up time and energy for research.

In contrast, a second group expressed greater caution and fear, identifying AI as a possible threat to the specificity of the academic role. This approach, found among sociologists, especially those in interpretive or critical traditions, and philosophers, sees identity as exposed and potentially at risk of distortion: “…it could lead to a distortion of the researcher, because the researcher might no longer be a researcher, but simply someone who gives commands…”. This perception is intertwined with the logics of the academic system, particularly those related to hyper-productivity and performance metrics. Identity, according to the interviewees, does not necessarily change, but it appears to be under increasing pressure, and AI intensifies existing tensions, particularly in light of current evaluation systems. The stage of a researcher’s career also influences the perception of vulnerability: “In the early stages of a career, use may be greater, but then it decreases”.
These concerns are intertwined with growing bureaucratic and institutional demands. As one participant asks, “Why should artificial intelligence change our role?”, suggesting that AI may simply bring to the surface an identity crisis that is already underway. Another expands on this idea: “The identity of the researcher, the love of research, the ability to do science freely and autonomously and to be open to plural contributions, has been threatened by processes that have nothing to do with artificial intelligence”.
In light of these two different perspectives, there appears to be a tension between the potential that AI offers in improving efficiency and the pressures on productivity inherent in the contemporary academic world. Some respondents who identified with the first approach, particularly those who use AI instrumentally for routine tasks, do not recognise the link between AI, bureaucratic acceleration and the neoliberal performance regime, but consider the technology as a tool that speeds up existing procedures. Others, however, particularly critical sociologists and several high-level scholars, interpret AI as a further element in the broader process of the “proletarianization” of academic work, intensifying metric-based evaluation and increasing bureaucratic work at the expense of in-depth, reflective scientific research [38,39].
Finally, some researchers take a different approach, aimed at preserving the specificity of the researcher’s role. This position, which is prevalent among historians and some sociologists of cultural processes, grounds identity on competence and the ability to develop theoretical reasoning, analysis and interpretation. As one of the interviewees states: “If there are fundamental aspects of our work that make us proud, we can say: ‘We are researchers, you are not’.” From this perspective, preserving such knowledge and skills requires adopting a reflective attitude towards automation mechanisms.
In the final part of the interviews, the scholars were asked to reflect on the future of scientific knowledge and the contribution that AI can make or, conversely, the obstacles it could create to research processes.
A key element, shared by all those interviewed, concerned the centrality and cognitive responsibility of the researcher, who is considered the fulcrum around which scientific activity is organised. The fundamental quality necessary for the future was identified as intellectual freedom: the ability not to be influenced by ideologies, and not to use AI to confirm one’s own prejudices, but to exploit it to broaden one’s views and horizons of meaning.
Many respondents believe that AI will primarily serve as a technical and operational enhancement tool, useful for speeding up formal practices that are increasingly part of academic work. From this point of view, by calibrating the relationship between procedural automation and the researcher’s interpretative activities, AI can improve research.
The risk that AI could distort the role of the researcher is considered only partially realistic: total integration is seen as unlikely, not least for economic reasons linked to the high costs attached to the most advanced AI systems.
A recurring theme is the urgent need to design structured training courses, which are considered essential for fostering awareness and adoption of AI in scientific research. Training is considered crucial both for the less digitally savvy generations to familiarise themselves with the tools, and for younger people who risk the opposite effect: an early dependence on automated systems that weakens traditional methodological and theoretical skills, which respondents consider the true foundations of scientific knowledge.
From this perspective, training is not only about technical skills, but also a broader process of critical socialisation to AI, aimed at supporting an understanding of the limitations, biases, social implications and characteristics of the actors involved in such processes.
Hence, the positions expressed by the interviewees can be interpreted as ways of preserving the specificity of the scientific role in a context of increasing automation. The distinction between what can be delegated and what must remain a human activity lays the foundations for understanding how the boundaries of scientific professionalism are being redefined in the contemporary age. As in the previous sections, disciplinary differences also emerge here, although with some overlaps: scholars tend to position themselves differently depending on the traditions of their scientific discipline, even if several respondents combine elements from more than one stance. For example, some sociologists tend to take the view that their identity is under pressure, while others lean towards a position focused on preserving the specificity of the researcher’s role and their ability to develop interpretative processes. The interviews also reveal variations in the definition of the researcher’s identity depending on age and academic status. While these differences are neither particularly marked nor rigidly defined, many senior researchers seem especially concerned with protecting their scientific identity. Experiencing more directly the technological transition, and the gap between traditional and emerging ways of carrying out research in academia, they tend to perceive AI as a potential threat to their autonomy and their sense of identity as researchers.

9. Conclusions

The findings of this study offer a structured framework for interpreting certain aspects of the relationship between scientific research work and the use of AI. Specifically, based on the dimensions investigated in the study, it is possible to identify a number of key orientations.
Given the complexity of the phenomena and the multiple associations among uses, resistance practices, trust, expert judgement and professional identity, these orientations capture only the prevailing features. Indeed, as shown in the previous sections, there are overlaps and intersections that cannot be reduced to mutually exclusive groups. According to a first, instrumental orientation, AI is conceived as a functional resource for increasing efficiency and productivity, used intensively but always in a carefully controlled manner. Research tends to become more standardised and proceduralised [6,34], as AI accelerates the management of sources, the preparation of texts, the production of teaching materials and, at times, certain data analysis operations. This results in a more rationalised research structure, with compressed timescales and a growing centrality of the operational phases. Within this vision of AI, the cognitive and behavioural dimensions of algorithmic awareness prevail [1].
In this approach, trust is pragmatic and limited to routine and mechanised activities; expert judgement remains firmly in human hands; professional identity is not threatened; and forms of resistance are not particularly pronounced. This approach is prevalent in methodological and statistical disciplines and appears to be more common among mid-career researchers, for whom AI serves as an operational support for workload management.
A second, dialogical and experimental perspective sees AI as a “guided” interlocutor, tasked with generating cognitive insights that the researcher interprets, modulates and adapts. In this case, a conversational approach emerges in which generative models activate modes of cognitive co-construction [29]. Here, the behavioural and reflective dimensions of algorithmic awareness emerge most prominently.
Within this approach, trust is moderate and the use of AI remains under human control; identity does not appear to be threatened, but some forms of resistance emerge to avoid the risks of superficiality that AI can induce.
This orientation belongs more to the historical, anthropological and interpretative disciplines and cuts across generations, but is more common among mid-career scholars and some senior scholars interested in forms of experimentation that are still managed by the researcher.
This is followed by a critical-defensive approach, characterised by infrequent use of AI. In this case, AI is seen as a risk to interpretative depth, originality of thought and the slowness necessary for complex cognitive processes [26]. Scientific work, understood as a tailor-made and localised activity in line with traditional perspectives [40], must not be eroded by the generative power of AI. This orientation embodies resistance to the logic of standardisation and homologation [27], confirming that research cannot be delegated to a sequence of formalisable operations. Here, the reflective dimension of algorithmic awareness, linked to the defence of intellectual autonomy, clearly prevails. From this perspective, the level of trust is limited; expert judgement is perceived as vulnerable to standardisation; the identity of the researcher is under pressure; and forms of resistance concern both the conception of scientific knowledge and the institutional level.
This often cautious use of AI is accompanied by attention to the socio-institutional context in which the technology is produced and incorporated. Reflection focuses on the governance logics underlying AI systems, such as the role of commercial platforms and neoliberal university evaluation processes, drawing on analyses of algorithmic power and of the socio-economic dynamics characterising digital infrastructures [41,42]. This perspective is shared especially by critical sociologists, philosophers and, in terms of age and status, many senior scholars who express greater caution: experiencing the technological transition more directly, they perceive AI as a potential risk to intellectual freedom.
Alongside these differences, some shared elements emerge.
The first concerns the belief, common to all disciplines, ages and academic roles, that there are certain activities that cannot be delegated to AI: theoretical argumentation and conceptual construction, methodological rigour and, more generally, all practices that require expert judgement, intuition and critical sensitivity [15]. The question that runs through all profiles is therefore what remains non-delegable, i.e., which part of scientific activity can be rationalised without losing meaning.
The second shared element concerns the need for algorithmic awareness and, specifically, for structured forms of AI literacy. All respondents recognise the urgency of developing training pathways, aimed both at lecturers with lower digital skills and at younger researchers who risk falling into premature dependence on automated tools, to understand the limitations, biases and social implications of these models.
Given the qualitative nature of this study and the limited composition of the group of interviewees, the results cannot be generalised. However, the work could be extended to a larger and more diverse pool of respondents, as well as to a broader comparison with scholars in the hard sciences, in order to identify points of contact and difference and to see whether the trends identified here are also confirmed in contexts characterised by more data-driven scientific practices and by a different or similar relationship between technological processes and modes of scientific knowledge production. At the same time, given the plurality of approaches that have emerged, it becomes necessary to explore how researchers interpret scientific work and what conceptions of knowledge are formed in relation to the use of AI. In this perspective, the results of this research indicate the need for an extension “in breadth” towards further disciplines, and show the importance of an extension “in depth” within each disciplinary culture. The variety of approaches observed, and the fact that some researchers adopt multiple perspectives, highlight the internal heterogeneity and nuances that characterise individual scholars’ relationships with AI. Therefore, future research should explore how each disciplinary culture constructs its relationship with AI, how researchers define the boundaries of scientific work and how emerging forms of knowledge are configured within these specific scientific contexts. This would be useful not only to understand the different ways in which automation is incorporated into research practice, but also to analyse how the transformations introduced by AI interact with the specific disciplinary cultures of the social sciences [43,44] and how these cultures develop their own ways of analysing and interpreting social phenomena.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (IRB) of the ISP Institute (Protocol Code: ISP-IRB-2025-6, Date of Approval: 28 July 2025).

Data Availability Statement

Due to the qualitative nature of this research and the confidentiality guaranteed to participants, interview transcripts are not publicly available. Anonymized excerpts may be obtained from the corresponding author upon reasonable request.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Felaco, C. Researching algorithm awareness: Methodological approaches to investigate how people perceive, know, and interact with algorithms. Math. Popul. Stud. 2024, 31, 267–288.
  2. Liao, Z.; Antoniak, M.; Cheong, I.; Cheng, E.Y.Y.; Lee, A.H.; Lo, K.; Zhang, A.X. LLMs as research tools: A large scale survey of researchers’ usage and perceptions. arXiv 2024, arXiv:2411.05025.
  3. Chakravorti, T.; Wang, X.; Venkit, P.N.; Koneru, S.; Munger, K.; Rajtmajer, S. Social Scientists on the Role of AI in Research. arXiv 2025, arXiv:2506.11255.
  4. Costa, A.P.; Bryda, G.; Christou, P.A.; Kasperiuniene, J. AI as a Co-researcher in the Qualitative Research Workflow. Int. J. Qual. Methods 2025, 24, 16094069251383739.
  5. Jung, M.; Zhang, A.; Fung, M.; Lee, J.; Liang, P.P. Quantitative Insights into Large Language Model Usage and Trust in Academia. arXiv 2024, arXiv:2409.09186.
  6. Wang, H.; Fu, T.; Du, Y.; Gao, W.; Huang, K.; Liu, Z.; Zitnik, M. Scientific discovery in the age of artificial intelligence. Nature 2023, 620, 47–60.
  7. Pigola, A.; Scafuto, I.C.; da Costa, P.R.; Nassif, V.M.J. Artificial Intelligence in academic research. Int. J. Innov. 2023, 11, e25408.
  8. Ligo, A.K.; Rand, K.; Bassett, J.; Galaitsi, S.E.; Trump, B.D.; Jayabalasingham, B.; Linkov, I. Comparing the emergence of technical and social sciences research in AI. Front. Comput. Sci. 2021, 3, 653235.
  9. Pavaloiu, A.; Köse, U.; Boz, H. How to apply artificial intelligence in social sciences. In Proceedings of the IASOS-Congress of International Applied Social Sciences, Uşak, Turkey, 21–23 September 2017.
  10. Pérez-Penup, L.; Romero, I. The Usefulness of AI in Academic Research. Rev. Educ. 2025, 49, 2.
  11. Becker, H.S. Tricks of the Trade; University of Chicago Press: Chicago, IL, USA, 2007.
  12. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453.
  13. Varona, D.; Suárez, J.L. Discrimination, bias, fairness, and trustworthy AI. Appl. Sci. 2022, 12, 5826.
  14. Collins, H.; Evans, R. The third wave of science studies. Soc. Stud. Sci. 2002, 32, 235–296.
  15. Collins, H.; Evans, R. Rethinking Expertise; University of Chicago Press: Chicago, IL, USA, 2007.
  16. Eslami, M.; Rickman, A.; Vaccaro, K.; Aleyasen, A.; Vuong, A.; Karahalios, K.; Hamilton, K.; Sandvig, C. “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), Seoul, Republic of Korea, 18–23 April 2015; pp. 153–162.
  17. Felaco, C. Lungo la scala di generalità: Le dimensioni della consapevolezza algoritmica. Sociol. Ital. 2022, 19–20, 123–134.
  18. Gruber, J.; Hargittai, E. The importance of algorithm skills for informed Internet use. Big Data Soc. 2023, 10, 20539517231168100.
  19. Luhmann, N. La Fiducia. Un Meccanismo Di Riduzione Della Complessità Sociale; Il Mulino: Bologna, Italy, 2002.
  20. Giddens, A. The Consequences of Modernity; Stanford University Press: Stanford, CA, USA, 1990.
  21. Kopsaj, V. Blockchain E Fiducia: La Tecnologia Che Riscrive Il Sociale; The Lab’s Quarterly: Pisa, Italy, 2025; pp. 1–18.
  22. Mitchell, T. Trust and Transparency in Artificial Intelligence. Philos. Technol. 2025, 38, 87.
  23. Pendenza, M. Le fonti della fiducia: Tra partecipazione associativa e risorse personali. Sociol. e Politiche Soc. 2008, 11, 139–157.
  24. Li, Y.; Wu, B.; Huang, Y.; Luan, S. Developing trustworthy artificial intelligence: Insights from research on interpersonal, human-automation, and human-AI trust. Front. Psychol. 2024, 15, 1382693.
  25. Masters, K. Artificial intelligence and the death of the academic author. Med. Teach. 2025, 47, 1563–1565.
  26. Ellaway, R.H. What do we become? Artificial intelligence and academic identity. Med. Teach. 2025, 47, 1569–1570.
  27. Leslie, D. Understanding Artificial Intelligence; MIT Press: Cambridge, MA, USA, 2021.
  28. Selenko, E.; Bankins, S.; Shoss, M.; Warburton, J.; Restubog, S.L.D. Artificial Intelligence and the Future of Work: A Functional-Identity Perspective. CDPS 2022, 31, 272–279.
  29. Thominet, L.; Amorim, J.; Acosta, K.; Sohan, V.K. Role play: Conversational roles as a framework for reflexive practice in AI-assisted qualitative research. J. Tech. Writ. Commun. 2024, 54, 396–418.
  30. Silverman, D. Doing Qualitative Research; Sage: London, UK, 2008.
  31. Diana, P.; Montesperelli, P. Analizzare Le Interviste Ermeneutiche; Carocci: Roma, Italy, 2005.
  32. Addeo, F.; Montesperelli, P. Esperienze Di Analisi Di Interviste Non Direttive; Aracne: Roma, Italy, 2007.
  33. Susskind, R.; Susskind, D. The Future of the Professions; Oxford University Press: Oxford, UK, 2020.
  34. Gourlay, L. Generative AIs, more-than-human authorship, and Husserl’s phenomenological ‘horizons’. In Proceedings of the Fourteenth International Conference on Networked Learning, Valletta Campus, Malta, 15–17 May 2024.
  35. Morozov, E. To Save Everything, Click Here; PublicAffairs: New York, NY, USA, 2013.
  36. Liu, Z. Sociological perspectives on artificial intelligence: A typological reading. Sociol. Compass 2021, 15, e12851.
  37. Salvagno, M.; Taccone, F.S.; Gerli, A.G. Can AI help scientific writing? Crit. Care 2023, 27, 75.
  38. Menzies, H.; Newson, J. No time to think. Time Soc. 2007, 16, 83–98.
  39. Ramirez, F.O.; Christensen, T. The formalization of the university. High. Educ. 2013, 65, 695–708.
  40. Collins, H. Tacit and Explicit Knowledge; University of Chicago Press: Chicago, IL, USA, 2010.
  41. Kitchin, R. Thinking critically about and researching algorithms. In The Social Power of Algorithms; Routledge: London, UK, 2019; pp. 14–29.
  42. Zuboff, S. The Age of Surveillance Capitalism; PublicAffairs: New York, NY, USA, 2019.
  43. Knorr Cetina, K. Epistemic Cultures; Harvard University Press: Cambridge, MA, USA, 1999.
  44. Becher, T.; Trowler, P. Academic Tribes and Territories; McGraw-Hill Education (UK): London, UK, 2001.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
