Abstract
This qualitative study employs interpretive phenomenology and Actor–Network Theory (ANT) to examine the evolving role of AI as an agent within European marketing contexts. Drawing on semi-structured interviews with 36 senior executives from the tourism, fintech, professional services, and digital media sectors, the study identifies four interconnected themes: (1) ambivalent human–AI co-agency, where AI operates as a “co-strategist” influencing budgets and decisions; (2) infrastructural and regulatory challenges arising from legacy systems and GDPR/EU AI Act constraints; (3) ethical issues concerning opacity, bias, and exclusion in hyper-personalization; and (4) the redefinition of professional identities towards hybrid socio-technical roles. The findings underscore AI’s role as a co-creator of strategy, governance, and power, highlighting the necessity of balanced co-agency, robust infrastructure, ethical safeguards, and adaptable skill sets. The AI-MARC framework (Agency, Infrastructure, Responsibility, Capability) offers a practical basis for governing sustainable AI integration. This work addresses gaps in qualitative AI marketing research by emphasising reflexive practices amid evolving regulations, with the aim of fostering equitable networks that align innovation, fairness, and accountability.
1. Introduction
Artificial intelligence (AI) is profoundly transforming marketing management by enhancing strategic decision-making, generating consumer insights, and improving customer relationship management within European organisations (V. Kumar et al., 2024). AI technologies enable advanced customer profiling, segmentation, and personalised marketing campaigns, thereby increasing efficiency and customer engagement through tools such as predictive analytics, generative AI, and automated content creation (Milovanović & Novaković, 2025). Empirical research demonstrates that the adoption of AI in marketing significantly improves business performance metrics, including financial outcomes, customer satisfaction, internal processes, and organisational learning, particularly among small and medium-sized enterprises (SMEs) (Abrokwah-Larbi & Awuku-Larbi, 2023; Gabelaia & Hendieh, 2025).
AI competencies in Business-to-Business (B2B) marketing enhance organisational capabilities and performance by delivering deeper market insights and streamlining operational processes (Mikalef et al., 2023). Additionally, AI supports sustainable digital marketing initiatives by integrating big data analytics and machine learning models to forecast customer behaviour and optimise marketing strategies. Despite these advantages, challenges remain regarding ethical considerations, data privacy, algorithmic transparency, and the need for further research into the long-term impacts and governance of AI in marketing (Bulchand-Gidumal et al., 2023).
AI-driven hyper-personalization in marketing leverages machine learning algorithms to analyse diverse datasets—such as transactional data, social media signals, geolocation, and emotional cues—to create highly tailored consumer experiences and recommendations (Milovanović & Novaković, 2025). These AI systems, including recommendation engines and conversational agents, dynamically adapt content and communication styles to individual consumer preferences, enabling large-scale, real-time personalised interactions (Hardcastle et al., 2025). Research suggests that highly personalised AI advertising can evoke significant emotional responses, thereby strengthening brand engagement and loyalty, particularly among technologically proficient groups such as Generation Z. In tourism marketing, AI-generated, customised imagery enhances emotional and cognitive engagement by fostering narrative transportation, which strengthens consumer intentions and perceptions (Peter et al., 2025).
Advanced AI techniques, such as clustering integrated with explainable AI, enhance segmentation transparency and support ethical marketing practices, facilitating culturally aware and behaviourally relevant targeting (Alijoyo et al., 2025). Despite these advantages, persistent challenges regarding data privacy, algorithmic transparency, and ethical considerations underscore the necessity for responsible AI implementation to preserve consumer trust and autonomy (Mendoza et al., 2025).
This study expands on existing literature by examining how AI implementation in marketing is enacted, governed, and legitimised, moving beyond ethical concerns alone (Grewal et al., 2024; Hardcastle et al., 2025). Using an interpretive phenomenological approach, this study explores how senior managers understand AI-enabled personalisation in relation to accountability, judgment, and organisational priorities (Naz & Kashif, 2024). Actor–Network Theory (ANT) is used to view AI as part of a decision-making network that involves human actors (e.g., marketing leaders and data teams) and non-human actants (e.g., algorithms and policies) (Lozano-Paredes, 2025). This combined approach offers new insights into the stabilization of personalisation practices through governance routines and socio-technical negotiations, thereby increasing understanding of AI implementation and control in marketing (Hardcastle et al., 2025; Ortega-Bolaños et al., 2024).
This study employs an interpretive phenomenological approach, underpinned by Actor–Network Theory (ANT), to investigate the integration of AI in European marketing administration (promotion/marketing communications, marketing strategy & planning, CRM, brand management/strategy & new business development) (Larsson et al., 2025). Interviews with 36 senior executives from diverse service organisations illuminate AI’s agency within networks comprising strategic tools, public relations groups, stakeholder communities, and epistemic systems. The research is guided by three key questions: (1) how AI’s personalisation and prediction features are perceived and implemented in administrative contexts; (2) what ethical challenges and forms of resistance threaten network stability; and (3) which emerging variations facilitate effective transformations.
2. Literature Review
2.1. AI’s Transformative Role in Marketing Management and Performance
Artificial intelligence is fundamentally transforming marketing management and organisational performance (V. Kumar et al., 2024). It increases operational efficiency, enables large-scale personalisation, and enhances strategic decision-making across marketing functions. Furthermore, AI-driven tools provide advanced customer insights, automate marketing operations, and improve campaign effectiveness, collectively enhancing organisational performance and competitive advantage.
Contemporary AI-driven marketing systems employ machine learning, natural language processing, and predictive analytics to provide deep customer insights that surpass traditional segmentation (Alijoyo et al., 2025; Komodromos et al., 2026). These systems enable organisations to automate routine tasks and enhance campaign effectiveness through continuous, data-driven adjustments. This evolution positions AI not merely as an operational instrument but as a strategic asset, conferring sustainable competitive advantage by accelerating product launches, optimising resource allocation, and creating new revenue streams and market opportunities (Gabelaia & Hendieh, 2025).
In the domain of business-to-business (B2B) marketing, the development and deployment of artificial intelligence (AI) competencies have been empirically demonstrated to enhance core marketing capabilities that directly drive organisational performance outcomes (Mikalef et al., 2023; Seremeti et al., 2026). AI-driven systems provide B2B marketers with detailed insight into complex buyer behaviours, multi-stakeholder decision-making, and longitudinal purchasing patterns, while also streamlining resource-intensive activities such as lead scoring, account prioritisation, and relationship management (Paschen et al., 2019). These advancements enable marketing departments to transition from reactive, intuition-based strategies to proactive, intelligence-driven approaches that more effectively align marketing investments with revenue-generating opportunities. Furthermore, integrating AI competencies into B2B marketing enhances cross-functional collaboration by establishing a shared data infrastructure and predictive models that bridge traditional silos among marketing, sales, and customer success teams (Rusthollkarhu et al., 2022).
The emergence of generative AI technologies represents a major advancement in marketing by enabling the autonomous creation of highly personalised, contextually relevant content at scale. This innovation simultaneously improves the efficiency and accuracy of sales-lead generation and customer-engagement processes. Small marketing enterprises can leverage generative artificial intelligence to access underserved market segments and deliver innovative, cost-effective services, while also supporting human creativity with AI-generated insights (Kshetri et al., 2023).
Generative AI is also shaping the future of marketing by transforming customer interactions, content creation across multiple media, and product development, with frameworks emerging to facilitate adoption and human–AI collaboration (Seremeti et al., 2026; Tao et al., 2025). Additionally, incorporating generative AI into brand strategies enables organisations to develop novel, interactive, and personalised insights that are difficult for competitors to replicate, thereby fostering sustainable competitive advantage. Nevertheless, challenges such as AI anxiety among marketers, ethical considerations, and the need for AI literacy and human oversight remain critical for successful implementation.
In contrast to earlier generations of AI tools that primarily analysed existing data patterns, generative AI models can create innovative marketing artefacts—including text, images, videos, and interactive experiences—dynamically tailored to individual customer preferences, behavioural histories, and projected needs (Wang & Zhang, 2025).
Empirical studies in small and medium-sized enterprise (SME) contexts have shown that systematic implementation of AI-driven marketing tools produces statistically significant improvements across multiple performance indicators, including financial metrics (such as revenue growth and profit margins), customer-centric outcomes (including satisfaction, retention, and lifetime value), internal process efficiencies (such as cycle time reduction and cost optimisation), and organisational learning capabilities (notably innovation velocity and adaptive capacity) (Cagiza et al., 2025; Magableh et al., 2024; Sambakiu et al., 2025). These findings support the view that artificial intelligence should be considered not only as a supplementary technology but as a core strategic asset that facilitates business model innovation and market repositioning, especially for resource-constrained organisations seeking to compete with larger, established competitors.
Despite substantial benefits and empirical evidence of AI’s transformative potential in marketing management, significant challenges and unresolved tensions remain regarding the ethical deployment, privacy protection, mitigation of algorithmic bias, and the optimal structuring of human–AI collaboration (Shemshaki, 2024). Concerns about data privacy, algorithmic transparency, and the risk that AI systems may perpetuate or amplify existing social biases have emerged as critical constraints requiring intentional governance frameworks and ongoing auditing mechanisms.
Transparency and explainability are essential for fostering consumer trust, prompting calls for ethical guidelines, bias detection tools, and privacy-enhancing technologies to address these risks (Ansah, 2025). Furthermore, recent discourse on AI implementation increasingly highlights that optimal value is achieved not through the wholesale replacement of human judgment with algorithmic decision-making, but through the careful orchestration of complementary human–AI partnerships. In these partnerships, AI systems augment human creativity, intuition, and ethical reasoning rather than replacing them. This sociotechnical perspective emphasises that sustainable competitive advantage from AI-enabled marketing ultimately depends on organisations’ ability to manage the complex interplay among technological capabilities, human expertise, regulatory requirements, and stakeholder expectations for fairness, accountability, and transparency in automated marketing decisions (A’yun & Setyaningsih, 2025).
2.2. Hyper-Personalisation and Advanced AI Techniques in Consumer Engagement
Transitioning to AI-driven hyper-personalisation involves integrating machine learning analysis of diverse datasets—such as transactional, social, geolocation, and emotional data—to create personalised experiences (Peter et al., 2025). This includes tools such as recommendation engines and conversational agents that enable real-time, scalable interactions.
Research shows that emotionally engaging AI experiences boost engagement and loyalty, particularly among Generation Z. In tourism, AI-generated images increase emotional engagement and intentions to visit destinations (Egger & Yu, 2025; Seremeti et al., 2026). In education, AI-customised learning improves student engagement and performance (Almuqhim & Berri, 2025; Komodromos et al., 2026). In e-commerce, AI-generated content paired with personalised interfaces enhances engagement and conversions (Wasilewski et al., 2025). These findings suggest that AI personalisation can foster engagement via affective and experiential mechanisms, although comprehensive reviews are needed to confirm these effects across sectors and groups.
Despite these advantages, challenges such as ethical concerns regarding data privacy, algorithmic transparency, and consumer autonomy remain central to responsible implementation (LeBrun, 2025; Mendoza et al., 2025). Overall, AI-driven hyper-personalisation provides transformative potential to enhance user experiences by delivering contextually relevant, emotionally resonant, and adaptive content across sectors (V. Kumar et al., 2024). Examples include the use of AI-generated imagery in tourism marketing to enhance storytelling and increase conversion intent. Additionally, methods such as explainable AI clustering are used to achieve transparent, culturally sensitive segmentation that prioritises behavioural relevance (Vijayakumar & Panwale, 2025).
2.3. Ethical Challenges and Governance Gaps in AI Marketing Adoption
AI-driven hyper-personalisation provides substantial benefits, including increased efficiency, improved customer engagement, and tailored experiences across sectors such as marketing, nutrition, and healthcare (Alserhan et al., 2024; Seremeti et al., 2026; Zheng et al., 2025). Nonetheless, persistent challenges include ethical concerns related to data privacy, algorithmic bias, and manipulation. These issues may erode consumer trust and autonomy by enabling opaque decision-making processes and perpetuating discrimination (Radanliev, 2025).
Algorithmic opacity limits transparency and accountability, making it difficult for users to understand or contest AI-generated outcomes, thereby exacerbating trust issues (Njiru et al., 2025).
The European Union’s General Data Protection Regulation (GDPR) imposes stringent data protection and privacy requirements to safeguard individual rights and promote responsible use of AI. However, compliance gaps persist, particularly as new AI applications emerge (Njiru et al., 2025; Priyadharshini et al., 2025).
Despite these regulations, there is a lack of comprehensive long-term impact studies and governance frameworks, highlighting the urgent need for research on pragmatic implementation and for practitioners to provide ethical oversight in addressing emerging challenges (Pham, 2025; Seremeti et al., 2026). Balancing the efficiency gains of AI with potential risks to consumer autonomy and bias reinforcement requires ongoing interdisciplinary collaboration, transparent system design, and the adaptation of robust regulatory measures to ensure the ethical, equitable, and trustworthy deployment of artificial intelligence technologies (Bulchand-Gidumal et al., 2023; Mendoza et al., 2025).
2.4. Theoretical Foundations and Research Lacunae
Actor–Network Theory (ANT) offers a valuable framework for analysing artificial intelligence (AI) as a co-constitutive element within marketing networks. It underscores how AI both influences and is shaped by human and non-human actors in complex sociotechnical systems (Condé & Münch, 2025). This perspective challenges the predominant emphasis on quantitative, performance-based studies by highlighting the importance of examining lived administrative experiences, network stabilisation processes, and ethical issues associated with the integration of AI in marketing practices (Shemshaki, 2024). Many existing studies neglect these qualitative dimensions, including how actors resist or facilitate variations within AI-driven networks—factors essential for understanding the heterogeneity of AI’s role (Nopas, 2025; Schneider-Kamp et al., 2024).
An interpretive phenomenological approach addresses these gaps by exploring perceptions, implementation challenges, and the nuanced dynamics of resistance and adaptation. This method aligns with research questions concerning how AI is perceived, resisted, and enabled in practice. It illuminates the ethical and relational complexities that quantitative metrics may not fully capture, offering deeper insights into AI’s integration within administrative contexts (Magableh et al., 2024; Peter et al., 2025). Such qualitative research is crucial for fulfilling the objectives of the Special Issue, which focuses on understanding how AI is embedded within organisational and marketing networks.
3. Methodology
This study employs a qualitative, interpretive phenomenological approach, analysed through the relational ontology of Actor–Network Theory (ANT), to elucidate the agential recomposition of AI within the European marketing administration lifeworld (Komodromos et al., 2026; Yang et al., 2024). Existing literature predominantly emphasises quantitative, efficiency-oriented research, with limited qualitative investigation into the lived experiences, ethical dilemmas, and network dynamics associated with the adoption and implementation of artificial intelligence (Chatzichristos, 2025; Reed et al., 2025).
This research utilises an interpretive phenomenological approach to address these gaps, focusing on practitioners’ perceptions, resistances, and enabling factors, thus aligning with research questions regarding perceptions of AI, implementation challenges, and negotiation of its transformative impacts (Yang et al., 2024).
The interview dataset was analysed using Braun and Clarke’s reflexive thematic analysis (RTA), consistent with the study’s interpretive phenomenological stance. Actor–Network Theory (ANT) provided a set of sensitising concepts—such as translation, enrolment, inscription, stabilisation, and obligatory passage points—which focused attention on how AI-driven personalisation is shaped through relationships between human actors (such as senior managers, data teams, agencies, and compliance functions) and non-human actants (such as algorithms, datasets, dashboards, prompts, and policy artefacts). The unit of analysis was the meaning unit, defined as the smallest contiguous transcript segment expressing a single idea relevant to the research questions, usually a sentence or a short paragraph.
All interviews were transcribed verbatim and managed using [software, version]. Coding involved an iterative process starting with familiarisation and memoing, followed by initial line-by-line coding, code refinement and grouping, theme development, and review against the entire dataset. The final theme definitions were established after these steps. A codebook was created after coding [k] transcripts, and it was refined over successive rounds, including code definitions, inclusion/exclusion criteria, and example excerpts. Intercoder training included joint coding and calibration on [m] transcripts (or [U_train], meaning units), followed by double coding [p]% of the dataset (total [U_double], meaning units). Intercoder reliability was estimated using [Cohen’s κ/Krippendorff’s α] = [0.XX] with a 95% confidence interval of [LL, UL], computed via [bootstrap/asymptotic SE]. Disagreements were resolved through discussion, and the final coding, along with an audit trail of memos and codebook versions, supported the final thematic analysis.
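The reliability estimation described above can be sketched in code. The following is a minimal illustration, not the study’s actual procedure: it assumes two coders’ label sequences over the same meaning units, computes Cohen’s κ from the coders’ marginal label frequencies, and derives a 95% percentile bootstrap confidence interval by resampling meaning units (function names and example labels are illustrative):

```python
import random
from collections import Counter

def cohen_kappa(coder1, coder2):
    """Cohen's kappa for two coders' labels over the same meaning units."""
    n = len(coder1)
    # observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(coder1, coder2)) / n
    # expected agreement under independence, from each coder's marginals
    f1, f2 = Counter(coder1), Counter(coder2)
    p_exp = sum(f1[label] * f2.get(label, 0) for label in f1) / (n * n)
    if p_exp == 1.0:  # degenerate case: both coders used a single label
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)

def bootstrap_ci(coder1, coder2, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for kappa, resampling meaning units."""
    rng = random.Random(seed)
    n = len(coder1)
    stats = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(cohen_kappa([coder1[i] for i in idx],
                                 [coder2[i] for i in idx]))
    stats.sort()
    lo = stats[int(reps * alpha / 2)]
    hi = stats[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi
```

In practice, reporting κ alongside a bootstrap interval (rather than a point estimate alone) conveys how sensitive the agreement statistic is to the particular set of double-coded meaning units.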
Phenomenological inquiry uncovers the subjective essences and noematic structures of lived experiences, emphasising participants’ hermeneutic interpretations of the integration of artificial intelligence into strategic, relational, and epistemic practices (Nopas, 2025). Actor–Network Theory (ANT), in accordance with Latour’s (2005) principle of generalised symmetry, delineates heterogeneous actants—including human executives, mechanistic apparatuses, juridical inscriptions, and consumer quasi-objects—that either stabilise or destabilise administrative networks. This approach rejects anthropocentric hierarchies in favour of flat ontologies, positioning AI as an equally influential actor (Contini et al., 2024; Nopas, 2025; Song et al., 2025).
In the actor–network that shapes AI-driven personalisation, regulatory frameworks such as the GDPR and the EU AI Act, along with organisational ethical norms, function as non-human actants. These actants influence practice through specific inscriptions, including consent mechanisms, data minimisation rules, DPIA templates, risk registers, model documentation standards, audit logs, and approval workflows. They determine which data can be used, which tools and vendors can be engaged, and which personalisation methods become routine in marketing by establishing “obligatory passage points”—steps like legal review, risk assessment, documentation, and accountability. When these inscriptions align with management goals and technical capabilities, they promote network stability by building trust and supporting justified decision-making. Conversely, misalignment can cause resistance and lead to re-interpretation, often resulting in redesigns, delays, or abandonment of AI personalisation projects (Lozano-Paredes, 2025; Song et al., 2025). ANT’s relational ontology highlights AI’s proactive role in revolutionising governance and marketing networks, whereas phenomenology underscores the lived, interpretive dimensions of AI’s integration into organisational practices (Condé & Münch, 2025).
These approaches together reveal the complex socio-technical systems where artificial intelligence’s agency is shaped, highlighting ethical dilemmas, network stabilisation, and power negotiations among various actors (Condé & Münch, 2025). This synthesis enhances understanding of AI’s role within administrative structures and addresses scholarly demands for empirical, qualitative research on AI’s integration in European marketing and governance (Bareis, 2024).
Purposive sampling protocols, aimed at maximising semiotic density and achieving theoretical saturation, involved the participation of thirty-six senior marketing executives from European service organisations (Ahmad & Wilkins, 2024; Chen et al., 2025; Komodromos et al., 2026). These participants were strategically stratified to reflect sectoral heterogeneity: twelve from the tourism sector, ten from financial technology, eight from professional services, and six from digital media. Most participants were from continental Europe—twenty-four from the core EU countries (Germany, France, The Netherlands), eight from peripheral EU countries (Greece, Cyprus, Portugal), and four from the United Kingdom post-Brexit—thus ensuring diverse representations of regulatory ecologies.
As noted above, the study employed purposive maximum variation sampling and focused on four sectors designed to span the main conditions that drive variation in AI implementation in marketing management—customer interface intensity, data availability and digital maturity, regulatory and reputational exposure, and value creation models (product- versus service-based contexts) (Grewal et al., 2024). Selecting sectors with contrasting positions on these dimensions enabled us to encompass a broad spectrum of senior managers’ experiences and implementation challenges within a practical, analytically consistent framework (Ominyi et al., 2025). Data collection and analysis continued until thematic saturation was reached within and across the four sectors, indicating that including additional sectors would likely not yield significantly new themes related to the research questions (Chen et al., 2025; Drozdowska et al., 2025).
In marketing studies, purposive sampling has been used to gather insights from targeted groups, such as senior marketing executives and consumers, thereby facilitating a focused analysis of phenomena including customer satisfaction, purchase intentions, and the impact of digital marketing strategies (L. Kumar & Devi, 2024).
In line with interpretive phenomenology, sample adequacy was assessed using the principle of information power rather than “theoretical saturation”, which is better suited to theory-building designs. This approach justifies a smaller sample size on the basis of the study’s focused aim, the relevance of the sample, and the richness of the interview data. The sample included senior managers responsible for AI personalisation decisions and governance, selected purposively for role diversity across marketing functions such as strategy, analytics, customer experience, compliance, and operations, as well as for varied organisational contexts. The design also maximised case richness by choosing participants with direct implementation experience, such as vendor selection and data access, enabling access to detailed narratives rather than generalities.
The final sample size was set by the study’s aims to understand lived experiences and sensemaking around AI personalisation and governance, the diversity of decision-making roles, and the depth of experiential data. As the analysis continued, additional interviews were conducted until the dataset provided sufficient interpretive depth to clarify core experiential structures and socio-technical configurations and until new interviews largely reinforced existing interpretations without yielding significant new insights (Ahmad & Wilkins, 2024).
The combination of purposive sampling with explicit reporting guidelines enhances the rigour and transparency of research, supporting credible and relevant findings in both qualitative and quantitative marketing contexts (Alserhan et al., 2024; Chen et al., 2025; Komodromos et al., 2026; Memon et al., 2024). This methodology is particularly beneficial for exploring complex marketing dynamics where depth of understanding takes precedence over statistical representativeness.
Eligibility criteria required at least 10 years of experience in marketing administration, direct oversight of artificial intelligence deployments (such as recommendation engines, sentiment analysis tools, and generative AI systems), and senior leadership roles (including chief marketing officers, digital strategists, and heads of insight). Demographic stratification showed parity: eighteen female and eighteen male participants, aged 38 to 62 (mean age 48), affiliated with enterprises employing between 250 and 7500 personnel. Recruitment was conducted via LinkedIn Premium networks, European Marketing Academy consortia, and sectoral conferences (e.g., EFMA, ITB Berlin), resulting in an 82 per cent enrolment rate.
Saturation was attained after thirty-three interviews, as substantively new themes ceased to emerge. Data were collected through semi-structured interviews lasting approximately 50 to 75 minutes, conducted over encrypted Zoom during the third and fourth quarters of 2025; remote interviewing accommodated participants’ geographic dispersion across Europe while still eliciting rich accounts of their digital practice.
Sessions were audio-recorded with explicit consent and transcribed verbatim by human transcribers (automated tools were avoided to prevent the introduction of transcription biases), yielding 612 pages of primary material. The iterative protocol balanced thematic fidelity with conversational flexibility, opening with delineations of participants’ roles and the AI systems they oversee, then moving through experiences of personalisation (“How has AI transformed consumer engagement within your administrative networks?”); predictive exemplars (“Recount instances in which analytics anticipated campaign outcomes”); ethical and pragmatic tensions (“Which gaps, whether legal, epistemic, or infrastructural, impede the stabilisation of your AI networks?”); and administrative consequences (“How have AI deployments recalibrated departmental effectiveness, stakeholder relationships, and financial reciprocities?”).
Follow-up probes explored experiential depth (e.g., “Walk through a specific campaign’s trajectory from initial inscription to stabilisation”), while field notes documented paralinguistic cues and reflexive observations. The protocol was piloted with six preliminary participants, refining question wording and removing 15 per cent of the initial items to ensure phenomenological fidelity.
Analysis followed a six-phase reflexive thematic process supported by NVivo 15: familiarisation with the transcripts; inductive line-by-line coding, which yielded 712 provisional codes; grouping of codes into candidate themes refined through ANT’s concept of translation; review of themes against the evidence, which reduced the code set by 22 per cent to remove redundancy; precise theme naming aligned with Latourian concepts; and reporting grounded in participants’ own language. Two analysts coded independently to support triangulation, achieving 91 per cent coding agreement, with disagreements resolved through consensus. Network diagrams mapped the evolution of the actor-network from problematisation to mobilisation.
Methodological trustworthiness was established through multiple strategies calibrated to interpretive research (Lim, 2024). Credibility was achieved through prolonged engagement (averaging three follow-up probes per participant), member reflections (thirty-two of thirty-six participants affirmed the interpretive summaries), and thick, layered description (Bingham, 2023; Lim, 2024). Transferability was demonstrated through granular contextual vignettes covering sectoral and regulatory variations (Megheirkouni & Moir, 2023).
Dependability was ensured through a comprehensive audit trail, including detailed codebooks, analytic memoranda, and decision logs (Lim, 2024). Confirmability was maintained via the principal investigator’s reflexivity ledger, which documented the researcher’s situatedness within the marketing field (Megheirkouni & Moir, 2023): technophilic affinities tempered by ethical scepticism towards “black box” opacity, transparently disclosed to balance power relations.
These strategies conform to recognised standards of qualitative research, which emphasise systematic documentation and researcher self-awareness to enhance trustworthiness (Lim, 2024). Limitations are likewise openly acknowledged: self-reported accounts risk over-rationalisation; virtual interviewing omits embodied cues; and retrospective reflection bypasses real-time dynamics (Christou, 2025).
Acknowledging constraints openly is essential for maintaining rigour and ensuring proper interpretation of results. Overall, integrating detailed audit ontologies and reflexivity practices strengthens the dependability and confirmability of qualitative research findings.
4. Findings: AI-Mediated Reconfiguration of European Marketing Administration
Participants’ narratives revealed four key themes illustrating AI’s profound integration into European marketing. Ambivalent human–AI co-agency describes how executives perceive AI systems as quasi-collegial actors that influence campaign, budget, and risk decisions, thereby shifting authority between humans and machines. Infrastructural and regulatory friction highlights how legacy data systems, platform integrations, GDPR, and EU AI regulations can either accelerate innovation or create bottlenecks, redirecting progress.
The theme of opacity and exclusion encompasses concerns that advanced models, fuelled by extensive datasets, may obscure biases, enable intrusive personalisation, and diminish consumer consent, even as they enhance targeting and revenue. The final theme, the re-writing of professional identity, captures how senior marketers come to see themselves as translators, curators, and ethical guardians of AI systems, reshaping notions of expertise and responsibility as algorithms increasingly drive action. Together, these themes constitute a framework in which AI serves as a co-creator of strategy, governance, and authority, continually redefining relationships among executives, technologies, regulators, and consumers in European marketing networks.
4.1. Theme 1: Ambivalent Human–AI Co-Agency
Participants consistently described AI systems as “co-strategists” integrated into their daily decision-making processes, rather than as neutral tools. They explained that budget allocations, channel mixes, and creative iterations were increasingly determined in advance by recommendation scores and overnight uplift predictions generated by their platforms.
For example, tourism executives recounted instances in which yield management systems overrode longstanding seasonal heuristics by adjusting inventory based on micro-segment demand, with human teams subsequently rationalising these automated actions in presentations to boards and partners. Fintech respondents provided comparable evidence, describing how fraud-detection and risk-scoring models automatically prevented transactions or reclassified customer value tiers. This development compelled managers to intervene solely as “appeal handlers” when high-value clients raised concerns. Such a shift redefined AI’s role, establishing it as the primary entity in customer governance rather than merely a back-office instrument.
Across interviews, this co-agency was viewed as effective when forecasts matched actual outcomes but was deeply unsettling when models produced counterintuitive recommendations. Several executives recalled “override moments,” when they paused or reversed algorithm-driven campaigns in response to public backlash or brand-safety incidents. These accounts exemplify the ongoing negotiation of authority between human judgment and algorithmic agents.
4.2. Theme 2: Infrastructural and Regulatory Friction
The data indicated that each attempt to integrate artificial intelligence into marketing workflows exposed underlying weaknesses in organisational infrastructure and compliance frameworks. Executives described multi-year initiatives to unify CRM, CDP, and e-commerce data into a single training dataset, noting that minor discrepancies in consent indicators or legacy identifiers frequently caused model pipeline failures and prompted what one participant termed “data mutinies,” where entire personalisation layers had to be deactivated to prevent unlawful processing.
Several respondents from the financial and professional services sectors described episodes in which Data Protection Impact Assessments, mandated by the GDPR, delayed high-profile AI launches by several months. These delays resulted from privacy officers requiring redesigns of profiling logic, minimisation of input features, or the inclusion of human-in-the-loop checkpoints for high-risk decisions.
Additionally, some participants cited how the emerging EU AI Act is already shaping strategic plans; internal risk-mapping exercises have reclassified chatbots, scoring tools, and biometric pilots into different risk categories. This reclassification has led to the cancellation of emotion-recognition experiments and the redefinition of generative-content projects toward internal enablement rather than consumer-facing automation.
These accounts emphasise how juridical inscriptions—such as consent records, risk registers, and DPIAs—and technical artefacts—including APIs, dashboards, and log files—function as influential nonhuman actors that can impede, redirect, or constrain AI deployments. As such, infrastructural and regulatory friction is a fundamental aspect of the lived experience of AI-driven marketing.
4.3. Theme 3: Ethics of Opacity, Bias, and Exclusion
Participants’ accounts of ethics were grounded in specific episodes where artificial intelligence-driven practices appeared to exceed what was considered acceptable, despite being legally permissible. Fintech managers cited model validation reviews in which fairness assessments revealed that certain segments—often younger, migrant, or lower-income customers—were disproportionately flagged as risky or unprofitable. This prompted internal deliberations about whether to rebalance the models at the expense of predictive accuracy or to accept an outcome that was difficult to interpret but technically optimal.
Tourism and digital-media executives described A/B tests in which dynamic pricing engines increased room rates or subscription offers to levels customers later described as “exploitative,” particularly during peak demand or crisis events. This prompted teams to implement hard caps and ethical guardrails, even when systems forecasted higher short-term revenue. Across sectors, opacity was viewed as both a technical and relational challenge: interviewees reported difficulty explaining why a particular customer encountered a specific advertisement, offer, or denial. This was chiefly due to models relying on hundreds of features and third-party data sources, creating what some termed a “transparency gap” that threatened regulatory compliance and brand trust.
In response, many organisations established ethics committees, model-explanation dashboards, and “principles in practice” playbooks that expressly prohibit deceptive dark-pattern nudging or targeting of sensitive attributes. However, executives acknowledged that commercial key performance indicators (KPIs) and competitive pressures continually challenge these commitments, making ethical practice an ongoing, negotiated process rather than a fixed state.
4.4. Theme 4: Re-Writing Professional Identity and Expertise
Interview evidence indicated that the diffusion of AI is reshaping how senior marketers understand and execute their professional roles. Many participants described a transition from designing comprehensive campaigns to orchestrating ecosystems of data scientists, vendors, and compliance specialists. They now see themselves as “conductors of a socio-technical orchestra,” translating abstract business objectives into model requirements and, conversely, interpreting algorithmic outputs into narratives accessible to boards, regulators, and frontline teams.
Several executives shared examples of departments being restructured to include new roles such as AI product owners, marketing data leads, or responsible-AI champions. These roles are frequently filled by individuals with hybrid backgrounds that combine analytics expertise, legal acumen, and traditional brand knowledge. This approach institutionalises boundary-spanning expertise that bridges conventional marketing and technical domains. However, there was notable concern about the future of craft skills, as participants described instances in which junior staff became overly reliant on automated systems.
These dynamics support viewing artificial intelligence as both a catalyst for professional growth—through the development of new strategic, analytical, and ethical skills—and a risk of deprofessionalisation, if expertise is reduced to operating tools rather than to the broader interpretive and relational work that has traditionally underpinned marketing authority.
Overall, the findings portray artificial intelligence as a fundamentally integrated, co-agentive force that is reshaping European marketing administration, not merely optimising existing practices. Across sectors, executives view AI as a powerful yet ambivalent partner that enhances predictive capabilities and disrupts established hierarchies, shifting decision-making authority to algorithmic systems while leaving humans responsible for justification and accountability.
The four themes demonstrate that this transformation is significantly constrained by infrastructural weaknesses and dense regulatory regimes, which are further complicated by risks of opacity, bias, and exclusion. They also underscore the professional implications as marketers renegotiate definitions of expertise, ethics, and strategic judgement. Collectively, the findings indicate that the future of AI-mediated marketing will depend less on technical sophistication alone and more on how organisations manage human–AI collaboration, reinforce socio-technical infrastructures, and sustain reflective, ethically informed professional identities within evolving networks.
Table 1. Thematic Overview of AI-Mediated Reconfiguration in European Marketing Administration (Sectional Layout).
5. Recommendations
The study’s findings yield a set of interconnected recommendations to help European service organisations position artificial intelligence as a governed, ethical, and professionally sustainable collaborator in marketing strategy development, rather than as an unregulated optimisation tool. These recommendations are organised into four domains: structuring human–AI co-agency, strengthening socio-technical and regulatory frameworks, institutionalising ethical governance, and redefining professional roles and capabilities.
5.1. Calibrate Human–AI Co-Agency
Organisations should explicitly delineate the domains in which artificial intelligence makes proposals, determines conditions, or where humans retain ultimate authority, rather than allowing de facto delegation to arise through routine or vendor defaults. This requires mapping each key marketing decision (e.g., pricing, targeting, content approval, risk assessment) and assigning decision ownership, with clear escalation procedures for cases in which algorithmic recommendations diverge from contextual judgment or brand values.
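One hedged way to make such a decision map concrete is a small registry of decisions, owners, and escalation rules. The domains, role titles, and rules below are illustrative assumptions for the sketch, not the governance structures of any participating organisation.

```python
from dataclasses import dataclass

@dataclass
class MarketingDecision:
    domain: str                 # e.g. "pricing", "targeting"
    owner: str                  # human role holding final authority
    ai_role: str                # "proposes" or "decides"
    escalate_on_divergence: bool

# Hypothetical decision map; roles and rules are illustrative only.
DECISION_MAP = {
    "pricing": MarketingDecision("pricing", "Revenue Director", "proposes", True),
    "targeting": MarketingDecision("targeting", "CMO", "decides", True),
    "content_approval": MarketingDecision("content_approval", "Brand Lead", "proposes", False),
}

def must_escalate(domain: str, ai_and_human_diverge: bool) -> bool:
    """True when a divergent algorithmic recommendation requires escalation
    to the decision owner rather than silent acceptance."""
    decision = DECISION_MAP[domain]
    return ai_and_human_diverge and decision.escalate_on_divergence

print(must_escalate("pricing", True))  # True
```

Recording ownership explicitly, rather than leaving it to vendor defaults, is what turns de facto delegation into the deliberate co-agency the recommendation describes.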
Regular review practices—such as monthly “model reflection” sessions—can foster reflexivity regarding co-agency by encouraging practitioners to examine where AI outputs seemed overly determinative or, conversely, under-utilised. Over time, these forums promote a shared organisational understanding of AI as an augmentative, rather than substitutive, actor and support more intentional negotiation of its role in campaign strategy and resource allocation.
5.2. Fortify Infrastructural and Regulatory Readiness
Given the ongoing friction reported by executives, organisations are advised to develop a dedicated data and infrastructure roadmap that precedes or runs concurrently with major AI deployments. This includes consolidating customer data into well-governed platforms, harmonising consent and preference records, and documenting dependencies among marketing technology components to prevent failures in a single node from affecting the entire network.
Simultaneously, marketing teams should collaboratively develop compliance-by-design protocols with legal and privacy experts, treating GDPR and the EU AI Act not merely as post-implementation checks but as foundational constraints that shape feature selection, model objectives, and explainability requirements from the outset. Practical steps include standardised DPIA templates for marketing AI, risk-stratified use-case classification, and automated documentation of critical model decisions to support future audits and regulatory or client inquiries.
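The risk-stratified use-case classification step could be sketched as a simple lookup from use case to required controls. The tier assignments and control lists below paraphrase practices participants described and are illustrative assumptions, not a legal reading of the EU AI Act.

```python
# Hypothetical use-case register; tier names echo the EU AI Act's risk
# categories, but the assignments here are illustrative only.
RISK_TIERS = {
    "emotion_recognition_pilot": "unacceptable",
    "customer_risk_scoring": "high",
    "customer_chatbot": "limited",
    "internal_content_drafting": "minimal",
}

REQUIRED_CONTROLS = {
    "unacceptable": ["halt deployment"],
    "high": ["DPIA", "human-in-the-loop checkpoint", "automated decision logging"],
    "limited": ["AI-use disclosure to consumers"],
    "minimal": ["standard review"],
}

def controls_for(use_case: str) -> list[str]:
    """Controls a marketing AI use case must carry before launch.
    Unknown use cases default conservatively to the high-risk tier."""
    tier = RISK_TIERS.get(use_case, "high")
    return REQUIRED_CONTROLS[tier]

print(controls_for("customer_chatbot"))  # ['AI-use disclosure to consumers']
```

The conservative default for unregistered use cases mirrors the compliance-by-design stance: a pilot that has not been classified is treated as high-risk until reviewed.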
5.3. Embed Ethical Guardrails and Transparency
The ethical tensions related to opacity, bias, and exclusion require the establishment of formal responsible-AI frameworks within marketing governance. Organisations are encouraged to form cross-functional AI ethics councils or working groups to review high-impact use cases, delineate “red-line” practices (such as emotion-based price discrimination and covert dark-pattern nudging), and set criteria for acceptable trade-offs between accuracy and fairness in segmentation or scoring models.
At the operational level, organisations should establish bias-testing and explainability routines proportional to the associated risks. In addition, they should implement transparency measures for consumers, including clear disclosures about AI use, accessible explanations of automated decisions, and straightforward channels for contesting or appealing adverse outcomes. These initiatives help close the perceived transparency gap among executives and enhance regulatory compliance and consumer trust in AI-mediated interactions.
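A bias-testing routine of the kind described can be sketched as a segment-level selection-rate screen. The "four-fifths" threshold is one common screening heuristic assumed here for illustration, not a practice reported by participants, and the segment labels and records are hypothetical.

```python
def selection_rates(records):
    """records: (segment, received_favourable_outcome) pairs.
    Returns the favourable-outcome rate per segment."""
    totals, favourable = {}, {}
    for segment, outcome in records:
        totals[segment] = totals.get(segment, 0) + 1
        favourable[segment] = favourable.get(segment, 0) + int(outcome)
    return {s: favourable[s] / totals[s] for s in totals}

def disparate_impact_alerts(records, threshold=0.8):
    """Segments whose favourable-outcome rate falls below `threshold` times
    the best-off segment's rate (the 'four-fifths' heuristic; a screening
    device, not a legal test of discrimination)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return [s for s, r in rates.items() if r < threshold * best]

# Hypothetical campaign-eligibility outcomes for two customer segments.
records = [("established", True), ("established", True), ("established", False),
           ("younger_migrant", True), ("younger_migrant", False), ("younger_migrant", False)]
print(disparate_impact_alerts(records))  # ['younger_migrant']
```

A screen like this only raises a flag; deciding whether to rebalance the model at some cost in accuracy remains exactly the deliberative judgment the fintech participants described.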
5.4. Re-Design Roles, Skills, and Learning Pathways
To effectively oversee the reassessment of professional identities, organisations should intentionally develop hybrid roles and capability frameworks rather than allowing skill transitions to occur spontaneously. Suggested role archetypes include AI Product Owner for Marketing (responsible for the product lifecycle and value), Responsible-AI Steward (centred on ethics and compliance), and Marketing Data Lead (integrating analytics, insights, and branding teams).
Skill development should integrate technical literacy—understanding data pipelines, foundational model knowledge, and their limitations—with a strong grounding in consumer psychology, storytelling, and critical thinking. This approach aims to mitigate the risk of deskilling associated with over-reliance on technology. Continuous learning initiatives—such as internal academies, rotations between creative and data teams, and reflective sessions on AI cases—can foster a resilient, reflective professional culture in which practitioners see themselves as interpreters and custodians of AI, rather than mere operators of interfaces.
Table 2. AI-MARC Governance Framework for AI-Mediated Marketing Administration: Operational Actions and Outcomes by Pillar.
6. Limitations of the Research
This study presents several limitations that must be considered when interpreting the findings. First, the research is qualitative and employs purposive, maximum-variation sampling across four sectors; this approach emphasises depth and conceptual insight over statistical representativeness. Consequently, the findings are analytically rather than probabilistically generalisable, and they may not fully encompass the range of AI implementation conditions in other industries or organisational contexts. Moreover, the evidence derives from the perspectives of senior managers, who are well placed to speak to strategy and governance; their experiences, however, may differ from those of other stakeholder groups such as frontline marketing personnel, data science teams, agency partners, and customers. Finally, as in many interview-based studies, the data may be subject to self-report limitations, including recall bias and social desirability bias, especially on sensitive topics such as performance impacts, ethical risks, and organisational preparedness.
A second limitation pertains to the timing and scope of the inquiry. The study is cross-sectional, capturing views and practices at a specific point in time, while AI tools, regulatory expectations, and competitive norms are evolving rapidly; consequently, some implementation challenges and best practices may shift as technologies and governance frameworks mature. The analysis also centres on perceived outcomes and managerial assessments rather than systematically linking implementation choices to objective performance indicators (e.g., campaign ROI, customer retention metrics, or experimental uplift measures), thereby restricting causal inference. Future research may address these limitations through longitudinal designs, mixed-method approaches that triangulate interviews with internal documents and performance data, and broader comparative sampling across additional sectors, geographies, and stakeholder roles to test the robustness and boundary conditions of the themes identified here.
7. Future Research Directions
This study elucidates the lived phenomenology of AI integration in European marketing administration through an ANT lens; however, several avenues merit further exploration to deepen both theoretical and practical insights. First, while purposive sampling effectively captured the perspectives of senior executives, future research should include consumer voices to trace relational dynamics across the full actor–network. This includes examining how end users perceive and resist AI-driven hyper-personalisation within the constraints of the GDPR. Additionally, longitudinal ethnographic studies could mitigate retrospective biases inherent in interview methodologies by capturing real-time network stabilisations, breakdowns, and ethical negotiations as the EU AI Act is implemented post-2026.
Second, extending research beyond service sectors to include manufacturing or Business-to-Business (B2B) contexts would test the generalisability of the identified themes, particularly infrastructural friction in supply-chain AI deployments. Quantitative validation methods—such as surveys assessing co-agency perceptions alongside performance metrics—may complement phenomenological insights and address calls for mixed-methods triangulation in AI ethics scholarship. Furthermore, comparative analyses across global regions (e.g., the United States versus the European Union) could reveal how divergent regulatory ecologies shape AI’s reconfiguration as an agent, thereby informing transatlantic governance discussions.
Finally, practitioner-led action research could operationalise the AI-MARC framework by evaluating its effectiveness in mitigating de-skilling and opacity risks through pre- and post-intervention analyses. The advancement of generative AI technologies requires research on multimodal actants, such as agentic workflows. In parallel, interdisciplinary collaborations with computer scientists could explore technical solutions to enhance the explainability of black-box models. These avenues aim to improve socio-technical insights, ensuring that AI contributes to, rather than undermines, the integrity of equitable marketing networks.
8. Conclusions
This study has elucidated the substantial reconfiguration of European marketing governance through the agential perspective of artificial intelligence, employing interpretive phenomenology and Actor–Network Theory to demonstrate that AI functions not merely as a passive instrument but as a co-constitutive actor within diverse socio-technical networks. Drawing on comprehensive narratives from thirty-six senior executives across the tourism, fintech, professional services, and digital media industries, four interconnected themes are identified: ambivalent human–AI co-agency, infrastructural and regulatory friction, ethics of opacity, bias, and exclusion, and the redefinition of professional identity and expertise. Collectively, these themes illustrate how AI shapes decision-making authority, operational infrastructures, normative boundaries, and occupational competencies, positioning it as a powerful collaborator that enhances predictive capabilities while simultaneously challenging established hierarchies and ethical boundaries.
Executives’ accounts underscore a crucial tension: while artificial intelligence can pre-empt budget allocations, supersede longstanding heuristics, and streamline governance to deliver efficiency gains, it also requires ongoing human negotiation within the framework of GDPR-mandated Data Protection Impact Assessments (DPIAs), risk classifications under the EU AI Act, and enduring transparency deficits that risk perpetuating biases against vulnerable consumer groups. Professional roles are shifting from campaign strategists to socio-technical coordinators—AI product managers and ethics custodians—who must cultivate hybrid skill sets to manage these networks responsibly, mitigating de-skilling risks through intentional upskilling and reflective practice.
The AI-MARC framework (Agency, Infrastructure, Responsibility, Capability) synthesises these insights into actionable governance pillars, advocating calibrated co-agency, robust data ecosystems, embedded ethical guardrails, and adaptive professional capabilities to harness AI’s transformative potential equitably. Ultimately, sustainable AI-mediated marketing depends on organisations’ ability to align technological agency with human judgment, regulatory requirements, and ethical principles, fostering resilient networks where innovation is balanced with fairness and accountability. This research addresses key gaps in qualitative explorations of AI’s lived integration and urges future scholarship to examine longitudinal dynamics and consumer perspectives amid evolving regulatory landscapes.
Funding
This research received no external funding.
Institutional Review Board Statement
The research was conducted through voluntary, qualitative interviews with adult professionals, focusing on their professional experiences and opinions in a non-sensitive context. No vulnerable populations were involved, and no personal, medical, or sensitive data were collected. All participants provided informed consent, anonymity was ensured, and data were handled in accordance with relevant data protection regulations. The study was conducted in full compliance with the ethical principles outlined in the Declaration of Helsinki (1975, revised 2013), particularly with respect to respect for persons, confidentiality, and voluntary participation. Based on the nature, scope, and methodology of the research, and in line with institutional practice, formal ethics committee or IRB approval was not required.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Abrokwah-Larbi, K., & Awuku-Larbi, Y. (2023). The impact of artificial intelligence in marketing on the performance of business organizations: Evidence from SMEs in an emerging economy. Journal of Entrepreneurship in Emerging Economies, 16(4), 1090–1117. [Google Scholar] [CrossRef]
- Ahmad, M., & Wilkins, S. (2024). Purposive sampling in qualitative research: A framework for the entire journey. Quality & Quantity, 59, 1461–1479. [Google Scholar] [CrossRef]
- Alijoyo, F., Aziz, T., Omer, N., Yusuf, N., Kumar, M., Ramesh, A., Ulmas, Z., & El-Ebiary, Y. (2025). Personalized marketing: Leveraging AI for culturally aware segmentation and targeting. Alexandria Engineering Journal, 119, 8–21. [Google Scholar] [CrossRef]
- Almuqhim, S., & Berri, J. (2025). AI-driven personalized microlearning framework for enhanced E-learning. Computer Applications in Engineering Education, 33, e70040. [Google Scholar] [CrossRef]
- Alserhan, A. B., Sumadi, A. M., Hadman, A., & Komodromos, M. (2024). Entrepreneurial networks and their impact on entrepreneurship intentions & PF as mediators. Special Issue of the Journal for Global Business Advancement (JGBA), 16(3), 317–342. [Google Scholar] [CrossRef]
- Ansah, E. (2025). Exploring sustainable AI efficient, ethics and future marketing practices. International Journal of Innovative Science and Research Technology, 10, 1845–1860. [Google Scholar] [CrossRef]
- A’yun, A., & Setyaningsih, W. (2025). Consumer empowerment through ethical AI: Strategies for transparent and trustworthy personalized marketing. Journal of Marketing Breakthroughs, 1(1), 1–12. [Google Scholar] [CrossRef]
- Bareis, J. (2024). The trustification of AI. Disclosing the bridging pillars that tie trust and AI together. Big Data & Society, 11(2), 20539517241249430. [Google Scholar] [CrossRef]
- Bingham, A. (2023). From data management to actionable findings: A five-phase process of qualitative data analysis. International Journal of Qualitative Methods, 22, 16094069231183620. [Google Scholar] [CrossRef]
- Bulchand-Gidumal, J., Secin, E., O’Connor, P., & Buhalis, D. (2023). Artificial intelligence’s impact on hospitality and tourism marketing: Exploring key themes and addressing challenges. Current Issues in Tourism, 27, 2345–2362. [Google Scholar] [CrossRef]
- Cagiza, C., Faustino, M., Cagiza, I., & Cajiza, A. (2025). AI-powered advisory platforms for sustainable marketing innovation in SMEs: Empirical evidence from underserved U.S. markets. Sustainability, 17, 9336. [Google Scholar] [CrossRef]
- Chatzichristos, G. (2025). Qualitative research in the era of AI: A return to positivism or a new paradigm? International Journal of Qualitative Methods, 24, 16094069251337583. [Google Scholar] [CrossRef]
- Chen, Y., Duan, Y., Yu, Y., & Guo, H. (2025). Reasons for frail older adults in nursing homes declining participation in exercise interventions: A life course perspective qualitative study. Journal of Advanced Nursing. [Google Scholar] [CrossRef]
- Christou, P. (2025). Reliability and validity in qualitative research revisited and the role of AI. The Qualitative Report, 30(3), 3306–3314. [Google Scholar] [CrossRef]
- Condé, L., & Münch, C. (2025). Resilient by design: Exploring the social abilities and actor-network roles of artificial intelligence in supply chain management. Journal of Business Logistics, 46, e70032. [Google Scholar] [CrossRef]
- Contini, F., Onţanu, E., & Velicogna, M. (2024). AI accountability in judicial proceedings: An actor–network approach. Laws, 13, 71. [Google Scholar] [CrossRef]
- Drozdowska, B., Cristall, N., Fladt, J., Jaroenngarmsamer, T., Sehgal, A., McDonough, R., Goyal, M., & Ganesh, A. (2025). Attitudes and perceptions regarding knowledge translation and community engagement in medical research: The PERSPECT qualitative study. Health Research Policy and Systems, 23, 1–13. [Google Scholar] [CrossRef]
- Egger, R., & Yu, J. (2025). The impact of real-time hyper-personalisation in AI-generated tourism images. Journal of Hospitality and Tourism Technology. [Google Scholar] [CrossRef]
- Gabelaia, I., & Hendieh, J. (2025). The transformative power of AI and its impact on business strategy, financial operations, and marketing decision-making: A case study method. International Journal of Innovation Science. [Google Scholar] [CrossRef]
- Grewal, D., Satornino, C. B., Davenport, T., & Guha, A. (2024). How generative AI is shaping the future of marketing. Journal of the Academy of Marketing Science, 53, 702–722. [Google Scholar] [CrossRef]
- Hardcastle, K., Vorster, L., & Brown, D. (2025). Understanding customer responses to AI-driven personalized journeys: Impacts on the customer experience. Journal of Advertising, 54, 176–195. [Google Scholar] [CrossRef]
- Komodromos, M., Anastasiadou, S., & Seremeti, L. (2026). Transforming corporate sustainability: Integrating advanced analytics and business intelligence into ESG strategy implementation. In M. Komodromos, L. Seremeti, L. Anastasiadis, P. Liargovas, & S. Anastasiadou (Eds.), Data-driven ESG strategy implementation through business intelligence (pp. 1–36). IGI Global Scientific Publishing. [Google Scholar] [CrossRef]
- Kshetri, N., Dwivedi, Y., Davenport, T., & Panteli, N. (2023). Generative artificial intelligence in marketing: Applications, opportunities, challenges, and research agenda. International Journal of Information Management, 75, 102716. [Google Scholar] [CrossRef]
- Kumar, L., & Devi, A. (2024). Beyond likes and shares: Unveiling the sequential mediation of brand equity, loyalty, image, and awareness in social media marketing’s influence on repurchase intentions for high-tech products. Qubahan Academic Journal, 4, 23–37. [Google Scholar] [CrossRef]
- Kumar, V., Ashraf, A., & Nadeem, W. (2024). AI-powered marketing: What, where, and how? International Journal of Information Management, 77, 102783. [Google Scholar] [CrossRef]
- Larsson, I., Siira, E., Nygren, J., Petersson, L., Svedberg, P., Nilsen, P., & Neher, M. (2025). Integrating AI-based triage in primary care: A qualitative study of Swedish healthcare professionals’ experiences applying normalization process theory. BMC Primary Care, 26, 340. [Google Scholar] [CrossRef] [PubMed]
- LeBrun, A. (2025). The risks of AI-generated, hyper-personalized digital advertisements. Philosophy & Technology, 38, 99. [Google Scholar] [CrossRef]
- Lim, W. (2024). What is qualitative research? An overview and guidelines. Australasian Marketing Journal, 33, 199–229. [Google Scholar] [CrossRef]
- Lozano-Paredes, L. (2025). Mapping AI’s role in NSW governance: A socio-technical analysis of GenAI integration. Frontiers in Political Science, 7, 1595345. [Google Scholar] [CrossRef]
- Magableh, I., Mahrouq, M., Ta’Amnha, M., & Riyadh, H. (2024). The role of marketing artificial intelligence in enhancing sustainable financial performance of medium-sized enterprises through customer engagement and data-driven decision-making. Sustainability, 16, 11279. [Google Scholar] [CrossRef]
- Megheirkouni, M., & Moir, J. (2023). Simple but effective criteria: Rethinking excellent qualitative research. The Qualitative Report, 28, 848–864. [Google Scholar] [CrossRef]
- Memon, M., Thurasamy, R., Ting, H., & Cheah, J. (2024). Purposive sampling: A review and guidelines for quantitative research. Journal of Applied Structural Equation Modeling, 9, 1–23. [Google Scholar] [CrossRef] [PubMed]
- Mendoza, J., Franco, H., & Torres, O. (2025). Artificial intelligence in digital marketing: Quantitative analysis of its impact on customer personalization. Journal of Posthumanism, 5, 4206–4218.
- Mikalef, P., Islam, N., Parida, V., Singh, H., & Altwaijry, N. (2023). Artificial intelligence (AI) competencies for organizational performance: A B2B marketing capabilities perspective. Journal of Business Research, 164, 113998.
- Milovanović, M., & Novaković, V. (2025). The strategic integration of artificial intelligence in marketing: Predictive analytics and personalization—The case of Mercedes-Benz. EMC Review—Časopis za ekonomiju—APEIRON, XV(V), 144–156.
- Naz, H., & Kashif, M. (2024). Artificial intelligence and predictive marketing: An ethical framework from managers’ perspective. Spanish Journal of Marketing—ESIC, 29, 22–45.
- Njiru, D., Mugo, D., & Musyoka, F. (2025). Ethical considerations in AI-based user profiling for knowledge management: A critical review. Telematics and Informatics Reports, 18, 100205.
- Nopas, D. (2025). Decentring the human: Actor-network theory and AI in Thai online learning communities. Asian Education and Development Studies, 14, 1052–1067.
- Ominyi, J., Nwedu, U., Agom, D., & Eze, U. (2025). Leading evidence-based practice: Nurse managers’ strategies for knowledge utilisation in acute care settings. BMC Nursing, 24, 252.
- Ortega-Bolaños, R., Bernal-Salcedo, J., Ortiz, M., Sarmiento, J., Ruz, G., & Tabares-Soto, R. (2024). Applying the ethics of AI: A systematic review of tools for developing and assessing AI-based systems. Artificial Intelligence Review, 57, 110.
- Paschen, J., Kietzmann, J., & Kietzmann, T. (2019). Artificial intelligence (AI) and its implications for market knowledge in B2B marketing. Journal of Business & Industrial Marketing, 34, 1410–1419.
- Peter, R., Roshith, V., Lawrence, S., Mona, A., Narayanan, K., & Yusaira, F. (2025). Gen AI—Gen Z: Understanding Gen Z’s emotional responses and brand experiences with Gen AI-driven, hyper-personalized advertising. Frontiers in Communication, 10, 1554551.
- Pham, T. (2025). Ethical and legal considerations in healthcare AI: Innovation and policy for safe and fair use. Royal Society Open Science, 12, 241873.
- Priyadharshini, D., Muthuvel, I., Saraswathy, S., Kavitha, P., & Jegadeeswari, V. (2025). Precision to plate: AI-driven innovations in fermentation and hyper-personalized diets. Frontiers in Nutrition, 12, 1659511.
- Radanliev, P. (2025). Privacy, ethics, transparency, and accountability in AI systems for wearable devices. Frontiers in Digital Health, 7, 1431246.
- Reed, C., Wynn, M., & Bown, R. (2025). Artificial intelligence in digital marketing: Towards an analytical framework for revealing and mitigating bias. Big Data and Cognitive Computing, 9, 40.
- Rusthollkarhu, S., Toukola, S., Aarikka-Stenroos, L., & Mahlamäki, T. (2022). Managing B2B customer journeys in digital era: Four management activities with artificial intelligence-empowered tools. Industrial Marketing Management, 104, 241–257.
- Sambakiu, O., Kujore, V., Adebayo, A., & Segbenu, B. (2025). Transformative role of generative AI in marketing content creation and brand engagement strategies. GSC Advanced Research and Reviews, 23, 1–11.
- Schneider-Kamp, A., Franco, P., Bajde, D., & Nøjgaard, M. (2024). (Dis)entangling actor-network theory and assemblage theory in consumer and marketing scholarship: A review and future directions. Journal of Marketing Management, 40, 1634–1665.
- Seremeti, L., Anastasiadis, L., & Komodromos, M. (2026). ESG strategy and business intelligence semantic network. In M. Komodromos, L. Seremeti, L. Anastasiadis, P. Liargovas, & S. Anastasiadou (Eds.), Data-driven ESG strategy implementation through business intelligence (pp. 37–48). IGI Global Scientific Publishing.
- Shemshaki, M. (2024). Exploring the ethical implications of AI-powered personalization in digital marketing. Data Intelligence, 7, 1035–1084.
- Song, Y., Yan, T., Jia, F., Chen, L., & Li, H. (2025). Developing generative AI for value co-creation: An intervention-based randomized field experiment in a healthcare context. Journal of Operations Management.
- Tao, M., Li, X., Alam, F., Yan, Y., & Chau, T. (2025). Unveiling the impact of AI technological anxiety on the marketers’ intention to adopt generative AI. Journal of Global Information Management, 33, 1–22.
- Vijayakumar, S., & Panwale, S. (2025). Evaluating AI-personalized learning interventions in distance education. The International Review of Research in Open and Distributed Learning, 26, 157–174.
- Wang, S., & Zhang, H. (2025). Generative AI in international hotel marketing: Impacts on employee creativity and performance. International Journal of Contemporary Hospitality Management, 37, 2601–2626.
- Wasilewski, A., Chawla, Y., & Prałat, E. (2025). Enhanced E-commerce personalization through AI-powered content generation tools. IEEE Access, 13, 48083–48095.
- Yang, J., Blount, Y., & Amrollahi, A. (2024). Artificial intelligence adoption in a professional service industry: A multiple case study. Technological Forecasting and Social Change, 201, 123251.
- Zheng, Z., Tan, Q., Zheng, X., & Yang, Y. (2025). The dark side of AI in insurance: A systematic review of mechanisms linking AI design features to consumer harm. Journal of Consumer Affairs, 59, e70034.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.