Abstract
Research on big data analytics (BDA) and supply chains often inventories “capabilities” but rarely explains how firms progress through adoption—or how governance over data and related resources shapes resilience outcomes. Drawing on 16 semi-structured interviews with senior managers in the manufacturing sector, we analyze organizational practices around data, analytics, and decision-making and synthesize a governed-adoption process framework. The framework specifies how five governance levers—ownership, standards, stewardship, access, lineage—operate differentially across four adoption gates (data plumbing → descriptive monitoring → predictive alerting → prescriptive decisioning). To move beyond staged descriptions, we make the underlying generative mechanisms explicit—Comparability, Explainability, Authorization, Fidelity, Executability—and link them to dynamic-capability microfoundations (sensing, seizing, reconfiguring) via decision-latency outcomes (“resilience timers”: Time-to-Detect, Time-to-Decide, Time-to-Reconfigure, Time-to-Recover). Brief deviant-case contrasts (e.g., notification without action; dashboards without owners) clarify boundary conditions under which governance enables or impedes resilient action. We also state concise, testable propositions (e.g., standards+lineage as a necessary condition for improving Time-to-Detect; ownership+access as necessary for improving Time-to-Decide) and provide gate exit-criteria to support evaluation and future comparative tests. Claims are bounded to analytic generalization from a single-country, manufacturing-sector qualitative sample; we make no assertion of statistical validation. Practically, the framework prioritizes governance work ahead of tool spend, helping organizations convert dashboards into repeatable decisions at speed.
1. Introduction
In the era of Industry 4.0, manufacturing and supply chain systems are undergoing a fundamental shift driven by the exponential growth in data and advances in digital technologies. This transformation—often described as a transition from the “Age of IT” to the “Age of Data” [1,2]—has been enabled by the widespread adoption of the Internet of Things (IoT), Artificial Intelligence (AI), and Big Data Analytics (BDA) [3]. Together, these technologies promote automation, interoperability, and real-time decision-making, laying the foundation for smarter, more adaptive, and responsive manufacturing systems.
Among these technologies, BDA has emerged as a strategic enabler of digital transformation by empowering firms to derive actionable insights from high-volume, high-velocity, and high-variety data. BDA entails advanced analytical techniques that uncover patterns, predict trends, and support evidence-based decision-making [4,5,6]. In manufacturing, established applications include production scheduling, quality monitoring, demand forecasting, inventory management, and logistics optimization—capabilities that enhance operational visibility and responsiveness [7,8]. For supply chains, these gains extend beyond performance; analytics can strengthen the capacity to anticipate, absorb, and recover from disruptions, thus contributing to supply chain resilience (SCRes) [9,10].
SCRes—defined as the capability to anticipate, absorb, and recover from disruptions—has become a strategic necessity amid geopolitical instability, pandemics, and volatile demand patterns [9,11]. BDA plays a critical role in strengthening SCRes by empowering firms with predictive and prescriptive analytics that support early risk detection, disruption simulation, and contingency planning [10,12]. Several studies confirm that BDA improves risk visibility, enhances collaboration, and increases adaptive capacity, thereby supporting the development of resilient and agile supply networks [13,14,15].
Research Gap and Study Focus
Despite clear evidence that BDA supports both performance and resilience, the adoption of BDA technology in manufacturing supply chains remains challenging and uneven. It is important to distinguish adoption from the longer-term development of an analytics capability. Whereas capability denotes a mature, embedded competence, adoption refers to the initiation, implementation, and embedding of BDA tools and practices in day-to-day operations [16]. A careful review of the literature on BDA adoption in supply chain contexts reveals three persistent gaps.
Gap 1: Resource-centric adoption is under-specified. Digital transformation is fundamentally not about technology but about strategy [17]. Effective adoption requires mobilizing human expertise, technological infrastructure, leadership commitment, and organizational learning and integrating these into everyday operations [18,19,20]. But much organizational-resource work is implicitly framed around capability development rather than the resources and governance arrangements required to adopt BDA. Although several studies examine organizational and technological readiness in general business or IT settings [21,22,23,24], they rarely specify the particular resources needed to adopt BDA for resilient manufacturing supply chains. Recent work on readiness [25,26] similarly overlooks the distinct resilience demands placed on analytics. While reviews consolidate positive links between BDA and supply chain capabilities/performance, they—like most survey work—do not explain how resources are mobilized or how adoption becomes embedded in daily operations [27,28,29].
Gap 2: The adoption–resilience connection lacks process depth. Many studies highlight benefits such as agility, visibility, sustainability, and environmental performance [30,31,32,33], and sustainability-oriented reviews link BDA adoption to triple-bottom-line outcomes in manufacturing [29]. Yet few explain how specific adoption choices generate concrete resilience mechanisms. Although barrier mappings enumerate security/privacy, infrastructure, skills, standards/regulation, ownership/interoperability, and use-case clarity, we still lack longitudinal accounts of how firms overcome these interdependent obstacles to achieve resilience outcomes [34]. Recent manufacturing-focused studies [29,30,34,35] prioritize operations, performance, decision-making, and sustainability (as shown in Table 1), but they seldom explicitly assess BDA adoption in a resilience context. Broader Industry 4.0 syntheses integrate cross-technology drivers and outcomes but offer limited insight into the organizational processes through which firms adopt BDA for resilient operations [27,29,36].
Table 1.
Recent literature on BDA adoption in a supply chain context.
Gap 3: Context sensitivity and a shortage of manufacturing-specific qualitative evidence. Much adoption research uses generic, SME, or mixed-sector samples (often TOE-based surveys) that do not address the distinctive integration demands, partner interdependencies, and disruption contexts of manufacturing supply chains [2,23,31,33,37]. Results vary across countries and sectors; expected enablers are sometimes insignificant, moderators fail to hold, and institutional levers are especially salient for SMEs [32,35,38]. These inconsistencies suggest there are measurement/model limits and hidden process factors (governance, cross-functional coordination, and partner data readiness) that surveys and meta-analyses cannot identify. While some qualitative progress exists [40], this work typically falls short of tracing the full resource–barrier–benefit chain to resilience outcomes. Similarly, integrative reviews on manufacturing big data ecosystems and IoT emphasize technology stacks and performance benefits but devote less attention to how adoption processes deliver resilience [41].
Responding to these gaps, this study examines BDA technology adoption—rather than capability building—in manufacturing supply chains. We focus on (i) the organizational resources firms must assemble, (ii) the adoption challenges they confront, and (iii) the resilience benefits realized once tools are embedded in work routines. This addresses calls for context-rich, process-level explanations of human, governance, and cross-tier adoption processes that enable resilience [36,42]. A qualitative design is appropriate and timely; dominant evidence on antecedents remains quantitative and cross-industry [23,31,43], whereas manufacturing case research shows that barrier structures are situated and interdependent, with governance and leadership exerting strong driving power [44,45]. By clarifying how manufacturing firms mobilize resources, navigate adoption barriers, and convert BDA use into visibility, flexibility, and faster recovery, this study complements capability-centric models [46,47] and offers actionable guidance for resilient operations under Industry 4.0. Accordingly, the study is guided by the following research questions:
- RQ1: “What organizational resources—and governance arrangements—are critical to adopting and embedding BDA in manufacturing supply chains to strengthen resilience?”
- RQ2: “What challenges do manufacturing firms encounter when adopting BDA for resilient supply chain operations?”
- RQ3: “What resilience benefits—in terms of decision-latency timers (Time-to-Detect/ Decide/Reconfigure/Recover)—do manufacturing supply chains realize following BDA adoption?”
Study Contributions. This paper makes three contributions. First, we shift attention from steady-state “capabilities” to the adoption process and theorize a governed-adoption lens in which a data-governance overlay—specified by the following five levers: ownership, standards, stewardship, access, and lineage—coordinates otherwise ordinary resources (infrastructure, hybrid human capital, leadership, and partner interfaces) into use-in-practice within manufacturing supply chains. Second, we map adoption as a staged sequence of gates (data plumbing → descriptive monitoring → predictive alerting → prescriptive decisioning) and show how progressing through these gates activates the microfoundations of dynamic capabilities—sensing, seizing, and reconfiguring—to produce resilience lifecycle outcomes (prediction, response, recovery). We visualize this mechanism and return to it in the Discussion section to interpret cross-case patterns. Third, we identify the boundary conditions that moderate these effects—(i) firm size/digital maturity (with SMEs facing sharper Opex and talent-retention constraints) and (ii) supply-chain tier position/power asymmetry (which shapes access to downstream demand signals and data-sharing terms)—and derive governance implications for inter-organizational information processing.
Building on these contributions, we make explicit the five gate-conditional generative mechanisms—Comparability, Explainability, Authorization, Fidelity, and Executability—that operate across the four adoption gates and link them to dynamic-capability microfoundations via observable decision-latency outcomes (“resilience timers”: Time-to-Detect (TtD), Time-to-Decide (TtDec), Time-to-Reconfigure (TtRcf), and Time-to-Recover (TtR)). Our account specifies (i) which mechanism “fires” at which gate, (ii) the associated timer (TtD, TtDec, TtRcf, or TtR) expected to improve, and (iii) a set of testable propositions. This converts the link to dynamic capabilities from a descriptive mapping into a mechanism-based, falsifiable process theory.
The remainder of the paper is organized as follows. Section 2 introduces the core concepts of BDA and SCRes, emphasizing the role of BDA adoption in resilient supply chain operations. Section 3 details the research methodology, including design, data collection, and analysis procedures. Section 4 presents the findings on resources, challenges, and benefits in manufacturing settings. Section 5 presents the BDA adoption framework with its resource and data governance overlay. Section 6 discusses theoretical and practical implications, outlines directions for future research, and notes the limitations.
2. Literature Review
2.1. Big Data Analytics
Big Data refers to vast, complex, and high-velocity datasets—often unstructured or semi-structured—that exceed the capacity of traditional data processing systems. These datasets are commonly described by the following “5Vs”: volume, velocity, variety, veracity, and value [48,49]. In supply chain management, the effective utilization of Big Data can substantially reduce response time and computational costs, particularly in volatile, data-intensive environments [50]. In parallel, the rapid growth of the “digital universe” underscores the challenge of low value density unless data are curated and analyzed; only a fraction of generated data yields value without governance and analytics [51].
Big Data Analytics (BDA) encompasses the methods, architectures, and technologies used to collect, process, integrate, and analyze such data to generate actionable insights and support strategic decision-making [19,52]. Drawing on the taxonomy of descriptive, predictive, and prescriptive analytics [53], BDA supports retrospective performance evaluation as well as forward-looking predictions and optimization-based recommendations (see Table 2). In supply chains, BDA has helped to address longstanding challenges—data sparsity, data quality issues, and “low value density”—particularly when combined with multi-criteria decision-making (MCDM) models [54]. Recent reviews also document BDA’s role in logistics/operations across forecasting, routing, inventory, and end-to-end visibility [55,56,57,58].
Table 2.
Taxonomy of Big Data Analytics (BDA).
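To make the descriptive/predictive/prescriptive taxonomy concrete, the following toy sketch (all numbers, thresholds, and variable names are invented for illustration, not drawn from the study) contrasts the three analytics classes on the same weekly demand series:

```python
# Toy illustration (invented numbers) of the descriptive/predictive/
# prescriptive analytics taxonomy applied to weekly demand data.
demand = [120, 135, 128, 140, 150, 145]  # hypothetical weekly units sold

# Descriptive: summarize what happened.
avg_demand = sum(demand) / len(demand)

# Predictive: naive 3-week moving-average forecast of next week's demand.
forecast = sum(demand[-3:]) / 3

# Prescriptive: recommend an order quantity given assumed on-hand stock
# and a safety buffer (both values hypothetical).
on_hand, safety_stock = 60, 30
order_qty = max(forecast + safety_stock - on_hand, 0)
```

In practice, each step would draw on far richer models (seasonality, causal drivers, optimization under constraints), but the progression from reporting, to forecasting, to recommended action is the same.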
Over the last decade, BDA has evolved from an operational tool to a strategic imperative. It enables real-time monitoring, predictive modeling, risk sensing, and adaptive decision-making—capabilities essential for competing under uncertainty [51,61,62]. Moreover, the rise of cloud computing, edge analytics, and AI-based automation has significantly amplified the real-time value extraction from big data ecosystems [63,64]. In supply chains, BDA is known to enhance responsiveness, reduce operational costs, improve demand forecasting, and enable end-to-end visibility [55,65]. Firms have accordingly increased investment in data and analytics initiatives, with executive surveys consistently reporting enterprise-wide prioritization of data/AI programs [5].
BDA also strengthens collaboration by improving data transparency and interoperability across tiers [66,67]. Firms implementing integrated analytics platforms report higher supply chain integration and performance, suggesting BDA is not merely a technology layer but a catalyst for strategic agility and sustained competitiveness in digitally driven markets [68,69]. Consistent with our study’s focus, we distinguish BDA adoption—the initiation, implementation, and embedding of concrete tools, data plumbing, and decision routines—from BDA capability, a more mature, routinized organizational competence. Much of the extant literature examines capabilities and their correlates; by contrast, we attend to adoption processes and governance choices that make analytics actionable in day-to-day supply chain work. This distinction sets up our later analysis of how governed adoption sequences translate into resilience outcomes.
2.2. Supply Chain Resilience
Supply chain resilience (SCRes) is commonly defined as a supply chain’s capacity to anticipate, absorb, adapt to, and recover from disruptions while maintaining—or rapidly restoring—an acceptable level of performance [9,70]. Building on this definition, contemporary work frames the resilience cycle as a process with interlinked stages of preparation/prediction, response, and recovery (as shown in Figure 1), often extended to include post-event learning and adaptation that reshape routines for future shocks [71,72,73]. In practice, organizations combine proactive measures (e.g., redundancy, buffers, risk mapping, and scenario planning), reactive measures (e.g., crisis response and expedited logistics), and hybrid approaches that balance efficiency with flexibility; classic contributions contrast redundancy with flexibility, arguing that the latter can generate day-to-day benefits alongside shock absorption [74,75]. At the network level, resilience also depends on how disturbances propagate—the “ripple effect”—and on the speed and coordination with which firms reconfigure flows and capacities across the supply base [76,77]. Quantification typically relies on trajectory-based metrics such as the resilience “triangle” (area of performance loss over time) and operational diagnostics like Time-to-Recover (TTR) and Time-to-Survive (TTS), which help prioritize mitigation investments and recovery planning [78,79,80]. Together, these perspectives position SCRes as a dynamic capability grounded in sensing, seizing, and reconfiguring resources under turbulence [81].
Figure 1.
Three-stage resilience lifecycle. Source: [71,82].
Empirical evidence from recent crises further clarifies mechanisms and contexts. During COVID-19, firms with stronger visibility, coordination, and digital readiness were better able to sustain operations and shorten recovery horizons, reinforcing visibility- and flexibility-based pathways to resilience [83,84,85]. Comparative studies also highlight that resilience is multi-mechanistic—visibility supports early warning and rapid response; flexibility enables the reconfiguration of sourcing, production, and distribution; and collaboration underpins synchronized recovery across tiers—while the salience of each pathway varies by disruption type and industry [72]. These insights complement design-oriented guidance (e.g., dual sourcing, postponement, and decoupling points) by emphasizing the organizational routines and inter-organizational coordination that translate preparedness into operational continuity [74,75].
Across recent studies, the following three mechanisms dominate the link from analytics to resilience: visibility (shared, near–real-time network state), flexibility (rapid reconfiguration of sourcing, production, and logistics), and collaboration (coordinated, data-driven responses). During COVID-19, visibility/control-tower architectures and agile reprioritization were especially salient, whereas in trade disputes or localized shocks, supplier-risk sensing and flexible routing featured more prominently [85,86]. This variation by disruption type underscores the need to study adoption sequences—how data plumbing, governance, and partner interfaces are staged—rather than only correlating high-level “capabilities” with outcomes.
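The trajectory-based metrics noted above (the resilience “triangle” as the area of performance loss over time, and Time-to-Recover) can be sketched numerically. The following minimal Python illustration (the performance series, baseline, and disruption time are invented for the sketch) computes the triangle area via the trapezoid rule and TTR as the first post-disruption return to baseline:

```python
# Minimal sketch of trajectory-based resilience metrics: the resilience
# "triangle" (area of performance loss below baseline) and Time-to-Recover.
# All data are hypothetical.

def resilience_triangle_area(times, performance, baseline):
    """Area of performance loss below baseline, via the trapezoid rule."""
    area = 0.0
    for i in range(1, len(times)):
        loss_prev = max(baseline - performance[i - 1], 0.0)
        loss_curr = max(baseline - performance[i], 0.0)
        area += 0.5 * (loss_prev + loss_curr) * (times[i] - times[i - 1])
    return area

def time_to_recover(times, performance, baseline, disruption_time):
    """First time after the disruption at which performance regains baseline."""
    for t, p in zip(times, performance):
        if t > disruption_time and p >= baseline:
            return t - disruption_time
    return None  # not yet recovered within the observed window

# Weekly performance index: disruption at week 2, full recovery by week 6.
weeks = [0, 1, 2, 3, 4, 5, 6, 7]
perf = [100, 100, 60, 70, 80, 90, 100, 100]
area = resilience_triangle_area(weeks, perf, baseline=100)
ttr = time_to_recover(weeks, perf, baseline=100, disruption_time=2)
```

A smaller triangle area or shorter TTR indicates a more resilient trajectory, which is why such metrics are used to prioritize mitigation investments.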
2.3. BDA Adoption for Resilient Supply Chains
BDA adoption for resilient supply chains is best understood as a process that moves from motivation and scoping, through piloting and integration, to routinization and scaling—each stage requiring distinct decisions about resources, governance, and partner coordination. Classic IT-assimilation work describes these steps as initiation, adoption, adaptation, acceptance, routinization, and infusion [87]. Recent qualitative research in supply chain planning shows that, in practice, firms “orchestrate” people, data, and technology across these stages as follows: they select resilience-relevant use cases (e.g., risk sensing, end-to-end visibility), secure leadership sponsorship, assemble cross-functional teams, and sequence pilots to surface dependencies before wider rollout [25]. This adoption lens differs from (and complements) capability work by emphasizing the concrete choices that make analytics stick—what data must be shared, which platforms are interoperable with legacy systems, how decision rights are allocated, and how new routines are embedded in control towers and S&OP cycles—so that resilience mechanisms materialize in day-to-day operations.
Across the early stages, three “adoption workstreams” commonly run in parallel. First, firms undertake data work—establishing decision rights and stewardship (who owns what data and who decides on quality, access, and lifecycle), setting quality standards, and stitching together heterogeneous systems inside the firm and across partners [88,89]. Without this governance–quality–integration foundation, analytics underperforms and, in supply chains specifically, poor data quality can derail disruption sensing and response [90]. Second, firms choose and configure analytics tooling—often starting with descriptive dashboards for visibility, then adding predictive models for early warning and prescriptive optimization/simulation for mitigation and recovery planning [40]. Third, firms redesign decision routines as follows: who monitors which signals, when playbooks trigger, how scenario results flow into expedited sourcing, re-routing, or dynamic inventory positioning. Process changes of this kind—codifying “when–who–how” rules—are what convert pilots into resilient operations and are repeatedly highlighted in pandemic and post-pandemic case evidence [91].
Adoption in manufacturing supply chains is also constrained (and enabled) by inter-organizational arrangements. Even when firms can technically connect data, willingness to share is shaped by data ownership, intellectual-property concerns, competitive sensitivity, and regulatory regimes. Recent work shows that inter-organizational data governance and emerging “data spaces” (sovereign, standards-based environments) are becoming pivotal for scaling cross-firm analytics without forfeiting control [92,93,94]. These constraints help explain persistent barrier “cascades” observed in manufacturing adoption studies—legacy integration hurdles, skill shortages, unclear use-case value, privacy/security concerns, and lack of standards—that must be tackled in a deliberate sequence rather than as isolated issues [44,45]. In other words, resilience outcomes depend not only on buying tools but on how firms structure data rights with suppliers, set minimum quality thresholds, and agree on alert/response protocols across tiers.
What do resilient outcomes look like once adoption progresses? Synthesis across recent studies suggests the following three recurring mechanisms: (i) visibility (shared, near-real-time state of the network), (ii) flexibility (faster reconfiguration of sourcing, production, logistics), and (iii) collaboration (coordinated responses and joint decision-making). Large-sample analyses show resilience improvements mediated by visibility and flexibility, but they are largely agnostic about the adoption micro-steps that create those mediators [47]. Case work during COVID-19, by contrast, traces how firms that had already adopted (and embedded) analytics—control-tower visibility, exception playbooks, and rapid scenarioing—sustained operations and shortened recovery horizons [91]. Outside pandemic contexts, similar adoption choices enable rapid response to trade shocks and natural hazards, underlining that the mechanisms generalize beyond a single crisis [40]. Importantly, the quantitative predominance of survey/SEM designs in the literature pinpoints correlations but is less able to recover these processual adoption dynamics—sequencing, path dependence, and governance choices—which is why multiple reviews explicitly call for qualitative, context-rich research on how adoption decisions translate into resilience on the factory floor and across partner networks [42,64].
Finally, framing BDA for resilience as an adoption problem clarifies managerial levers. Early scoping should prioritize resilience-salient use cases (e.g., multi-tier visibility, supplier risk sensing, and dynamic rerouting), link them to specific data contracts and stewardship roles, and select platforms proven to interoperate with legacy MES/ERP and partner systems [95,96]. Pilots should be designed to test not only model accuracy but also governance (who can see what), handoffs (who acts on which alert), and partner readiness; lessons then feed into routinization (SOP updates, KPI changes, and training) and scaling (on-boarding more suppliers and lanes). Where organizations follow this adoption pathway, evidence indicates they are more likely to realize the promised resilience mechanisms—fewer blind spots, faster reconfiguration, and synchronized recovery—rather than ending up with analytics “installed but not used”.
3. Research Methodology
This study employs a qualitative, exploratory research design to examine the organizational resources, key challenges, and resilience-related benefits associated with the adoption of BDA in resilient manufacturing supply chains. Qualitative inquiry is well-suited to capture processual, context-sensitive phenomena and lived experience, producing rich insight into how actors mobilize resources, overcome barriers, and embed analytics in everyday work [97]. Semi-structured interviews were used to elicit detailed accounts while allowing probing and follow-up on emergent issues [98,99]. A summary of the adopted research methodology is shown in Figure 2.
Figure 2.
Adopted research methodology.
3.1. Research Design and Rationale
Given limited prior process-level research specifically on resources for BDA adoption to enhance SCRes in manufacturing, an exploratory qualitative design enables inductive explanation-building about mechanisms and context [100,101]. Because cross-sectional surveys and meta-analyses typically measure static associations, they struggle to reveal the processual dynamics of adoption (sequencing, decision rights, lineage, and playbooks) that determine whether analytics actually produces resilience in daily operations—hence our qualitative, interview-based design. Semi-structured interviews balance comparability across cases with openness to unanticipated themes, which is appropriate when technologies, governance arrangements, and cross-tier data practices are evolving [98,99].
We conducted the study in the Malaysian manufacturing industry. Malaysia provides a theoretically informative setting: its manufacturing base spans electronics, automotive, food and beverages, chemicals, and metal products, with heterogeneous digital maturity across firm sizes and subsectors. Policy initiatives (e.g., Industry4WRD) have explicitly targeted digitalization and analytics adoption in manufacturing, creating variation in readiness and organizational responses that is valuable for analytic generalization and transferability [102]. Studying one country improves contextual coherence for process tracing while enabling cautious transfer to similar emerging and middle-income manufacturing ecosystems.
3.2. Data Collection
This study conducted 16 semi-structured interviews with senior management personnel from Malaysian manufacturing firms. This number meets or exceeds commonly cited thresholds for data saturation in semi-structured interview studies, for which classic and recent empirical work provides guardrails. Guest et al. [103] found thematic saturation within the first 12 interviews in a homogeneous sample, a figure widely used to benchmark interview designs. Hennink et al. [104] distinguish code saturation from meaning saturation, reporting that meaning saturation typically required 16–24 interviews, which underscores why studies seeking nuanced conceptualization often go beyond a dozen interviews. Hagaman and Wutich [105] indicate that 16 interviews can identify metathemes in relatively homogeneous groups. A recent systematic review by Hennink and Kaiser [106] shows that most empirical datasets reached saturation between 9 and 17 interviews (mean = 12–13) when topics were focused and samples were relatively homogeneous.
Participants, including supply chain managers, operations heads, and production managers, were selected through purposive sampling because of their direct involvement in data-driven decision-making and supply chain transformation initiatives [107]. In addition, snowball sampling was used to identify further qualified informants, enhancing the richness of the dataset [108,109]. Participant profiles are detailed in Table 3.
Table 3.
Details of the interview participants.
We set inclusion criteria to reach the most appropriate participants and obtain valuable insights. Participants met all of the following: (i) managerial responsibility in supply chain, operations, production, analytics, or IT; (ii) a minimum of ten years’ experience in a relevant role in the manufacturing industry; (iii) direct involvement in at least three BDA initiatives dealing with supply chain planning/operations (e.g., demand/supply analytics, quality/condition monitoring, logistics, and risk/visibility); and (iv) current employment in a Malaysian manufacturing firm. Furthermore, to limit same-network bias, we (a) seeded referrals in diverse firms (electronics, food, metals, and chemicals) and sizes (medium/large); (b) capped referrals at one nominee per participant; and (c) prioritized nominations that increased diversity by function or tier [110].
Data were collected between January and April 2025 via secure video-conferencing. Following the five-phase development framework of Kallio et al. [99], we prepared an interview guide (with sample probes), which is provided in Appendix A for replicability. Each interview lasted 45–60 min, was audio-recorded with consent, and professionally transcribed verbatim. Participants received an information sheet and provided e-consent prior to scheduling. To facilitate candor, we informed participants that no company names or personally identifying details would be reported.
We tracked saturation systematically using a running codebook and a simple accrual analysis. Code saturation (no substantive new first-order codes) was reached at Interview 14; Interviews 15–16 were conducted to confirm meaning saturation (no new dimensions/nuances of existing codes) and to test theme stability across subsectors [103,104,106,111]. Our stopping criterion was met when the incremental code gain fell below 5% across two consecutive interviews and no new relationships among codes were identified.
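The stopping criterion above can be expressed as a simple accrual check. The sketch below (hypothetical code sets, not the study’s actual codebook; the additional check for new relationships among codes is omitted) flags the interview at which incremental code gain has stayed below 5% for two consecutive interviews:

```python
# Illustrative sketch of the saturation stopping rule: stop once the
# incremental first-order code gain stays below 5% for two consecutive
# interviews. The code sets below are hypothetical.

def saturation_point(codes_per_interview, threshold=0.05, window=2):
    """Return the 1-based interview index at which incremental code gain
    has remained below `threshold` for `window` consecutive interviews,
    or None if the criterion is never met."""
    seen = set()
    consecutive_below = 0
    for i, codes in enumerate(codes_per_interview, start=1):
        new_codes = set(codes) - seen
        gain = len(new_codes) / len(seen) if seen else 1.0
        seen |= set(codes)
        consecutive_below = consecutive_below + 1 if gain < threshold else 0
        if consecutive_below >= window:
            return i
    return None

# Hypothetical first-order codes elicited per interview.
codes = [
    {"c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10"},
    {"c1", "c2", "c11", "c12"},  # gain 2/10 = 20%
    {"c1", "c13"},               # gain 1/12 ~ 8.3%
    {"c2", "c3"},                # gain 0% (1st consecutive below threshold)
    {"c1"},                      # gain 0% (2nd consecutive below) -> stop
]
stop_at = saturation_point(codes)
```

Making the rule explicit in this way supports transparency: the same accrual table can be reported alongside the codebook so readers can audit when and why data collection stopped.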
3.3. Data Analysis
In line with best practices for qualitative inquiry [112], the transcribed data were analyzed using thematic analysis. Following Braun and Clarke’s six-phase framework [113], the analysis involved the following: (1) familiarization with data, (2) generation of initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the final report.
Coding was conducted manually to allow deep immersion and interpretation of meaning. Manual coding was selected to maximize close engagement with a compact dataset; CAQDAS can assist organization but does not replace analytic judgment or produce themes [114,115]. Key codes and themes were iteratively refined through constant comparison across interviews. To enhance reliability, two researchers independently coded a subset of transcripts to develop and calibrate the codebook; discrepancies were resolved through discussion and memoing to refine code definitions and decision rules. Themes were then categorized under the following three primary domains: Resources, Challenges, and Benefits—each aligned with the research questions.
We did not compute intercoder reliability statistics; instead, we emphasized transparent documentation, reflexivity, and negotiated consensus as markers of quality [116,117]. An audit trail and transparency were maintained throughout the research process through (i) versioned interview protocols; (ii) codebook iterations with change logs; (iii) analytic memos and theme development notes; (iv) a reflexive journal; and (v) a decision log linking analytic moves to evidence—supporting credibility, dependability, and confirmability [118,119,120]. All participants were informed about the purpose of the study; ethical approval was obtained from the MMU Research Ethical Committee, and data confidentiality was strictly maintained.
4. Findings
Consistent with the study aims, we organize the results around the following three research questions: RQ1 (organizational resources for BDA adoption), RQ2 (challenges and barriers during adoption), and RQ3 (resilience-related benefits realized after adoption). All extracted resources, barriers, and benefits are summarized in Table 4. We clarify how these elements function within a process theory of governed adoption. Specifically, each item maps to one of five generative mechanisms—Comparability, Explainability, Authorization, Fidelity, and Executability—that operate gate-conditionally across adoption stages (data plumbing → descriptive monitoring → predictive alerting → prescriptive decisioning). For transparency and testability, under selected items, we note the linked mechanism, the relevant gate(s), and the decision-latency outcome affected (resilience timers: Time-to-Detect, Time-to-Decide, Time-to-Reconfigure, and Time-to-Recover), and we provide a brief deviant case where the mechanism is absent (e.g., “notification without action”).
Table 4.
A summary of the thematic synthesis of resources, barriers, and benefits for BDA adoption.
4.1. RQ1: Organizational Resources for BDA Adoption
Through the thematic analysis of the interview transcripts, several key themes emerged. These themes represent core enablers that influence the success of BDA implementation and were consistently emphasized by participants as foundational requirements. Below, each resource is tagged with its operating mechanism (M1–M5), adoption gate(s) (G1–G4), and the timer it primarily influences.
4.1.1. Technological Infrastructure
Technological infrastructure emerged as a high-salience enabler across cases, repeatedly described as the foundation that turns BDA from a concept into routinized practice for resilience. Participants emphasized that value flowed less from “owning tools” than from integrated and scalable systems—cloud services, IoT capture at the shop floor, and ERP/MES connectivity feeding a unified data layer—that lowered data latency and raised coverage for decision-making. As one senior SC manager reflected,
“We started collecting sensor data from our production lines, and with the help of a cloud-based platform, we enabled live analytics. That changed how we looked at our operations—BDA became an actual capability, not just an idea”.
A production lead similarly linked integration to execution speed as follows:
“Our procurement and warehouse systems are now connected through a unified data layer. That’s what allows our analytics models to run without delay and supports real-time decisions”.
Importantly, deviant evidence tempered this consensus; several firms invested in modern platforms yet reported limited resilience gains where integration remained partial (parallel spreadsheets persisted), data ownership and quality rules were unclear (outputs were distrusted), or frontline routines were not adjusted to act on alerts. In short, infrastructure is necessary but not sufficient; resilience improvements materialize when the stack is interoperable, governed, and coupled to clear “who acts/when/how” procedures.
Linking infrastructure to the resilience lifecycle, interviewees consistently tied (i) prediction to high-coverage, low-latency data capture, and standardized pipelines for early warning; (ii) response to real-time system connectivity that makes re-scheduling, re-routing, and inventory rebalancing executable within minutes; and (iii) recovery to auditable stores and lineage that accelerate root-cause analysis, targeted recalls, and ramp-up. In our process account, these same technical moves instantiate generative mechanisms at specific adoption gates as follows: harmonized identifiers and definitions across ERP–MES–WMS during data plumbing and descriptive monitoring (G1–G2) create Comparability, reducing reconciliation work and thereby shortening Time-to-Detect; end-to-end lineage embedded in reports at G2 strengthens Explainability, which speeds root-cause isolation and lowers Time-to-Recover; and, where predictive alerts at G3 are coupled to prescriptive hooks that open tickets in ERP/MES/WMS at G4, Executability is achieved, compressing Time-to-Reconfigure. By contrast, where IDs are not harmonized, lineage is missing, or alerts lack an authorized actor, firms described “dashboards for information only” and notification without action, with decisions delayed while ownership or approvals were clarified—an observable stall that lengthens Time-to-Decide despite substantial tooling.
4.1.2. Skilled Human Capital
Participants framed skilled human capital as the decisive lever that converts data and tools into resilient action. What mattered most was not narrow technical prowess alone, but hybrid proficiency—people who understand both analytics and the operational context, can interpret signals, and can mobilize cross-functional responses. As one production head noted,
“We invested in software tools and sensors, but the real progress came when we had people who could read the data, make sense of it, and convert it into action”.
Firms that prioritized structured upskilling and cross-functional learning reported smoother embedding of BDA into daily routines, as follows:
“We started training sessions last year for our planning and logistics team on basic analytics tools, just so they can start thinking in a data-driven way. When your own people are confident in the data and can explain it, it builds trust across the organization and pushes others to use it”.
In our process account, these human elements instantiate generative mechanisms at specific adoption gates. Most prominently, when line supervisors, planners, and schedulers are named owners of alerts and hold the associated decision rights (during predictive alerting at G3), Authorization is achieved as follows: alerts convert into authorized actions without escalatory delay, compressing Time-to-Decide. Where training also encompasses data stewardship routines during descriptive monitoring (G2)—such as validating definitions, checking timeliness/accuracy, and resolving exceptions—human capital supports Fidelity, preserving signal quality so previously gained detection/decision improvements do not decay over time. In teams that couple G3 decision rights with G4 system hooks (e.g., the authority and know-how to trigger re-sequencing, re-routing, or substitute BOMs in ERP/MES/WMS), skilled staff additionally enable Executability, which shortens Time-to-Reconfigure.
A contrasting pattern clarifies the boundary condition: where competence is concentrated in a small central team or an outsourced vendor—without line ownership, role-based access, or stewardship routines—participants described “analytics installed but not used”. In such cases, frontline staff waited for ad hoc approvals, alerts accumulated, and decisions were postponed while authority was clarified—an observable stall that lengthened Time-to-Decide despite available tooling. Thus, skilled human capital functions as an enabler when hybrid skills are paired with explicit ownership and access at G3 and stewardship at G2; it becomes a barrier when those elements are absent, limiting the translation of analytical insight into timely, coordinated action across the resilience lifecycle.
4.1.3. Organizational Culture and Data-Driven Mindset
Participants consistently portrayed culture as the multiplier that determines whether BDA is acted upon or ignored. The emphasis was on a data-driven mindset that spans executives to frontline staff—normalizing the use of dashboards, alerts, and evidence in place of intuition and creating psychological safety to experiment and learn from misses. As one operations manager put it,
“It’s not only about having the system; it’s about changing the way we think. Top management and even middle managers need to rely on data more than gut feeling”.
Another manager described how visible leadership behaviors reinforce adoption:
“We are moving from gut-based decisions to dashboard-driven insights. That shift in mindset has made our teams more responsive and confident in handling uncertainties. Supply chain issues are rarely isolated. We need input from finance, operations, and even marketing. Our data-sharing culture helps us respond faster and align strategies”.
Analytically, interviewees differentiated culture from (yet connected it to) leadership and governance as follows: leadership signals expectations (e.g., asking for evidence in reviews), governance enables access and stewardship, while culture shapes day-to-day habits—who checks which metrics, when exceptions are escalated, and whether people trust and act on model outputs. In our process account, these cultural habits instantiate generative mechanisms at specific adoption gates. Where teams routinely align on common definitions and verify numbers during data plumbing and descriptive monitoring (G1–G2), they enact Comparability; shared identifiers and disciplined metric use reduce reconciliation debates and shorten Time-to-Detect. When managers expect dashboards to show source-to-report provenance and encourage questions about how a figure was produced (G2), they foster Explainability as follows: lineage becomes a normal part of reviews, which speeds root-cause isolation and lowers Time-to-Recover. Crucially, a culture that treats alerts as work to be done—with named roles empowered to act—supports Authorization at predictive alerting (G3) as follows: decisions are taken within role-based guardrails rather than escalated ad hoc, compressing Time-to-Decide. Finally, when teams are accustomed to closing the loop in operational systems (e.g., issuing work orders, re-sequencing, or re-routing through ERP/MES/WMS), they normalize Executability at prescriptive decisioning (G4), which reduces Time-to-Reconfigure. These routines are sustained by Fidelity norms (G2–G3)—blameless post-event reviews, stewardship checklists, and recognition for data care—that maintain signal quality so earlier timer gains do not decay.
A minority of respondents offered a deviant pattern that clarifies culture’s boundary condition: where norms privileged intuition, data were hoarded between functions, or fear of blame discouraged querying numbers, teams reported that dashboards became peripheral, lineage was rarely inspected, and alerts lacked clear owners. In such settings, people hesitated to act without senior approval, and cross-functional coordination arrived late; the observable results were longer reconciliation debates (eroding Time-to-Detect) and notification without action (stretching Time-to-Decide) despite adequate tools. In short, culture functions as an enabler when it embeds disciplined metric use, provenance questioning, role-based action, and blameless learning into daily work; it becomes a barrier when those norms are absent, leaving analytical insight untransformed into timely, coordinated action across the resilience lifecycle.
4.1.4. Strategic Investment in Analytical Tools
This theme concerns what firms choose to buy and in what sequence—as opposed to the connectivity “plumbing” covered under infrastructure. Participants emphasized aligning the tool portfolio with clear resilience use cases and with user-adoption realities (time-to-value and ease of use), not merely with technical novelty. As one manager put it,
“Just having the data is not enough; we needed to invest in software that helps make sense of it”.
Another respondent stressed fit and learnability over customization as follows:
“The challenge was not just buying the tool, but choosing the right one that our team could learn and use effectively without too much customization”.
Analytically, the following three tool archetypes dominated adoption trajectories: (i) BI/control-tower dashboards and alerting for internal and cross-tier visibility; (ii) forecasting/risk models (statistical/ML) for early warning; and (iii) prescriptive tools (optimization/simulation) embedded in S&OP and exception management. Across cases, the most common sequence was BI/alerts → forecasting/risk scoring → prescriptive planning, with off-the-shelf connectors to ERP/MES/WMS used where possible to shorten time-to-value.
In our process account, the payoffs from this sequencing arise when specific adoption gates and generative mechanisms are deliberately targeted. During data plumbing and descriptive monitoring (G1–G2), BI/control-tower investments that standardize metric definitions and reference data instantiate Comparability, reducing reconciliation debates and shortening Time-to-Detect. As dashboards begin to display source-to-report provenance (lineage fields and drill-through to upstream systems), they also build Explainability at G2, which accelerates root-cause isolation and lowers Time-to-Recover. When firms add forecasting and risk scoring at G3, tools create value only if alerts are bound to named owners and role-based decision rights; that coupling enables Authorization, turning notifications into authorized actions within service-level windows and thus compressing Time-to-Decide. Finally, prescriptive tooling delivers its intended benefit when integrated with operational systems (e.g., automatically opening work orders, re-sequencing jobs, or initiating carrier switches through ERP/MES/WMS); this closes the loop as Executability at G4 and reduces Time-to-Reconfigure. These timer gains persist when stewardship workflows accompany the tool rollout (checks for timeliness, accuracy, and completeness), sustaining Fidelity so performance does not decay post-pilot.
A contrasting pattern clarifies the boundary condition. Several firms reported “tool-first” purchases that lacked standardized metrics in BI, had opaque forecasting pipelines with no lineage, and produced alerts without owners or access to act—leading to dashboards for information only and notification without action. In practice, users reverted to spreadsheets or waited for ad hoc approvals; recommendations required manual re-keying into ERP/MES, and the promised benefits of prescriptive modules did not materialize. The observable effect was longer reconciliation cycles (eroding Time-to-Detect), decision delays while authority was clarified (stretching Time-to-Decide), and slow translation of recommendations into system changes (raising Time-to-Reconfigure) despite significant tool spend. Put differently, strategic investment in tools enables resilience when tools are selected and sequenced to activate the right mechanism at the right gate, with minimal customization, clear ownership, and native connectors that make action executable within the operating systems where work is actually done.
4.1.5. Clear Objectives and Outputs for BDA Adoption
Interviewees emphasized that BDA adoption advanced fastest when teams began with a question-first approach—precisely defining the decision problems, expected outputs, and success metrics—rather than starting from available tools or data. As one supply chain manager stressed,
“You need to have clarity—what data you want, why you need it, and what exactly you expect it to deliver in terms of insights”.
Another respondent underscored sequencing and fit as follows:
“You don’t start with the tools. You start with the question you want answered. Then you find the data and tool that fits”.
This clarity facilitated cross-functional alignment (who owns which decision), streamlined execution (fewer scope changes), and reduced dashboard proliferation.
In our process account, goal clarity activates specific mechanisms at distinct adoption gates. During data plumbing and descriptive monitoring (G1–G2), explicit KPI definitions, thresholds, and early-warning targets create Comparability; numbers mean the same thing across sites and functions, reconciliation debates fall, and Time-to-Detect shortens. When output specifications also require source-to-report drill-through, teams normalize Explainability at G2; provenance becomes part of routine reviews, enabling quicker root-cause isolation and lowering Time-to-Recover. Crucially, when objectives are framed as decisions with owners and SLAs rather than as “insights”, teams establish Authorization at G3 as follows: alerts are routed to named roles with bounded action rights, so the alert-to-action interval compresses and Time-to-Decide improves. Finally, objectives that are expressed as executable options (e.g., re-routing rules, re-sequencing logic, and BOM substitution policies) and mapped to ERP/MES/WMS transactions enable Executability at G4, which reduces Time-to-Reconfigure. Where this discipline is sustained through KPI stewardship (periodic recalibration, exception handling, and post-event learning cycles), teams preserve Fidelity so timer gains do not erode after pilots.
Deviant cases illustrated the boundary condition. Vague or shifting objectives yielded “vanity dashboards” and tool sprawl: metrics were added without shared definitions, forecasts lacked acceptable-use thresholds or SLAs, and alerts circulated without a designated actor. In practice, teams debated numbers before acting (extending Time-to-Detect), waited for ad hoc approvals (stretching Time-to-Decide), and manually re-keyed recommended actions into operational systems (raising Time-to-Reconfigure). Thus, clear objectives and outputs function as enablers when they translate questions into standardized metrics, clearly assigned decision rights, and executable playbooks; they act as a barrier when clarity is absent, leaving analytical insight unconverted into timely, coordinated action across the resilience lifecycle.
4.1.6. Top Management Support
Participants consistently depicted senior leadership as the accelerator that moves BDA adoption from isolated pilots to organization-wide practice. Visible sponsorship aligned analytics with corporate priorities, unlocked budgets and headcount, and reduced cross-functional friction. As a supply chain director observed,
“Our analytics journey really took off once the CEO started asking data-driven questions. It sent a clear message to all departments that BDA was not optional—it was a priority”.
Leadership also de-risked experimentation by funding staged pilots and allowing iteration as follows:
“Our leadership allowed us to test analytics solutions in one plant before scaling. That helped us learn without pressure. Without that support, we wouldn’t have tried anything new”.
In our process account, top management shapes which generative mechanisms are activated at each adoption gate. During data plumbing and descriptive monitoring (G1–G2), executives who mandate common KPI definitions, harmonized identifiers, and routine dashboard reviews create Comparability as follows: numbers mean the same thing across sites, reconciliation debates shrink, and Time-to-Detect shortens. When leadership requires source-to-report drill-through and accepts scrutiny of the numbers in business reviews, Explainability becomes normal practice at G2, enabling faster root-cause isolation and lowering Time-to-Recover. Critically, resilience gains hinge on action rights; senior leaders who formalize who acts/when/how on specific alerts and delegate bounded authority at predictive alerting (G3) establish Authorization, converting notifications into authorized actions within service windows and compressing Time-to-Decide. Finally, executives who prioritize wiring recommendations into ERP/MES/WMS—funding native connectors, playbooks, and guardrails—complete the loop as Executability at G4, reducing Time-to-Reconfigure. These effects are sustained when leadership backs stewardship roles, cadence reviews, and blameless post-event learning, preserving Fidelity so earlier timer gains do not decay.
A contrasting pattern clarifies the boundary condition. Several interviewees described “symbolic” sponsorship—rhetoric without aligned budgets, no decision-rights charter, shifting priorities, or delays in system integration. In those settings, metrics were disputed across functions (eroding Time-to-Detect), lineage was rarely examined, alerts circulated without a designated actor (stretching Time-to-Decide), and recommended actions required manual re-keying (raising Time-to-Reconfigure). Teams eventually reverted to manual routines—analytics installed but not used. In short, top management support functions as an enabler when it pairs messaging with concrete mandates (standards and lineage), delegated action rights (ownership and access), and funded system hooks; it becomes a barrier when those elements are absent, leaving analytical insight unconverted into timely, coordinated action across the resilience lifecycle.
4.1.7. Data Governance and Quality Management
Interviewees characterized data governance and quality management as a strategic resource for BDA adoption, centered on five elements—ownership, standards, stewardship, access, and lineage. First, ownership clarifies decision rights over data domains (e.g., item master, supplier, order, production events) and assigns accountability for fitness-for-use. Second, standards (schemas, code lists, master-data definitions, and timestamp conventions) make data interoperable across ERP/MES/WMS/TMS and partner interfaces so analytics outputs are comparable and trusted. Third, stewardship designates custodians who monitor quality dimensions (accuracy, completeness, timeliness, and consistency) against SLAs and who resolve exceptions through agreed workflows. As one director noted,
“We created a cross-functional data governance team to define data standards and ensure quality across our systems. That investment paid off, especially when we needed real-time insights during COVID-related disruptions”.
Fourth, access policies (role-based permissions, least-privilege, data contracts, and audit logs) ensure the right people—and partners—see the right data at the right time without compromising security. Fifth, lineage (source-to-report traceability of fields and transformations) underpins explainability, recall readiness, and regulatory compliance, particularly when models are embedded in operations. A governance lead summarized the risk as follows:
“With multiple vendors and digital partners, we can’t afford loose ends. We need to know how our data is managed, secured, and used—otherwise, BDA becomes a liability instead of an asset”.
In our process account, these governance choices activate generative mechanisms at specific adoption gates. During data plumbing and descriptive monitoring (G1–G2), shared identifiers, definitions, and timestamp conventions create Comparability: numbers align across systems and sites, reconciliation effort falls, and Time-to-Detect shortens. As dashboards expose source-to-report drill-through and retain transformations, Explainability is established at G2 through lineage, enabling faster root-cause isolation, targeted quarantines, and compliant recall reporting—lowering Time-to-Recover. When ownership is coupled with role-based access and explicit alert SLAs at predictive alerting (G3), organizations achieve Authorization; notifications are routed to named roles with bounded rights, so action proceeds without ad hoc escalation and Time-to-Decide compresses. Stewardship routines (exception queues, timeliness/accuracy checks, and remediation workflows) preserve Fidelity across G2–G3, preventing the post-pilot drift that often erodes earlier timer gains. Finally, governance of access and data contracts with partners conditions Executability at prescriptive decisioning (G4) as follows: when upstream ETA, capacity, and quality signals are contractually available and mapped to operational transactions (ERP/MES/WMS), recommendations translate into executable system actions, reducing Time-to-Reconfigure.
Deviant cases clarified the boundary conditions. Where ownership was ambiguous, standards uneven, stewards unappointed, or access informal, teams reported cross-plant metric disputes, distrust of dashboards, and “notification without action” as alerts circulated without a designated actor. Recommended changes were re-keyed manually into operational systems, partner data feeds were delayed or incomplete, and incident reviews lacked provenance—patterns that lengthened Time-to-Detect, stretched Time-to-Decide, and slowed Time-to-Reconfigure and Time-to-Recover despite adequate tooling. In short, when ownership, standards, stewardship, access, and lineage are explicitly defined and enforced, data volume becomes decision-grade information that teams can act on at speed; when these governance elements are ad hoc, adoption stalls and resilience gains remain modest.
4.1.8. External Collaboration and Information-Sharing Culture
Participants stressed that BDA reaches its resilience potential when information flows extend beyond the focal firm to suppliers, logistics providers, and distributors. External links (APIs/EDI, partner portals, and shared dashboards) enabled the joint monitoring of inventories, ETAs, and risk signals, which in turn supported earlier warnings and coordinated action. As one supply chain manager noted,
“We built an API-based link with our key suppliers and logistics companies… That’s the kind of visibility BDA makes possible—when everyone’s connected”.
Internally, data democratization was also seen as critical so that all functions could view and interpret the same signals:
“If only the data team has access and others can’t view or understand it, then what’s the point of calling it a data-driven system?”
Interviewees described practical enablers such as shared control-tower views, common alert definitions, and “data contracts” with partners (fields, refresh rates, and escalation paths), which reduced reconciliation work and made analytics outputs actionable across tiers. In our process account, these arrangements activate generative mechanisms at specific adoption gates. During data plumbing and descriptive monitoring (G1–G2), partner agreements on identifiers, units, and event timestamps establish Comparability across firms as follows: signals align, cross-company reconciliation drops, and Time-to-Detect shortens. When shared dashboards include provenance (e.g., drill-through to carrier events and supplier confirmations) and retain cross-firm transformation trails, inter-organizational Explainability emerges at G2, which accelerates joint root-cause isolation and lowers Time-to-Recover. Critically, collaboration yields operational benefit only when alerts are bound to clear inter-firm decision rights—who acts, within what window, and with what bounded authority. Formalized escalation paths and role-based access across company boundaries enable Authorization at G3, converting notifications into authorized actions without waiting for senior sign-offs and compressing Time-to-Decide. Finally, where APIs/EDI are configured to open or update transactions across systems (e.g., carrier rebooking, priority allocation, and drop-ship releases), recommendations become Executable at G4 across the network, reducing Time-to-Reconfigure. These effects are sustained when partner data contracts include quality SLAs and exception workflows, reinforcing Fidelity so performance does not erode after pilot collaborations.
The theme’s boundary conditions were clear. Several firms reported that collaboration stalled when partners were reluctant to share (IP/privacy concerns), interfaces were incompatible, or refresh cadences were too slow for operational use. In such deviant cases, teams reverted to email and spreadsheets; metrics were reconciled ad hoc, alerts circulated without a designated external actor, and recommended actions required manual re-keying—patterns that lengthened Time-to-Detect, stretched Time-to-Decide, and slowed Time-to-Reconfigure despite robust internal tooling. Where collaboration did function, the benefit–stage linkage was consistent, as follows: for prediction, cross-tier feeds (supplier capacity updates, in-transit milestones) improved early-warning lead times; for response, synchronized alerts and delegated rights supported rapid re-routing, prioritized allocations, and plan re-sequencing across firms; and for recovery, shared traceability and post-event logs accelerated joint root-cause analysis and backlog clearance. Overall, external collaboration and information sharing acted as the connective tissue that turns firm-level analytics into network-level resilience, provided that access, refresh cadence, and accountability are formalized with partners.
4.1.9. Analytical Capability and BDA Maturity
Participants described a clear progression from “analytics available” to “analytics in use”, where models, alerts, and scenarios are embedded into day-to-day planning and exception management. The emphasis was on operational embedding—regular cadence reviews, alert-to-action rules, and scenario playbooks—rather than on tools alone. As one manager noted,
“What most companies do when they purchase analytics software is believe that that is the answer. However, unless you are familiar with how to put the correct questions and read the results, the software are not fully used”.
Firms that matured beyond descriptive reporting consistently linked analytics outputs to who acts, when, and how; an executive explained,
“We moved beyond reporting. Our systems now trigger automated alerts and scenario simulations. It’s not just about dashboards—it’s about integrating analytics into how we respond and plan”.
In our process account, maturity reflects which mechanisms are reliably activated at each adoption gate and kept in tune over time. During data plumbing and descriptive monitoring (G1–G2), cadence reviews that standardize KPI definitions, tune thresholds, and retire stale metrics sustain Comparability and make exception signals cleaner, cutting reconciliation debates and shortening Time-to-Detect. As teams incorporate source-to-report drill-through into those reviews, Explainability remains visible at G2, supporting faster root-cause isolation and lowering Time-to-Recover. Maturity becomes most evident at predictive alerting (G3), where alerts are routinely bound to named owners, service-level targets, and playbooks—hallmarks of Authorization—so decisions are taken within role-based guardrails rather than escalated ad hoc, compressing Time-to-Decide. Finally, prescriptive planning is considered “mature” when recommended options (re-sequencing, re-routing, prioritized allocations, and BOM substitutions) are executed through the operating systems (ERP/MES/WMS) with minimal manual re-keying—evidence of Executability at G4 that reduces Time-to-Reconfigure. These gains persist only where teams practice Fidelity routines across G2–G3—monitoring timeliness, accuracy, and completeness; recalibrating thresholds and models; and clearing exception queues—so that signal quality and actionability do not decay after pilots.
Respondents pointed to three hallmarks of this embedded maturity. First, cadenced use: models and dashboards are reviewed on fixed rhythms (e.g., daily control-tower huddles; weekly S&OP), with thresholds adjusted from lived disruptions to reduce false alarms and missed detections. Second, alert-to-action coupling: each alert routes to a named owner with a target response time (e.g., re-sequencing within X hours; re-routing within Y hours) and a bounded set of executable options, turning notifications into authorized actions. Third, closed-loop learning: post-event reviews update definitions, thresholds, and scenarios, preventing recurrence and improving model performance while reinforcing stewardship responsibilities.
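The alert-to-action coupling described in these hallmarks can be sketched as a routing rule that binds each alert type to a named owner role, a target response window (SLA), and a bounded set of executable options. This is a hypothetical illustration—all alert types, role names, SLAs, and options below are invented for exposition, not taken from the cases:

```python
# Hypothetical alert-to-action playbook: each alert type is bound to a named
# owner role, an SLA window, and a bounded option set -- the Authorization (G3)
# and Executability (G4) coupling described in the findings.
PLAYBOOK = {
    "late_inbound_shipment": {
        "owner_role": "logistics_planner",
        "sla_hours": 4,
        "options": ["re-route_carrier", "expedite", "rebalance_inventory"],
    },
    "line_stoppage_risk": {
        "owner_role": "line_supervisor",
        "sla_hours": 2,
        "options": ["re-sequence_jobs", "substitute_BOM"],
    },
}

def route_alert(alert_type, chosen_action):
    """Return an authorized work item, or flag the deviant patterns:
    'notification without action' (no owner bound to the alert type) and
    ad hoc escalation (requested action outside the bounded option set)."""
    rule = PLAYBOOK.get(alert_type)
    if rule is None:
        # Deviant case: alert circulates without a designated actor
        return {"status": "notification_without_action"}
    if chosen_action not in rule["options"]:
        # Action exceeds delegated authority; requires senior sign-off
        return {"status": "escalate", "owner": rule["owner_role"]}
    return {
        "status": "authorized",
        "owner": rule["owner_role"],
        "deadline_hours": rule["sla_hours"],
        "action": chosen_action,  # would open a ticket in ERP/MES/WMS
    }
```

The design choice mirrors the interview accounts: when the routing table is populated (ownership plus access), Time-to-Decide is bounded by the SLA; when it is empty or bypassed, the deviant “notification without action” stall reappears.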
Deviant cases clarified the boundary conditions. Where alerts lacked owners, thresholds were untuned, lineage was rarely inspected, or meetings focused on lagging KPIs rather than exceptions, teams reverted to spreadsheets and ad hoc coordination. In practice, numbers were debated before action (eroding Time-to-Detect), decisions waited for senior approval (stretching Time-to-Decide), and recommended changes were re-keyed manually into ERP/MES (raising Time-to-Reconfigure), so recovery trajectories lengthened despite adequate tooling. Overall, analytical capability and BDA maturity were described as an accumulated resource, built through repeated use, cross-functional ownership, and continuous tuning—turning analytics from static reports into a dependable operating mechanism during disruption.
4.2. RQ3: Benefits Realized After BDA Adoption
Based on the interview findings, a number of benefits were highlighted from the adoption of BDA technology for a resilient supply chain in the manufacturing industry. Reported benefits are linked to the underlying mechanism(s) and the resilience timer(s) improved.
4.2.1. Internal End-to-End Supply Chain Visibility and Real-Time Monitoring
Interviewees consistently described end-to-end visibility and real-time monitoring as the earliest, most tangible payoff of BDA adoption. Rather than a generic “single view”, respondents emphasized design features that make visibility actionable, as follows: harmonized identifiers and timestamps across ERP/MES/WMS/TMS so events align; near–real-time refresh cadences that reduce signal lag; and exception-driven dashboards with clear thresholds and named owners. In our process account, these design choices activate specific generative mechanisms at early adoption gates. During data plumbing and descriptive monitoring (G1–G2), shared keys, definitions, and disciplined thresholds establish Comparability, cutting cross-system reconciliation and shortening Time-to-Detect. When dashboards also provide drill-through to sources and retain transformation trails, Explainability emerges at G2, enabling faster isolation of the reason a KPI shifted and reducing Time-to-Recover. Where alerts are routed to named roles with bounded action rights, teams achieve Authorization at predictive alerting (G3), so exception handling proceeds without ad hoc escalation and Time-to-Decide compresses.
These configurations enabled timely, data-driven insight into inventory positions, work-in-process, supplier performance, and logistics flows—supporting proactive decisions and faster responses to volatility. As one participant noted,
“We use a shared analytics platform with suppliers and customers in real time. If a shipment gets stuck upstream, our local line is alerted right away. That kind of transparency didn’t exist before”.
Operationally, visibility improved forecast monitoring and shortage anticipation (prediction), accelerated re-sequencing, re-routing, and dynamic inventory rebalancing when exceptions occurred (response), and supported quicker backlog clearance and ramp-up planning by pinpointing constraints and recovery trajectories (recovery). Typical indicators cited by participants included the following: percent of flows on real-time feeds; alert lead time before potential stockouts; reduction in manual reconciliations; schedule-adherence improvements following exception handling; and cycle-time from alert to action.
A minority cautioned that visibility fails to translate into resilience when engineered as a reporting layer rather than an operating mechanism. Deviant cases described slow-refresh or stale data, inconsistent keys across systems, and dashboards without explicit ownership—conditions that produced reconciliation debates and notification without action. In practice, teams reverted to email and spreadsheets, lengthening Time-to-Detect while numbers were reconciled and stretching Time-to-Decide while authority was clarified. Where data standards were enforced, thresholds were tuned, response owners and SLAs were explicit, and signals were reviewed on fixed cadences (e.g., daily control-tower huddles), visibility functioned as a dependable trigger for coordinated action rather than a static display.
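The Authorization pattern described above—an exception threshold bound to a named owner with a response SLA—can be sketched schematically. This is an illustrative sketch only; the rule, metric, role, and SLA values are hypothetical, not drawn from the interview data:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlertRule:
    """An exception-driven alert bound to a named owner with a response SLA."""
    metric: str
    threshold: float
    owner_role: str            # named role with bounded action rights (Authorization)
    response_sla_minutes: int  # window within which the owner must act

def evaluate(rule: AlertRule, observed: float) -> Optional[dict]:
    """Route a threshold breach to its named owner; below threshold, stay silent."""
    if observed < rule.threshold:
        return None
    return {
        "metric": rule.metric,
        "observed": observed,
        "route_to": rule.owner_role,
        "act_within_minutes": rule.response_sla_minutes,
    }

# Hypothetical rule: flag shipments whose ETA slips by 24 hours or more.
eta_rule = AlertRule("eta_slip_hours", 24.0, "supply_planner", 120)
alert = evaluate(eta_rule, observed=36.0)
```

The design point is that the alert carries its owner and action window with it: a dashboard without the `owner_role` and `response_sla_minutes` fields reproduces the deviant "notification without action" pattern described above.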
4.2.2. Predictive Risk Sensing, Early Warning, and Contingency Activation
Interviewees described a decisive shift from reactive firefighting to anticipatory control once BDA enabled continuous risk sensing and early warning tied to pre-armed contingency activation. By fusing real-time sensor streams, partner feeds (ASNs/ETAs), and ERP events with historical patterns, systems flagged material deviations—temperature excursions, idle-time spikes, late shipments, and unusual demand—before they translated into service failures. As one operations manager noted,
“We installed IoT sensors across our production floor and logistics vehicles. The moment there’s a deviation—like temperature exceeding threshold in storage or abnormal idle time in equipment—we receive alerts through our BDA dashboard. This helps us act before small issues become crises”.
A supply lead highlighted upstream actionability,
“One of our suppliers in China experienced flooding, and because our system flagged irregular dispatch patterns early, we preemptively switched to our alternate supplier before delays affected our production”.
In our process account, early warning works when specific mechanisms are purposefully activated across the adoption gates. During data plumbing and descriptive monitoring (G1–G2), harmonized identifiers and event timestamps across ERP/MES/WMS/TMS and partner interfaces create Comparability: fused signals align, reconciliation effort falls, and Time-to-Detect shortens as anomalies are surfaced earlier and with less debate. As dashboards expose source-to-report drill-through (e.g., which carrier milestone or sensor triggered the alert), Explainability at G2 allows teams to quickly verify whether the deviation is real or an artifact, accelerating root-cause isolation and lowering Time-to-Recover. At predictive alerting (G3), early warning converts from notification to action only when alerts are bound to named owners with bounded decision rights and response windows—Authorization—so the alert-to-action interval compresses and Time-to-Decide improves. Finally, contingencies deliver consistent benefits when recommended options (re-sequencing, re-routing, buffer releases, supplier switches) are executed through ERP/MES/WMS connectors—Executability at G4—reducing Time-to-Reconfigure. These gains persist when teams practice Fidelity routines (threshold reviews, drift monitoring, exception clearing), keeping precision/recall stable as operating conditions change.
Practically, firms emphasized the following three building blocks: (i) calibrated risk sensing—composite scores and anomaly detectors with tuned thresholds that prioritize consequential signals; (ii) signal fusion—combining sensor, logistics, and order context to raise alert precision and suppress noise; and (iii) contingency playbooks—each alert type mapped to an owner, a response window, and executable options with service-level expectations. These components “moved decisions left” in time, reduced emergency expedites, and limited disruption spillovers. The merged benefit primarily supports prediction by increasing warning lead time and accuracy (ranked risk queues, fewer false alarms); it improves response by triggering pre-approved actions within hours rather than days (prioritized allocations, mode changes, parameter overrides); and it accelerates recovery by preventing backlog accumulation and enabling faster stabilization after contained incidents.
Participants tracked outcomes such as warning lead time (hours/days before breach), true-positive rate of alerts, alert-to-action cycle time, percentage of alerts acted within SLA, reductions in emergency expedites and avoided stockouts/line stoppages, and time-to-recover on exposed lines/lanes. Reported examples included earlier supplier escalations ahead of weather events, timely temperature interventions in cold chains, and targeted replanning with fewer downstream disruptions.
Boundary conditions were also clear. Benefits diminished when thresholds were untuned (alert fatigue), fused feeds refreshed too slowly to be operationally useful, or alerts lacked clear ownership—producing notification without action. Several interviewees noted model drift as patterns changed, requiring periodic recalibration and stewardship attention. Where teams instituted scheduled threshold reviews, maintained fused signals, assigned accountable owners with SLAs, and linked alerts to systemized actions in ERP/MES/WMS, early warning reliably translated into proactive, coordinated response.
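The three building blocks above—calibrated risk sensing, signal fusion, and contingency playbooks—can be expressed as a compact sketch. All weights, signal names, and playbook entries below are hypothetical illustrations, not values reported by participants:

```python
def composite_risk_score(signals: dict, weights: dict) -> float:
    """Fuse normalized risk signals (each 0..1) into a weighted composite score,
    illustrating signal fusion with tuned weights."""
    total_weight = sum(weights.values())
    return sum(weights[name] * signals.get(name, 0.0) for name in weights) / total_weight

# Illustrative playbook: alert type -> (named owner, response window in hours,
# pre-approved executable option).
PLAYBOOK = {
    "late_dispatch": ("supply_planner", 4, "switch_to_alternate_supplier"),
    "temp_excursion": ("cold_chain_lead", 1, "reroute_to_nearest_depot"),
}

score = composite_risk_score(
    signals={"dispatch_delay": 0.8, "eta_slip": 0.6, "demand_spike": 0.2},
    weights={"dispatch_delay": 0.5, "eta_slip": 0.3, "demand_spike": 0.2},
)
owner, window_hours, action = PLAYBOOK["late_dispatch"]
```

In this framing, the Fidelity routines interviewees described amount to periodically re-tuning the `weights` and the score threshold so precision/recall does not drift as operating conditions change.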
4.2.3. Improved Operational Efficiency and Cost Optimization
Participants described efficiency and cost improvements as a direct consequence of embedding analytics into day-to-day planning and execution. Real-time, granular views across procurement, production, and logistics surfaced waste and delay early, while exception-driven workflows turned those signals into timely actions. As one respondent noted,
“With analytics, we can now visualize every part of our operations—from inbound logistics to final delivery. Idle inventory, delayed suppliers, costly routes—it’s all visible, and we act on it instantly”.
Interviewees highlighted the following concrete levers: automated variance detection (plan vs. actual on throughput, yield, and service), constraint-aware re-sequencing on the shop floor, transport re-routing based on predicted dwell times, and labor reallocation guided by short-interval control dashboards. Predictive maintenance emerged as a prominent sub-theme, especially where sensor data and event logs were used to anticipate failure modes and schedule interventions without disrupting critical orders, as follows:
“We don’t wait for equipment to fail anymore. Analytics alerts us before a breakdown. It’s saved us countless hours and repair costs”.
Inventory optimization (e.g., demand-driven buffers and dynamic reorder points) further reduced carrying costs and obsolescence by aligning stock to forecast accuracy and variability profiles, while network and routing analytics trimmed premium freight and underutilized capacity.
In our process account, these gains appear when specific mechanisms are activated and kept in tune across adoption gates. During data plumbing and descriptive monitoring (G1–G2), standardized identifiers, units, and time bases across ERP/MES/WMS establish Comparability: plan–actual variances are computed consistently across lines and sites, reconciliation debates shrink, and Time-to-Detect shortens for yield/throughput drift and logistics delays. Dashboards that retain drill-through to sources and transformation trails reinforce Explainability at G2, enabling faster isolation of root causes (e.g., a particular machine, shift, lane, or supplier) and lowering Time-to-Recover via targeted remedies rather than blanket actions. At predictive alerting (G3), efficiency benefits materialize when alerts are routed to named owners with financial thresholds (e.g., cost-per-minute, premium-freight caps) and bounded action rights: this establishes Authorization, compressing Time-to-Decide by turning notifications into authorized, economically prioritized actions. Finally, where prescriptive options are executed through ERP/MES/WMS—auto-creating re-sequencing tickets, initiating carrier switches, releasing buffers, or scheduling maintenance—firms achieve Executability at G4, which reduces Time-to-Reconfigure. These effects persist when stewardship routines maintain signal quality and threshold calibration (Fidelity across G2–G3), so false alarms and model drift do not erode earlier timer gains.
Operationally, efficiency and cost optimization supported prediction through early detection of throughput/yield drift and asset health risks; enabled faster response via dynamic re-sequencing, re-routing, labor reallocation, and targeted expedites only where financially justified; and accelerated recovery by identifying shortest backlog-clearance paths (e.g., bottleneck-focused scheduling, overtime placement) and resetting parameters (safety stocks, cycle times) after disruption. Participants referenced cycle-time reductions on constrained lines; schedule-adherence improvements after exception handling; reductions in emergency expedites and premium freight; longer mean time between failures (MTBF) and lower maintenance cost per unit; and improved forecast-to-stock alignment (days of inventory on hand; stockout rate) and total landed cost per order. Reported outcomes included lower scrap/rework, fewer idle assets, improved truck/container utilization, and measurable OPEX savings.
Boundary conditions were also clear. Efficiency gains diminished when alerts lacked financial prioritization (teams chased low-value variances), when recommended actions were not executable in legacy systems, or when maintenance predictions were not integrated with production planning (creating avoidable downtime); poorly tuned reorder logic increased churn and handling costs. Where firms paired analytics with explicit ownership, financial guardrails, and system hooks that made actions executable, improvements were consistent and sustained rather than episodic.
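The financial prioritization of variances that interviewees described—acting on economically material exceptions and ignoring low-value noise—can be sketched as a simple guardrail. The cost figures and field names below are hypothetical illustrations:

```python
def prioritize_variances(variances: list, cost_floor: float) -> list:
    """Rank plan-vs-actual variances by estimated cost impact, dropping
    low-value items below the financial guardrail (cost_floor)."""
    material = [v for v in variances if v["est_cost"] >= cost_floor]
    return sorted(material, key=lambda v: v["est_cost"], reverse=True)

queue = prioritize_variances(
    [
        {"line": "L1", "metric": "yield", "est_cost": 12000},
        {"line": "L2", "metric": "throughput", "est_cost": 300},
        {"line": "L3", "metric": "dwell_time", "est_cost": 4500},
    ],
    cost_floor=1000,
)
```

Without the `cost_floor` guardrail, the L2 variance would enter the queue and teams would "chase low-value variances", the deviant pattern noted above.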
4.2.4. Collaborative Visibility and Supplier Network Integration
Participants emphasized that BDA’s resilience payoffs scale when visibility extends across organizational boundaries to suppliers, logistics providers, and customers. Shared data feeds (APIs/EDI), partner portals, and synchronized dashboards created a common operating picture—forecasts, inventory positions, ETAs, and exception queues—so parties could sense the same signals and act in concert. As one manager explained, “Before analytics, we were reacting to supplier issues as they came. Now, if a shipment is delayed or a forecast changes, the system alerts both sides. We act together, not separately. That has made all the difference in building trust and performance”. [Operations Manager]. A senior executive added,
“We have integrated analytics with some key vendors. They can see our forecasts and stock status, and we can see theirs. During COVID, this transparency helped us receive supplies on time while others struggled. That mutual insight has become our strength”.
Interviewees highlighted practical enablers behind effective collaboration as follows: partner “data contracts” that define fields, refresh cadence, and escalation paths; harmonized identifiers (item, location, shipment) to align events; and role-based shared alerts (e.g., late ASNs, at-risk ETAs, consumption spikes) with named owners on both sides. In our process account, these arrangements activate specific generative mechanisms at distinct adoption gates. During data plumbing and descriptive monitoring (G1–G2), cross-firm alignment on identifiers, units, and event timestamps establishes inter-organizational Comparability: signals line up across companies, reconciliation work drops, and Time-to-Detect shortens as deviations are recognized simultaneously by partners. When shared dashboards include drill-through to carriers, supplier confirmations, and transformation trails, they normalize cross-firm Explainability at G2, enabling faster joint root-cause isolation and lowering Time-to-Recover. Critically, collaboration delivers operational benefit only when shared alerts are tied to clear inter-firm decision rights: who acts, within what window, and with what bounded authority. Formalized escalation paths and role-based access on both sides deliver Authorization at predictive alerting (G3), converting notifications into authorized actions without waiting for senior sign-offs and compressing Time-to-Decide. Finally, when APIs/EDI are configured so that agreed interventions open or update transactions across systems (e.g., split shipments, priority allocations, and carrier rebooking), recommendations become Executable at G4 across the network, reducing Time-to-Reconfigure. These timer gains persist when partner contracts include quality SLAs and exception workflows, reinforcing Fidelity so performance does not erode after initial pilots.
Collaborative visibility strengthened prediction by surfacing upstream capacity constraints and in-transit risks earlier than focal-firm data alone; enabled faster response through joint re-routing, prioritized allocation, and synchronized schedule changes; and accelerated recovery via coordinated ramp-up, shared backlog plans, and aligned parameter resets (e.g., temporary buffers at critical nodes). Participants referenced partner coverage (% of tier-1 suppliers/logistics lanes integrated), ASN–receipt match rates, shared alert resolution time (hours from trigger to joint action), ETA adherence improvement, forecast error at the partner interface, and reduction in concurrent stockouts across sites. Reported outcomes included fewer surprises on inbound materials, lower premium freight, improved OTIF (on-time, in-full), and greater supplier responsiveness during volatility.
Boundary conditions were also clear. Collaboration failed to deliver benefits when partner feeds refreshed too slowly to be operationally useful, when incentives were misaligned (e.g., suppliers penalized for transparency), or when IP/privacy concerns curtailed data scope. In such cases, teams reverted to email and spreadsheets to coordinate exceptions; metrics were reconciled ad hoc, shared alerts lacked a designated external actor, and recommended actions required manual re-keying—patterns that lengthened Time-to-Detect, stretched Time-to-Decide, and slowed Time-to-Reconfigure despite platform availability. Where firms addressed these issues—clarifying data scope and rights, agreeing on refresh SLAs, and establishing shared alert ownership—collaborative visibility functioned as a reliable network-level mechanism for sensing and coordinated action.
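The partner "data contracts" interviewees described can be represented as a small configuration object that fixes fields, refresh cadence, a quality SLA, and an escalation path. This is an illustrative sketch; the partner name, field list, and SLA values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DataContract:
    """Illustrative inter-firm data contract: shared fields, refresh cadence,
    quality SLA, and an escalation path with named roles on both sides."""
    partner: str
    fields: list
    refresh_minutes: int     # feeds slower than this are operationally stale
    quality_sla_pct: float   # e.g., floor on the ASN-receipt match rate
    escalation_path: list    # ordered roles, internal then external

def is_stale(contract: DataContract, minutes_since_refresh: int) -> bool:
    """A feed older than the agreed cadence breaches the contract."""
    return minutes_since_refresh > contract.refresh_minutes

contract = DataContract(
    partner="tier1_supplier_A",
    fields=["item_id", "location_id", "shipment_id", "asn_ts", "eta_ts"],
    refresh_minutes=15,
    quality_sla_pct=98.0,
    escalation_path=["buyer_planner", "supplier_account_manager"],
)
```

Making the refresh cadence an explicit, checkable contract term operationalizes the boundary condition above: a feed that technically exists but refreshes too slowly is flagged as stale rather than silently trusted.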
4.2.5. Data-Driven Innovation and Strategic Agility
Participants reported that BDA adoption expanded the organization’s ability to adjust and innovate at short notice—moving from retrospective reporting to rapid, evidence-based reconfiguration of plans, networks, and processes. Real-time dashboards, predictive signals, and scenario simulations enabled quick shifts in production mix, purchase priorities, and transport modes when market or supply conditions changed. As one operations manager explained,
“We are no longer relying only on past performance reports; now we use real-time dashboards and predictive models. It helps us change production plans and inventory strategies quickly if we sense something changing in demand or logistics. That agility was never possible before BDA”.
In our process account, strategic agility is realized when the mechanisms that support sensing, deciding, and executing are activated coherently across adoption gates. During data plumbing and descriptive monitoring (G1–G2), harmonized metrics and disciplined thresholds sustain Comparability and surface meaningful shifts (demand, capacity, transit risk) earlier, shortening Time-to-Detect. Dashboards that retain drill-through to sources and transformation trails reinforce Explainability at G2, allowing teams to distinguish genuine change from data artifacts and to target redesign efforts (e.g., setup-time reduction, alternate routings), which lowers Time-to-Recover. At predictive alerting (G3), agility depends on decision rights: when alerts and scenario recommendations are routed to named roles with bounded authority and pre-approved playbooks, Authorization converts detection into timely, coordinated action and compresses Time-to-Decide. Finally, agility translates into network and process changes only when recommended options are executed through ERP/MES/WMS—re-sequencing jobs, switching carriers or modes, triggering alternate sourcing, or releasing buffers—evidence of Executability at G4 that reduces Time-to-Reconfigure. These gains persist where teams practice Fidelity routines (cadence reviews, threshold/model recalibration, exception clearing) so that signal quality and playbooks stay aligned with evolving conditions.
Beyond fast adjustments, interviewees linked analytics to a broader innovation posture. Continuous analysis of sensor data, supplier feeds, and customer demand patterns surfaced opportunities for process redesign (e.g., bottleneck elimination, setup-time reduction), new sourcing arrangements (e.g., qualified alternates, dual sourcing), and—in a few cases—changes in service models (e.g., vendor-managed inventory for critical inputs). Collaboration with partners catalyzed joint problem-solving as follows:
“Some of our best innovations came when we shared our analytics reports with our key suppliers. They were surprised to see how their delays impacted our lead time. Now, we work together using shared dashboards”.
A contrasting account cautioned that “agility” can remain aspirational without the requisite decision rights:
“We saw the signals, but approvals for supplier switches or expediting took days. By then the window had passed”.
This deviant pattern underscores that data-driven agility hinges on explicit ownership, pre-approved playbooks, and system hooks; without these, teams debate metrics and wait for ad hoc authorization, lengthening Time-to-Detect and Time-to-Decide and blunting the payoff of simulations. Where objectives, rights, and hooks are in place, strategic agility supports prediction by continuously scanning and triaging signals worth acting on; improves response by enabling rapid re-sequencing, prioritized allocations, mode switches, and parameter overrides; and accelerates recovery by testing multiple rebound paths (e.g., backlog-clearing scenarios, alternate routings) and selecting the least-cost, fastest option based on simulated outcomes.
4.2.6. Competitive Advantage Through Analytics-Driven Positioning
Participants emphasized that BDA adoption strengthens market position by translating operational resilience into customer-facing reliability, speed, and informed commercial choices. Firms reported moving from reactive firefighting to proactive, risk-aware planning that supports more accurate promises to customers, prioritized allocations for high-value orders, and smarter sourcing under volatility. As one executive put it,
“In today’s volatile market, data is a weapon. If you’re not using analytics to make faster and smarter decisions, you’ll be left behind. We’ve seen companies collapse during disruptions simply because they couldn’t predict or react fast enough”.
Another manager described how predictive tools changed commercial posture as follows:
“We were previously reactive, waiting for problems to occur before fixing them. Now we are ahead of the game. With predictive analytics, we can plan better, source more competitively, and adjust pricing based on real-time demand”.
In our process account, advantage emerges when the same mechanisms that drive resilience are coherently activated across adoption gates and made visible to customer-facing processes. During data plumbing and descriptive monitoring (G1–G2), harmonized product, location, and capacity metrics build Comparability, reducing reconciliation cycles and shortening Time-to-Detect for risks that threaten promises; drill-through to sources sustains Explainability at G2, enabling quick justification of revised ETAs and credible recovery plans, which protects goodwill and lowers Time-to-Recover. At predictive alerting (G3), order- and account-linked alerts routed to named commercial and operations owners establish Authorization, compressing Time-to-Decide for prioritized allocations, commit adjustments, and preemptive sourcing. Finally, when prescriptive options (e.g., mode switches, split shipments, alternate sources) execute through ERP/MES/WMS and order-management systems, Executability at G4 reduces Time-to-Reconfigure and makes risk-adjusted commitments deliverable at speed. These gains are sustained by Fidelity routines (partner/data SLAs, threshold and model recalibration), so promise accuracy and service levels do not decay after pilots.
Mechanistically, respondents pointed to three channels through which analytics-driven resilience converts to advantage. First, reliability as a differentiator: earlier risk sensing and coordinated response sustained OTIF during disruptions, improving scorecards with key accounts and enabling stickier relationships. Second, speed and accuracy of commitment: near–real-time visibility and risk-adjusted capacity views enabled faster, more reliable order confirmations and lead-time quotes, shortening quote-to-commit cycles and reducing penalty exposure. Third, smart market moves: merged internal signals (ERP/CRM/SCM) with external intelligence (supplier risk, logistics constraints, demand shifts) supported targeted pricing updates, selective promotions, and rapid entry into defendable niches when competitors faltered.
Competitive advantage accumulated across the resilience cycle as follows: prediction improved promise accuracy and preemptive sourcing decisions; response sustained service levels via prioritized allocations, mode switches, and schedule changes executed within hours rather than days; and recovery shortened TTR and stabilized backlog, protecting customer satisfaction and revenue run-rate. Where decision rights were unclear, connectors to execution systems were missing, or data discipline lapsed, firms reported slower reconciliation, delayed authorizations, and manual re-keying—patterns that lengthened Time-to-Detect, Time-to-Decide, and Time-to-Reconfigure, blunting commercial gains despite analytics availability.
4.2.7. Enhanced Product Traceability and Compliance Management
Interviewees highlighted product genealogy and compliance assurance as a distinct benefit of BDA adoption. By aggregating and linking event data across the chain—raw-material ingress, transformation steps, in-process tests, storage moves, and outbound shipments—teams could trace specific lots/serials end-to-end and act with precision. As one quality leader explained,
“With BDA tools in place, we can now trace every part of a component back to its source. If there’s a defect reported by a customer, we no longer have to shut down an entire batch—we can identify the exact line and even the machine settings that produced the part”.
In regulated segments, this capability was described as essential rather than optional:
“Traceability is not optional. BDA gives us the ability to demonstrate full visibility across our supply chain—from raw material to finished product delivery. That’s a huge asset for both compliance and resilience”.
In our process account, traceability benefits materialize when specific mechanisms are activated across the adoption gates. During data plumbing and descriptive monitoring (G1–G2), consistent identifiers for lots/serials, materials, work orders, and stations—combined with standardized event schemas and mandatory timestamps—create Comparability; genealogy records align across ERP/MES/WMS and partner interfaces, reconciliation effort falls, and emerging quality drift tied to particular materials, lines, or shifts is detected earlier, shortening Time-to-Detect. As dashboards retain source-to-report drill-through and preserve transformation links (e.g., lot splits/merges, rework routes), Explainability becomes routine at G2: engineers can verify the provenance of any KPI or failure signal, isolate root causes more quickly, and prepare auditable narratives for customers and regulators—reducing Time-to-Recover. At predictive alerting (G3), traceability converts from “visibility” to “action” when alerts are routed to named owners with bounded rights for quarantines, stop-ships, parameter resets, and customer notifications—Authorization—which compresses Time-to-Decide. Finally, when prescriptive actions are issued through ERP/MES/WMS (e.g., auto-create hold/release tickets, targeted recall lists, rework orders, supplier lot blocks), recommendations become Executable at G4, reducing Time-to-Reconfigure. These gains persist when Fidelity routines (stewardship SLAs for timeliness/accuracy/completeness, exception queues for missing scans, and calibration of test thresholds) are in place so the genealogy does not decay post-pilot.
Practically, respondents pointed to data features that made traceability actionable: supplier-lot tagging at ingress, automatic capture at handoff points (scans/sensors), linkage of test/inspection results to specific genealogy nodes, and preserved split/merge logic across work orders. These capabilities supported prediction by surfacing early signals of quality drift; enabled rapid response via targeted quarantines, precise customer notifications, and immediate parameter corrections; and accelerated recovery through faster root-cause analysis, focused rework/replacement, and auditable documentation. Indicative metrics included genealogy coverage (% of SKUs/lots with end-to-end trace), scan completion rates at critical control points, mean time from defect signal to quarantine, proportion of holds that are targeted vs. blanket, and cycle time to compile regulator/customer evidence packs.
Boundary conditions were also visible. Benefits diminished when identifiers were inconsistent across systems or tiers, when scans were skipped at handoff points, or when lineage views were not accessible in routine reviews. In such cases, teams reported blanket holds “just to be safe”, manual spreadsheet tracing, and prolonged debates over which figures were correct—patterns that lengthened Time-to-Detect, stretched Time-to-Decide, and slowed Time-to-Reconfigure and Time-to-Recover despite available tools. Where standards, ownership, and access rights were explicit—and holds/releases, notifications, and rework orders were executed through the operating systems—traceability functioned as a dependable mechanism for both resilience and compliance.
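The targeted-hold capability described above rests on traversing a lot genealogy graph, including split/merge links, from a defective input to every downstream lot it touched. A minimal sketch, with a hypothetical genealogy (lot identifiers are illustrative):

```python
def affected_descendants(genealogy: dict, defect_lot: str) -> set:
    """Walk the lot genealogy graph (parent lot -> child lots, covering splits
    and merges) to find every downstream lot touched by a defective lot."""
    seen, frontier = set(), [defect_lot]
    while frontier:
        lot = frontier.pop()
        for child in genealogy.get(lot, []):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return seen

genealogy = {
    "RAW-001": ["WO-10", "WO-11"],   # raw-material lot split across two work orders
    "WO-10": ["FG-100"],
    "WO-11": ["FG-101", "FG-102"],
    "RAW-002": ["WO-12"],            # unrelated material flow
    "WO-12": ["FG-103"],
}
targets = affected_descendants(genealogy, "RAW-001")
```

The traversal yields only the lots genuinely downstream of the defective material; finished goods from unrelated flows stay untouched, which is precisely the difference between a targeted quarantine and the blanket holds reported in the deviant cases.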
4.2.8. Rapid Scenario Planning and Disruption Simulation
Interviewees described scenario planning and disruption simulation as a pivotal benefit of BDA adoption, shifting decisions from reactive firefighting to pre-armed, evidence-based responses. By fusing historical disruption patterns with in-flight signals (e.g., transit milestones, supplier confirmations, and WIP status), teams constructed “what-if” scenarios—floods, strikes, port closures, supplier outages—and quantified service–cost trade-offs for alternative actions. As one manager explained,
“The real advantage of big data is not just knowing what happened, but asking what will happen. We build scenarios using data from past disruptions—like floods or transport strikes—and simulate their impact. This helps us allocate safety stock, assign alternative suppliers, and even modify production schedules before the crisis hits”.
In our process account, scenario value materializes when the mechanisms that support sensing, deciding, and executing are activated coherently across adoption gates. During data plumbing and descriptive monitoring (G1–G2), harmonized product, location, lane, and capacity definitions sustain Comparability, ensuring that exposure counts (SKUs, sites, and suppliers affected) line up across systems and partners; reconciliation effort drops and Time-to-Detect of emerging risk patterns shortens. As dashboards retain drill-through from simulated outcomes back to the underlying assumptions, parameters, and historical analogs, Explainability at G2 lets teams validate whether a scenario reflects reality or model artifacts, accelerating agreement on the right playbook and lowering Time-to-Recover once enacted. At predictive alerting (G3), simulations convert from analysis to action when they are tied to named owners, response windows, and pre-approved playbooks—Authorization—so that, when thresholds trip, the selected option (e.g., supplier switch, split shipments, mode upgrade) is triggered within the service window, compressing Time-to-Decide. Finally, when those options are executable through ERP/MES/WMS and order-management connectors (auto-create re-sequencing tickets, carrier rebooking, buffer releases, PO pulls to alternates), Executability at G4 reduces Time-to-Reconfigure. These gains persist when Fidelity routines keep parameters fresh (lead times, capacities, yields) and review exception logs after each event to recalibrate models and playbooks.
Practically, scenario planning strengthened prediction by quantifying exposure ahead of time (e.g., SKUs/sites at risk given a port closure), accelerated response by turning alerts into ready-to-run options (re-sequencing, re-routing, alternate sourcing, and temporary buffers), and shortened recovery by simulating backlog-clearance paths and parameter resets to restore target service at least cost. Indicative metrics included scenario generation time (minutes from trigger to viable option set), proportion of alerts with an associated executable scenario, alert-to-action cycle time, service preserved on at-risk orders, avoided premium freight, and deltas in Time-to-Reconfigure and Time-to-Recover relative to prior events.
Boundary conditions were also evident. Scenario planning underdelivered when exposure data lacked standardized keys (scenario counts disputed across functions), when parameters were stale (obsolete lead times/capacities yielding infeasible options), or when decision rights and system hooks were not in place (options chosen in meetings but re-keyed manually later). In such cases, teams debated assumptions first (lengthening Time-to-Detect), waited for approvals (stretching Time-to-Decide), and executed changes offline (raising Time-to-Reconfigure). Where firms enforced data standards, kept parameters current, assigned accountable owners with SLAs, and wired scenarios to executable transactions, simulations functioned as a reliable, rapid bridge from early warning to coordinated action.
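The core selection logic behind scenario evaluation—keep options that preserve the target service level, then pick the least-cost one—can be sketched compactly. The option names, service percentages, and costs below are hypothetical illustrations, not figures from the interviews:

```python
def select_option(options: list, service_floor: float):
    """From simulated what-if options, keep those that preserve the target
    service level and return the cheapest; None if nothing qualifies."""
    feasible = [o for o in options if o["service_pct"] >= service_floor]
    return min(feasible, key=lambda o: o["cost"]) if feasible else None

best = select_option(
    [
        {"name": "switch_supplier", "service_pct": 97.0, "cost": 42000},
        {"name": "air_freight", "service_pct": 99.0, "cost": 65000},
        {"name": "do_nothing", "service_pct": 88.0, "cost": 0},
    ],
    service_floor=95.0,
)
```

Note the dependence on parameter freshness stressed above: if stale lead times inflate `service_pct` for an option that is actually infeasible, the cheapest "feasible" choice is wrong, which is why Fidelity routines recalibrate these inputs after each event.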
4.2.9. Advanced Supplier Risk Monitoring and Diversification Strategies
Participants reported that BDA materially improved supplier risk management by combining internal performance histories with external signals to surface vulnerabilities earlier and broaden feasible response options. Dashboards and risk “control towers” tracked on-time delivery, lead-time variability, defect rates, capacity updates, and ASN/ETA adherence alongside outside indicators (e.g., financial stress, news/event feeds, and logistics disruptions), creating a ranked queue of suppliers, sites, and parts at risk. As one procurement lead noted,
“Before adopting analytics, we were largely reactive… Now we analyze delivery lead times, quality scores, and even financial risk indicators… we can act fast to avoid delays”.
A second manager described preemptive action based on multi-tier insights as follows:
“We used analytics to evaluate the resilience of our Tier 2 suppliers… we preemptively increased buffer stock and switched to a local supplier… That decision saved us millions”.
In our process account, supplier risk monitoring creates value when specific mechanisms are activated across the adoption gates. During data plumbing and descriptive monitoring (G1–G2), harmonized supplier, part, site, and lane identifiers—plus disciplined timestamp conventions—establish Comparability so internal and external signals align; reconciliation work drops and Time-to-Detect of degradation (e.g., rising lead-time variance, missed ASNs) shortens. As dashboards retain drill-through to sources and transformation logic (e.g., which carrier milestone or financial feed triggered a score change), Explainability at G2 enables rapid validation of whether a risk flag is genuine or an artifact, accelerating root-cause isolation and lowering Time-to-Recover. At predictive alerting (G3), risk sensing turns into action when alerts are routed to named procurement and operations owners with bounded rights (e.g., temporary reallocations, buffer releases, pre-approved expedites)—Authorization—so the alert-to-action interval compresses and Time-to-Decide improves. Finally, diversification strategies deliver consistent benefit when options are executed through ERP/SRM/OMS connectors (e.g., PO pulls to alternates, supplier blocks/unblocks, split releases, and parameter updates): this Executability at G4 reduces Time-to-Reconfigure. These effects persist where stewardship routines maintain data quality and model threshold calibration (Fidelity across G2–G3), preventing drift and alert fatigue.
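The ranked queue of at-risk suppliers that participants described can be sketched as a weighted composite of internal performance indicators and external signals. The weights, field names, and values below are purely illustrative assumptions, not calibrated parameters from the study.

```python
# Illustrative composite supplier-risk ranking; each input is assumed
# pre-normalized to [0, 1], and the weights are invented for exposition.
suppliers = [
    {"id": "SUP-001", "otd_miss_rate": 0.12, "leadtime_var": 0.30, "fin_stress": 0.7},
    {"id": "SUP-002", "otd_miss_rate": 0.02, "leadtime_var": 0.05, "fin_stress": 0.1},
    {"id": "SUP-003", "otd_miss_rate": 0.08, "leadtime_var": 0.50, "fin_stress": 0.4},
]

WEIGHTS = {"otd_miss_rate": 0.4, "leadtime_var": 0.3, "fin_stress": 0.3}

def risk_score(supplier):
    """Higher score = higher exposure; a simple weighted sum for illustration."""
    return sum(WEIGHTS[k] * supplier[k] for k in WEIGHTS)

# Ranked queue for the procurement owner to work through.
ranked = sorted(suppliers, key=risk_score, reverse=True)
for s in ranked:
    print(s["id"], round(risk_score(s), 3))
```

In practice, such a score would feed the G3 alerting layer, with thresholds tuned by stewards (Fidelity) and each alert routed to a named owner (Authorization).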
Practically, supplier risk monitoring strengthened prediction by detecting early degradation and multi-tier exposure; enabled faster response via prioritized expedites, temporary reallocation of orders, activation of qualified alternates, and buffer adjustments; and supported recovery by accelerating requalification/ramp-up plans and focusing development efforts where they compress time-to-recover most. Indicative metrics included warning lead time before supply impact, true-positive/false-alarm rates on supplier alerts, alert-to-action cycle time, percent of at-risk POs reallocated within SLA, reduction in premium freight and stockouts, and deltas in Time-to-Reconfigure and Time-to-Recover across exposed SKUs and sites.
Boundary conditions were also visible. Benefits diminished when partner feeds refreshed too slowly, when supplier identifiers and site codes were inconsistent across systems, or when decision rights and system hooks were absent—producing notification without action and manual re-keying of changes into SRM/ERP. In such cases, teams debated numbers before acting (eroding Time-to-Detect), waited for approvals (stretching Time-to-Decide), and executed reallocations offline (raising Time-to-Reconfigure). Where standards, access rights, and executable integrations were explicit—and risk playbooks specified owners, windows, and cost guardrails—analytics-driven supplier monitoring reliably translated into preemptive, coordinated diversification moves.
4.2.10. Dynamic Capacity Allocation and Adaptive Production Prioritization
Participants described dynamic reallocation of capacity and rapid reprioritization of orders as a prominent benefit of BDA adoption. By linking real-time material availability, WIP status, labor/asset capacity, and inbound ETA risk to scheduling logic in ERP/MES, firms could generate feasible re-sequencing options within minutes rather than days. As one manager explained,
“When the raw material shipment got delayed during the port congestion in Penang, we used analytics to quickly identify which products could still be produced with available inventory and which orders could be rescheduled without affecting our service-level agreements. This kind of agility was impossible without real-time data insights”.
Several interviewees also emphasized demand sensing to tune plans in-flight:
“During the pandemic, traditional forecasting models collapsed. But using big data tools, we integrated real-time distributor data and social signals to sense demand drops and quickly reduced overproduction. It saved millions in inventory holding costs”.
In our process account, this agility emerges when the mechanisms that support sensing, deciding, and executing are activated coherently across adoption gates. During data plumbing and descriptive monitoring (G1–G2), harmonized product/BOM, resource, and location identifiers—together with disciplined time bases—sustain Comparability; plan–actual variances and material/capacity views align across lines and sites, reconciliation effort drops, and emerging bottlenecks are detected earlier, shortening Time-to-Detect. Dashboards that retain drill-through from candidate schedules to underlying constraints (inventory lots, machine calendars, labor rosters, and inbound ETAs) reinforce Explainability at G2, enabling planners to verify feasibility and isolate true blockers, which lowers Time-to-Recover once changes are enacted. At predictive alerting (G3), reprioritization moves from analysis to action when alerts and candidate sequences are routed to named owners with bounded rights (e.g., cross-line swaps, overtime triggers, temporary parameter overrides)—Authorization—so the alert-to-action interval compresses and Time-to-Decide improves. Finally, when accepted options are executed through ERP/MES (auto-create/release work orders, re-sequence queues, update dispatch lists, and adjust supplier POs), Executability at G4 reduces Time-to-Reconfigure. These gains persist where stewardship routines keep thresholds and calendars fresh (Fidelity across G2–G3), preventing drift and last-minute conflicts.
Practically, dynamic capacity allocation supported prediction by combining demand sensing with inbound risk to anticipate future constraints; enabled fast response via immediate re-sequencing, cross-line rebalancing, targeted expedites, and temporary overrides; and accelerated recovery by computing shortest backlog-clearance paths under real constraints and resetting buffers/cycle times after the shock. Indicative metrics included lead time from trigger to feasible sequence, share of alerts with executable options, alert-to-action cycle time, schedule-adherence after re-sequencing, premium-freight/expedite rate, overtime placement effectiveness, and deltas in Time-to-Reconfigure and Time-to-Recover for affected families.
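The "which products could still be produced" logic quoted earlier can be illustrated with a toy feasibility filter over available inventory, prioritized by due date. All data, field names, and the greedy rule are invented for exposition; real implementations would sit inside ERP/MES scheduling logic.

```python
# Toy sketch: filter open orders by whether remaining stock covers their
# bill-of-materials, committing stock earliest-due-first.
inventory = {"resin": 120, "chip": 40}

orders = [
    {"order": "SO-1", "bom": {"resin": 50, "chip": 10}, "due_days": 2},
    {"order": "SO-2", "bom": {"resin": 80, "chip": 35}, "due_days": 5},
    {"order": "SO-3", "bom": {"resin": 30, "chip": 5},  "due_days": 1},
]

def feasible(order, stock):
    """True if current stock covers every BOM line of the order."""
    return all(stock.get(part, 0) >= qty for part, qty in order["bom"].items())

stock = dict(inventory)
run_now, reschedule = [], []
for o in sorted(orders, key=lambda o: o["due_days"]):  # earliest due date first
    if feasible(o, stock):
        run_now.append(o["order"])
        for part, qty in o["bom"].items():
            stock[part] -= qty  # commit the material
    else:
        reschedule.append(o["order"])

print(run_now, reschedule)
```

Even this minimal version shows why Comparability matters: the filter is only trustworthy if inventory, BOM, and order records share identifiers and a common time base.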
Boundary conditions were visible in deviant cases. Benefits diminished when data standards were uneven (BOM/route mismatches across lines), when ETA feeds refreshed too slowly, or when decision rights and connectors were absent—producing notification without action, manual re-keying of sequences, and late rework of plans. In such settings, teams debated numbers before acting (eroding Time-to-Detect), waited for approvals (stretching Time-to-Decide), and executed changes offline (raising Time-to-Reconfigure). Where standards, ownership, and system hooks were explicit—and capacity moves were tied to pre-approved playbooks—dynamic allocation functioned as a dependable lever for fast, coordinated prioritization.
4.3. RQ2: Barriers/Challenges in BDA Adoption
Despite the transformative potential of BDA for supply chain resilience, implementation is often hindered by organizational and technical frictions. To make failure pathways explicit, we describe each barrier in terms of the mechanism it obstructs, the adoption gate where the stall is most visible, and the decision-latency outcome that worsens. Illustrative deviant-case contrasts clarify how mistimed or missing governance turns analytics into notification without action rather than a dependable operating mechanism.
4.3.1. Lack of Data Integration and Infrastructure Readiness
Interviewees described outdated, siloed, and partially connected IT landscapes as a primary brake on BDA adoption for resilience. Legacy ERP/MES/WMS instances, bespoke shop-floor applications, and partner portals that “do not talk” to one another produced laggy, inconsistent feeds and forced manual stitching of data. As one participant noted,
“We still operate multiple systems that don’t communicate. Sometimes, we end up running analytics manually in Excel”.
An operations lead echoed the integration burden:
“We tried to pilot predictive alerts, but the data pipeline kept breaking—different IDs, different time stamps, and no API from the old MES. By the time we reconciled files, the window to act had passed”.
In our process account, fragmentation at data plumbing and descriptive monitoring (G1–G2) blocks Comparability: identifiers and time bases do not align across ERP/MES/WMS/TMS, so exception signals require reconciliation before they can be trusted. The result is longer Time-to-Detect as teams debate “whose number is right?” rather than acting. Where lineage is also absent or hard to inspect, Explainability at G2 degrades—root-cause isolation slows and Time-to-Recover lengthens because engineers cannot quickly verify whether a spike reflects a real deviation or a transformation artifact. Finally, when connectors from analytics to operating systems are missing, even well-understood recommendations remain offline: Executability at G4 is blocked, raising Time-to-Reconfigure as changes are re-keyed manually into ERP/MES.
Teams reported high reconciliation effort, inconsistent master data, slow alert-to-action latency, and declining dashboard trust—translating into missed early-warning windows, premium expedites, and reversion to email and spreadsheets. A contrasting (deviant) pattern showed that firms could partially mitigate legacy constraints by introducing a minimal integration spine: lightweight middleware for canonical IDs and timestamps, phased API enablement on critical events, and strict standards for master data. Once these elements were in place, detection debates subsided (Time-to-Detect shortened), alert ownership became actionable, and prescriptive options began to execute through ERP/MES (Time-to-Reconfigure fell). Put differently, lack of integration and infrastructure readiness is the mirror image of its enabling resource: modernization and harmonization at G1–G2 are a precondition for turning analytics into timely prediction, coordinated response, and faster recovery.
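The "minimal integration spine" pattern (canonical identifiers plus disciplined timestamps) can be sketched as lightweight middleware. The mapping tables, system names, and timezone offsets below are hypothetical and stand in for whatever master-data service a firm actually deploys.

```python
from datetime import datetime, timezone, timedelta

# Hypothetical cross-system mapping: each (system, local_id) pair resolves
# to one canonical supplier key.
CANONICAL_ID = {
    ("ERP", "V-1001"): "SUP-001",
    ("MES", "vendor_17"): "SUP-001",
    ("WMS", "1001-A"): "SUP-001",
}

def canonicalize(system, local_id):
    key = CANONICAL_ID.get((system, local_id))
    if key is None:
        # Unmapped IDs go to a steward exception queue rather than silently passing.
        raise KeyError(f"unmapped {system} id: {local_id}")
    return key

def to_utc(naive_local, utc_offset_hours):
    """Normalize a system-local naive timestamp to UTC."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return naive_local.replace(tzinfo=tz).astimezone(timezone.utc)

# Two feeds now compare on the same key and the same time base.
erp_key = canonicalize("ERP", "V-1001")
mes_key = canonicalize("MES", "vendor_17")
print(erp_key == mes_key)
print(to_utc(datetime(2024, 3, 1, 16, 0), 8).isoformat())
```

The point of the sketch is the governance choice, not the code: once keys and time bases align by construction, "whose number is right?" debates disappear from the detection path.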
4.3.2. Skill Gaps and Human Capital Deficiency
A recurrent barrier was the gap between tool acquisition and the hybrid skills (analytics and operations) needed to translate signals into action. Firms reported that analytics literacy was concentrated in small central teams, while planners, schedulers, and buyers—those who must act—lacked confidence with models, thresholds, and exception workflows. As one manager put it,
“Even if we buy the best tools, we still need people who can operate them. Most of our staff are good with ERP, not analytics”.
Retention compounded the problem:
“Even when we find someone with strong data skills, they don’t stay long—tech companies outbid us”.
In our process account, skill gaps obstruct multiple generative mechanisms at distinct adoption gates. During data plumbing and descriptive monitoring (G1–G2), limited literacy around metric definitions and time bases undercuts Comparability; teams debate how to compute a KPI or which timestamp to use, so exception signals require reconciliation before they are trusted and Time-to-Detect lengthens. When line staff are uncomfortable interrogating dashboards for source-to-report drill-through, Explainability at G2 is muted; root causes are inferred by intuition rather than verified, delaying containment and extending Time-to-Recover. The most visible stall appears at predictive alerting (G3): without named owners who understand thresholds, playbooks, and bounded action rights, Authorization fails and alerts accumulate—an observable notification without action pattern that stretches Time-to-Decide. Finally, when data-science outputs are not co-designed with line functions, recommended options are not feasible in ERP/MES or ignore practical constraints (changeovers and certifications), blocking Executability at G4 and raising Time-to-Reconfigure. Absent stewardship routines, models and thresholds drift over time, weakening Fidelity across G2–G3 and eroding earlier timer gains.
Skill deficits slowed prediction (poor threshold calibration and high false positives), weakened response (alerts without owners or playbooks; reversion to spreadsheets), and limited recovery (thin after-action analysis and little parameter tuning). Several participants also noted a disconnect between data scientists and line functions, producing models that were not executable in ERP/MES or that ignored real constraints. By contrast, firms that mitigated this barrier invested in cross-functional upskilling and embedded “translator” roles; they paired training with explicit ownership and role-based access, ran joint design sprints to ensure options were operable in ERP/MES, and instituted stewardship cadences for threshold/model recalibration. Where these practices were present, reconciliation debates subsided (Time-to-Detect shortened), alerts were acted on within service windows (Time-to-Decide improved), and recommended changes executed through systems rather than spreadsheets (Time-to-Reconfigure fell), with post-event reviews feeding continuous improvement in Time-to-Recover.
4.3.3. Organizational Resistance to Change and Cultural Inertia
A frequently cited barrier was inertia in shifting from intuition-driven, hierarchical routines to evidence-based, cross-functional decision-making. Skepticism toward analytics—especially among experienced supervisors and middle managers—led to reluctance to trust model outputs or to change long-standing practices. As one supervisor noted,
“We’ve been doing things manually for 20 years. Convincing people that analytics can outperform their gut feeling is not easy”.
Fear of surveillance or job displacement also dampened adoption, with several participants emphasizing that acceptance improved only when teams were involved and early wins were visible:
“Technology isn’t the issue—it’s people. Some staff see analytics as a threat, not a tool. Adoption only improved when we started celebrating small wins and involving them in the process”.
In our process account, cultural resistance obstructs multiple generative mechanisms across adoption gates. During data plumbing and descriptive monitoring (G1–G2), teams that default to “gut checks” over shared definitions and disciplined thresholds undermine Comparability; numbers are debated before they are used, and Time-to-Detect lengthens as reconciliation takes precedence over investigation. When managers are uncomfortable interrogating dashboards for source-to-report drill-through, Explainability at G2 is muted; deviations are dismissed as “bad data”, slowing root-cause verification and extending Time-to-Recover. At predictive alerting (G3), hesitation to act without senior sign-off blocks Authorization: alerts circulate without a designated, empowered owner—an observable notification without action pattern that stretches Time-to-Decide. Finally, reluctance to change established workflows (e.g., re-sequencing, mode switches, and supplier alternates) delays wiring recommendations into ERP/MES/WMS, stalling Executability at G4 and raising Time-to-Reconfigure. Over time, blame-seeking post-mortems discourage open discussion of misses, eroding Fidelity (data care, threshold tuning, and exception clearing) and causing timer gains to decay after pilots.
Respondents noted that resistance was most acute where analytics arrived as a “reporting layer” without role definition or participation in design. By contrast, firms that attenuated cultural inertia paired visible leadership with practical activation rules: (i) standardize the numbers, localize the thresholds—shared KPIs and IDs at G1–G2, with team-tuned limits for their context; (ii) pre-authorize bounded actions—clear playbooks at G3 (who acts/when/how within cost and risk guardrails), so action does not wait for ad hoc approval; (iii) close the loop in systems—G4 hooks that execute changes through ERP/MES/WMS so wins are tangible; and (iv) blameless learning—cadenced post-event reviews that update definitions, thresholds, and scenarios without punitive framing. Where these practices were adopted, participants reported fewer debates over “whose number is right?” (Time-to-Detect shortened), fewer stalled alerts (Time-to-Decide improved), and faster translation of decisions into system changes (Time-to-Reconfigure fell), with recovery trajectories stabilized by routine, evidence-based learning.
4.3.4. Poor Data Quality and Governance
Beyond system connectivity, interviewees emphasized a pervasive barrier of data fitness-for-use: inaccurate, incomplete, untimely, and inconsistent records spread across spreadsheets, legacy ERPs, and departmental tools undermined analytics credibility and slowed action. As one planner explained,
“We do have data—but it sits in different systems: Excel, legacy ERP, even paper reports. When we try to combine them, it’s a nightmare. You can’t trust whether the numbers are accurate or current”.
A second participant pointed to heterogeneous practices,
“Every department collects data in its own way—some with SAP, others with homegrown systems, and some still manually. Before big data, we need to clean and unify our sources”.
In our process account, poor quality and weak standardization obstruct multiple generative mechanisms at early adoption gates. During data plumbing and descriptive monitoring (G1–G2), inconsistent masters (items, suppliers, and locations), mixed units, and uneven timestamp conventions break Comparability: the same KPI yields different values across systems and sites, so exception signals must be reconciled before they are trusted, lengthening Time-to-Detect. Absent or opaque source-to-report traceability further mutes Explainability at G2; teams cannot verify how a figure was produced or which transformation introduced the anomaly, slowing root-cause isolation and extending Time-to-Recover. Where data stewardship roles and SLAs are missing, quality drifts after initial clean-ups, eroding Fidelity across G2–G3 and causing precision/recall of alerts to deteriorate over time. The downstream effects appear at later gates: disputed metrics undermine Authorization at G3 (alerts stall because owners do not trust the signal) and block Executability at G4 (recommendations are not actioned in ERP/MES because inputs are suspect), raising Time-to-Decide and Time-to-Reconfigure.
Typical symptoms reported included duplicate masters, missing values, inconsistent keys across ERP/MES/WMS, and absent lineage fields to explain how figures were produced; the operational result was false positives/negatives, time lost reconciling codes and units, debates over “whose number is right”, and slow genealogy reconstruction with audit gaps. By contrast, firms that mitigated this barrier paired standards with stewardship and access: (i) enforce common identifiers, definitions, and time bases at G1–G2 so numbers align by construction; (ii) expose lineage in dashboards so provenance can be verified in routine reviews; (iii) assign stewards with timeliness/accuracy/completeness SLAs and exception queues to prevent post-pilot drift; and (iv) tie alert definitions to named owners and role-based access so trusted signals are acted on within service windows. Where these practices were present, reconciliation debates subsided (Time-to-Detect shortened), root causes were verified faster (Time-to-Recover improved), and alerts translated into system-executed actions rather than emails and spreadsheets (Time-to-Decide and Time-to-Reconfigure fell).
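A stewardship routine of the kind described, with timeliness and completeness SLAs feeding an exception queue, might look like the following sketch. The thresholds, required fields, and records are assumptions chosen for illustration.

```python
from datetime import datetime, timedelta

# Illustrative SLA parameters; real values would be set per data contract.
SLA_MAX_AGE = timedelta(hours=4)
REQUIRED_FIELDS = ("item_id", "supplier_id", "qty", "updated_at")

def sla_exceptions(records, now):
    """Flag records violating completeness or timeliness SLAs for steward review."""
    queue = []
    for r in records:
        missing = [f for f in REQUIRED_FIELDS if r.get(f) is None]
        if missing:
            queue.append((r.get("item_id"), "incomplete", missing))
        elif now - r["updated_at"] > SLA_MAX_AGE:
            queue.append((r["item_id"], "stale", now - r["updated_at"]))
    return queue

now = datetime(2024, 3, 1, 12, 0)
records = [
    {"item_id": "A1", "supplier_id": "SUP-001", "qty": 10,
     "updated_at": datetime(2024, 3, 1, 11, 0)},   # fresh and complete
    {"item_id": "A2", "supplier_id": None, "qty": 5,
     "updated_at": datetime(2024, 3, 1, 11, 30)},  # incomplete record
    {"item_id": "A3", "supplier_id": "SUP-002", "qty": 7,
     "updated_at": datetime(2024, 3, 1, 6, 0)},    # stale (6 h old)
]
for item, reason, detail in sla_exceptions(records, now):
    print(item, reason)
```

Running such a check on a cadence is one concrete way the Fidelity mechanism prevents post-pilot drift in alert precision.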
4.3.5. Cost and Resource Constraints
Participants portrayed cost as a binding barrier, especially for SMEs, noting that BDA requires sustained investment beyond licenses—data pipelines and connectors, sensor retrofits, cloud usage, integration/maintenance, and the talent to run them. As one operations director summarized,
“Implementing BDA isn’t just about buying software—you need servers, licenses, consultants, analysts, and training. The costs add up fast, and for SMEs, every dollar has to be justified”.
Hidden and recurring items—data cleansing and stewardship effort, API/middleware upkeep, model monitoring, and subscription renewals—were cited as budget “surprises”. A supply chain manager added,
“People think dashboards work like magic, but the real work is cleaning, standardizing, and managing data—and we didn’t have the resources or staff to handle it consistently”.
In our process account, budget and capacity limits impede which mechanisms can be activated at each adoption gate and thus lengthen decision latency. At data plumbing and descriptive monitoring (G1–G2), underinvestment in pipelines, canonical IDs, and quality stewardship stalls Comparability and Fidelity: metrics disagree across systems and sites, false positives/negatives persist, and teams spend scarce time reconciling rather than detecting—raising Time-to-Detect. Without resources to expose lineage fields or maintain data contracts, Explainability at G2 weakens; root-cause verification slows and Time-to-Recover extends. At predictive alerting (G3), thin headcount and training budgets delay the establishment of named owners, SLAs, and role-based access, so Authorization is missing and alerts circulate as notification without action, stretching Time-to-Decide. Finally, deferring integrations from analytics to ERP/MES/WMS blocks Executability at G4; recommended changes are re-keyed manually, increasing errors and Time-to-Reconfigure.
Teams described “partial” rollouts—basic visualization without risk scoring or prescriptive planning—yielding modest resilience gains. Where firms navigated constraints successfully, they sequenced spend to activate the highest-leverage mechanisms first: (i) start at G1–G2 with low-cost standards (harmonized identifiers and timestamp conventions) and minimal lineage, which sharply reduces reconciliation time; (ii) fund a narrow set of dashboards tied to explicit owners and SLAs at G3, so alerts convert to actions within windows; (iii) add a small number of native connectors to make one or two prescriptive options executable in ERP/MES at G4 (e.g., re-sequencing ticket creation and carrier rebooking), creating visible Time-to-Reconfigure wins; and (iv) institutionalize lightweight stewardship cadences (exception queues and monthly threshold reviews) to preserve Fidelity as scale grows. Deviant cases also showed workable paths: phasing investments by use case (focus on the largest premium-freight lanes or chronic stockout SKUs), shifting CAPEX to OPEX via cloud with spend controls, vendor co-funding/pilot credits, public grants, and gating scale-up on demonstrated ROI (premium-freight reduction, alert-to-action cycle-time cuts, and improved Time-to-Recover). Where such tactics were absent, adoption stalled in pilots and spreadsheet workarounds persisted.
4.3.6. Lack of Inter-Departmental Collaboration
A persistent barrier was weak collaboration across procurement, production, warehousing, logistics, IT, and sales—manifested in siloed systems, function-specific KPIs, and ad hoc data sharing. As one manager described,
“Our IT department collects the data, but the supply chain team doesn’t always understand how to use it. Production has their own systems and only shares data if asked. So, the data remains underutilized”.
Another added,
“Each department has its own KPIs, its own priorities. There’s no centralized strategy to analyze and act on data collectively”.
In our process account, fragmentation obstructs several generative mechanisms at different adoption gates. During data plumbing and descriptive monitoring (G1–G2), function-specific definitions, codes, and time bases undermine Comparability; the same KPI yields conflicting values by department, so exception signals must be reconciled before they are trusted and Time-to-Detect lengthens. When teams are reluctant or unable to drill from a shared dashboard back to the originating system or owner, cross-functional Explainability at G2 weakens; root-cause verification stalls at organizational boundaries, extending Time-to-Recover. At predictive alerting (G3), unclear handoffs and sequential approvals block Authorization: alerts circulate while departments debate “who owns this?”, producing an observable notification without action pattern that stretches Time-to-Decide. Finally, when scheduling, sourcing, or logistics changes cannot be executed end-to-end because systems and responsibilities are split, Executability at G4 is impaired and Time-to-Reconfigure rises as teams re-key actions or wait for another function to act. Over time, single-function post-mortems discourage shared learning and stewardship, eroding Fidelity (data care, threshold tuning, exception clearing) across G2–G3 and causing earlier timer gains to decay.
Typical symptoms included parallel spreadsheets, reconciliation meetings that delayed action, duplicated safety buffers, and dashboards viewed as “for information only” rather than operating tools. By contrast, firms that mitigated this barrier made collaboration operational: they (i) standardized cross-functional KPIs and identifiers at G1–G2 so numbers align by construction; (ii) instituted joint cadence reviews with drill-through norms so provenance and blockers are verified in the meeting; (iii) assigned named, cross-functional owners for specific alert types with service windows and escalation paths, pre-authorizing bounded actions at G3; and (iv) wired prescriptive options to execute across systems (ERP/MES/WMS/TMS) with clear handoffs, so decisions translate into tickets and transactions at G4. Where these practices were present, reconciliation debates subsided (Time-to-Detect shortened), alerts were acted on within SLAs (Time-to-Decide improved), and system changes propagated without manual re-keying (Time-to-Reconfigure fell), with recovery trajectories stabilized by shared, blameless after-action learning (Time-to-Recover improved).
4.3.7. Lack of Awareness and Interest
Several interviewees characterized low digital literacy and weak appreciation of BDA’s strategic value as a fundamental brake on adoption—analytics was often seen as an “IT project” rather than an operating mechanism for solving stockouts, delays, or recovery bottlenecks. As one manager put it,
“There’s a general lack of urgency. Many managers still think data analytics is just for IT or R&D—they don’t realize how it could help solve real supply chain issues like stockouts or delays”.
Another participant noted the utilization gap:
“We had dashboards and analytics software installed, but nobody used them—not because they weren’t useful, but because people didn’t know what to look for or how to interpret the data”.
In our process account, low awareness weakens activation of generative mechanisms across gates. During data plumbing and descriptive monitoring (G1–G2), if leaders and line teams do not connect shared definitions and thresholds to concrete pain points, Comparability is undercut by indifference: numbers are not standardized or reviewed on cadence, so exception signals emerge late and Time-to-Detect lengthens. When reviews do not demand source-to-report drill-through, Explainability at G2 fades—provenance is rarely checked, root causes remain speculative, and Time-to-Recover extends. At predictive alerting (G3), treating analytics as “nice to have” prevents the establishment of named owners, SLAs, and role-based access; Authorization does not materialize, alerts circulate as notification without action, and Time-to-Decide stretches. Finally, without visible sponsorship to fund connectors and playbooks, Executability at G4 is deferred; recommended changes are re-keyed manually (or not at all), raising Time-to-Reconfigure. Over time, the absence of stewardship attention and learning cadences erodes Fidelity across G2–G3, so precision/recall deteriorates and early wins evaporate.
Typical symptoms included “pilot theater” confined to one function, tool access without training, and KPIs that did not reflect resilience outcomes (e.g., no targets for alert lead time or alert-to-action cycle time). Participants who overcame this barrier made value visible and concrete: short executive briefings tied to current pain points; use-case charters that specify decision owners, SLAs, and resilience KPIs; internal roadshows with before/after mini-cases (avoided expedites and Time-to-Recover improvements); and role-based literacy for planners/buyers/schedulers with local champions coaching day-to-day use. Where these practices were sustained, shared definitions and thresholds were adopted (Time-to-Detect shortened), alerts were acted on within service windows (Time-to-Decide improved), and system-executed changes replaced spreadsheet workarounds (Time-to-Reconfigure fell), with post-event reviews strengthening recovery trajectories (Time-to-Recover improved).
4.3.8. Privacy and Security
Privacy and security concerns were a recurring brake on BDA adoption, particularly where sensitive operational data, supplier terms, or customer records are involved. Reluctance centered on cloud/third-party analytics, cross-border data movement, and IP-leakage risks. As one executive noted, “The idea of storing production and supplier data on external servers makes our top management uneasy. If competitors or unauthorized actors gain access, it could compromise our entire supply chain strategy”.
Tight internal access controls further constrained integration with external tools and partner feeds: “Even internally, we’re careful about who sees what data. So when external platforms ask us to integrate supplier or customer records, there’s hesitation. The risk of leakage—even unintentionally—is too high”.
In our process account, over-cautious controls can obstruct several generative mechanisms across adoption gates. During data plumbing and descriptive monitoring (G1–G2), redacted fields, incompatible pseudonyms, or partner refusals to share identifiers limit cross-firm Comparability; signals do not align, reconciliation increases, and Time-to-Detect (TTD) lengthens. When lineage is withheld for sensitivity reasons or audit trails are fragmented across enclaves, Explainability at G2 weakens; teams struggle to verify the provenance of a KPI or alert during incidents, slowing root-cause isolation and extending Time-to-Recover (TtR). At predictive alerting (G3), blanket denial of role-based access prevents Authorization: alerts cannot be routed to people with bounded rights to act (internally or at partners), producing notification without action and stretching Time-to-Decide (TtD). Finally, when security policy forbids API calls to external systems or requires manual approvals for each transaction, Executability at G4 is impaired; recommendations are re-keyed offline, raising Time-to-Reconfigure (TtRcf). Ironically, the absence of clear guardrails also erodes Fidelity over time—teams work around controls with spreadsheets and ad hoc exports, degrading data quality and auditability.
Uncertainty around regulatory requirements (e.g., PDPA obligations, cross-border transfer rules, and retention/minimization) amplified risk aversion and prolonged legal reviews. Where firms made progress, they paired adoption with specific safeguards that enable mechanisms rather than block them: (i) at G1–G2, contract for shared identifiers via hashed/tokenized keys, define field-level minimization, and standardize timestamp/units so cross-firm Comparability is preserved without exposing raw PII or commercial terms; (ii) expose lineage at G2 through privacy-aware drill-through (masking, role-scoped views, and differential access), sustaining Explainability for investigations; (iii) implement least-privilege, role-based access with just-in-time elevation and audited SLAs at G3, so alerts reach named owners with bounded decision rights—delivering Authorization without broad data sprawl; and (iv) use private connectivity (VPC/PrivateLink), customer-managed keys, signed data contracts, and segregated duties to allow system-to-system actions (carrier rebooking, supplier blocks, and targeted holds) at G4, restoring Executability while containing exposure. Some firms adopted clean rooms or aggregated/obfuscated partner feeds (e.g., sharing ETA risk scores without pricing) to enable joint sensing and coordinated response with minimal disclosure.
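The hashed/tokenized shared keys mentioned in (i) can be implemented with a keyed hash: both partners derive the same opaque identifier from a raw supplier ID using a jointly held secret, so feeds join on matching tokens while the raw identifier never travels. The secret and identifiers below are placeholders; key management and rotation would be governed by the data contract.

```python
import hashlib
import hmac

# Placeholder secret; in practice, provisioned and rotated per data contract.
SHARED_SECRET = b"rotate-me-per-contract"

def tokenize(raw_id: str) -> str:
    """Derive a stable, opaque join key from a raw identifier (HMAC-SHA256)."""
    return hmac.new(SHARED_SECRET, raw_id.encode(), hashlib.sha256).hexdigest()[:16]

# Each party tokenizes locally; tokens match without exposing the raw ID.
ours = tokenize("SUP-001")
theirs = tokenize("SUP-001")
print(ours == theirs)   # tokens are joinable across firms
print(ours != "SUP-001")  # raw identifier is not disclosed
```

This preserves cross-firm Comparability at G1–G2 while honoring minimization obligations, which is precisely the "safeguards that enable mechanisms" posture described above.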
Absent such measures, interviewees reported prolonged approvals, narrow data scopes, and pilots that stalled over security reviews rather than technical feasibility. By contrast, when privacy/security were engineered as enablers—codified in data contracts, access policies, encryption, and auditability—teams preserved the necessary controls while keeping detection rapid, decisions authorized, and actions executable within service windows.
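To make the hashed/tokenized-key safeguard in (i) concrete, the following minimal sketch (our illustration, not drawn from the interview data) shows how two partners can derive matching pseudonymous keys with a keyed hash agreed under a data contract. All names, the shared secret, and the normalization rule are hypothetical assumptions; real deployments would also govern key rotation and scope.

```python
import hmac
import hashlib

def tokenize(identifier: str, shared_secret: bytes) -> str:
    """Derive a stable pseudonymous key from a raw identifier.
    Both partners apply the same HMAC locally, so tokens join across
    firms (preserving Comparability) without exposing raw IDs."""
    # Normalization (strip/upper) stands in for the agreed identifier standard.
    canonical = identifier.strip().upper()
    return hmac.new(shared_secret, canonical.encode(), hashlib.sha256).hexdigest()

SECRET = b"negotiated-per-data-contract"  # hypothetical; exchanged under the signed data contract

# Each firm tokenizes locally; only tokens cross the firm boundary.
firm_a = {tokenize(sid, SECRET): eta for sid, eta in [("SUP-001", 0.8), ("SUP-002", 0.3)]}
firm_b = {tokenize(sid, SECRET): qty for sid, qty in [("sup-001 ", 120), ("SUP-003", 40)]}

# Join on tokens: the shared supplier aligns despite case/whitespace differences.
shared = set(firm_a) & set(firm_b)
assert len(shared) == 1
```

The design choice mirrors the mechanism logic above: the hash preserves cross-firm joinability (M1 Comparability) while field-level minimization keeps raw identifiers and commercial terms out of the exchanged feed.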
Cross-Case Explanatory Memo: When Mechanisms Are Absent
Synthesis across cases. To make the analytic logic explicit (see Figure 3; cf. Section 5), we summarize the most recurrent failure pathways and their gate–mechanism–timer effects as follows:
Figure 3.
Governed BDA adoption: resources → adoption gates → mechanisms → dynamic capabilities → resilience outcomes. M4 Fidelity is rendered as a semi-transparent band across G2–G3; dashed connectors to Sensing/Seizing/Reconfiguring indicate its sustaining role on TTD/TtD/TtR.
- 1. Visibility without ownership (G2 present, M3 Authorization absent): alerts appear but lack named owners/rights, so TtD does not fall; teams revert to e-mails and spreadsheets.
- 2. Signals without fidelity (G2–G3 present, M4 Fidelity weak): untuned thresholds and missing stewardship produce false positives and alert fatigue; TTD/TtD improvements decay over time.
- 3. Recommendations without executability (M5 Executability absent at G4): optimization outputs cannot be enacted in ERP/MES/WMS; TtRcf remains high despite analytics.
These deviant pathways are counterfactual tests of the framework: when the relevant mechanism is missing at its gate, the associated timer fails to improve, explaining why tool-rich firms sometimes remain non-resilient.
5. BDA Adoption Framework: A Resource-Governance Overlay
Synthesizing the thematic findings, we advance a governed-adoption framework that explains how manufacturing firms convert Big Data Analytics (BDA) investments into resilient, routinized operations. The framework links organizational resources to a staged adoption sequence, overlays explicit data-governance decisions across that sequence, and maps the resulting dynamic capabilities (sensing, seizing, reconfiguring) to the resilience lifecycle (prediction, response, recovery) and to the benefits materialized at each stage (see Figure 3). In contrast to capability inventories, our contribution is a process theory that specifies five gate-conditional generative mechanisms and the decision-latency outcomes (“resilience timers”) they change: M1 Comparability primarily reduces Time-to-Detect (TTD); M2 Explainability reduces Time-to-Recover (TtR); M3 Authorization reduces Time-to-Decide (TtD); M4 Fidelity sustains gains in TTD/TtD/TtR over time; and M5 Executability reduces Time-to-Reconfigure (TtRcf). A compact Barrier→Gate→Stage matrix situates where each barrier is most salient in our evidence.
Figure 3 and Table 5 jointly serve as the evidence map; they indicate which mechanism activates at which gate and the timer expected to change, while the deviant-case pathways summarized at the end of Section 4 show what fails when the mechanism is absent.
Table 5.
Generative mechanisms, adoption gates, and decision-latency outcomes.
Resource and data-governance overlay. At the core sits a resource and data-governance overlay that coordinates otherwise ordinary assets into a system-level advantage. We operationalize governance through five mutually reinforcing handles—ownership, standards, stewardship, access, and lineage—each realizing a mechanism at one or more gates. Standards primarily realize M1 Comparability by harmonizing identifiers, definitions, units, and time bases across ERP, MES, IoT, and partner interfaces. Lineage realizes M2 Explainability by making source-to-report transformations auditable and drillable. Ownership and access (with SLAs) realize M3 Authorization by allocating decision rights and routing alerts to accountable roles. Stewardship realizes M4 Fidelity by setting and enforcing quality thresholds and exception queues that prevent drift and alert fatigue. Finally, native connectors/APIs realize M5 Executability by embedding recommendations into operational systems without re-keying. These handles reflect and extend established guidance [88,89]: in our cases, they are not back-office hygiene but the connective tissue that turns technology and talent into dependable, cross-firm action.
Resource orchestration. Technology resources provide interoperable infrastructure (ERP–MES–IoT connectors, cloud data layers, and event streaming) and an analytics stack capable of ingesting high-volume, high-velocity signals. Human resources supply hybrid operations–analytics talent, structured upskilling, and designated steward roles. Managerial and cultural resources align initiatives with resilience KPIs and normalize evidence over intuition in day-to-day routines. Inter-organizational resources (data contracts and partner readiness kits) extend visibility beyond Tier 1 while protecting IP. Financial and roadmap resources stage investment (pilot→scale) and fund the recurring Opex of stewardship and quality. Across firms, these resources yielded durable benefits only when orchestrated by the governance overlay; hybrid talent underperformed without lineage and access, and partner dashboards stalled without shared identifiers and usage rules.
Adoption gates and explicit exit criteria (prose). The adoption sequence unfolds through four gates, with progress governed using concise, auditable exit criteria. We use prescriptive decisioning for Gate 4 (rather than “prescriptive analytics”) to emphasize governed, role-bounded execution of recommendations via native system hooks. Gate 1 (data plumbing) is considered complete when the vast majority of critical objects (item, supplier, site, and shipment) share harmonized identifiers and timestamp conventions, automated pipelines achieve high success rates, and service levels for accuracy and timeliness are being met; Gate 1 is thus a precondition for later reductions in TTD. Gate 2 (descriptive monitoring) is complete when dashboards consistently support drill-through from KPI to source, stewardship service levels are met for timeliness, accuracy, and completeness, and control-tower cadences are institutionalized; at this point, we expect a measurable decline in TTD relative to baseline as reconciliation effort falls. Gate 3 (predictive alerting) is complete when most alert types have named owners, role-based rights, and SLAs, threshold calibration protocols are in place with tracked precision/recall, and the alert-to-action median falls below a target; here, TtD is expected to decrease as authorization routings take effect. Gate 4 (prescriptive decisioning) is complete when recommended options are natively executable through ERP/MES/WMS APIs, a majority of accepted recommendations auto-create system tickets, and targeted flows exhibit lower TtRcf; at this stage, reconfiguration becomes routine rather than improvised.
Barriers as throttles. Barriers operate as throttles along this path rather than as generic obstacles. Legacy fragmentation and poor integration primarily stall Gate 1; departmental silos and low analytics literacy dampen Gate 2; skill gaps and change resistance appear at Gate 3 when alerts are distrusted or not bound to clear actions; and resistance to prescriptive decisioning constrains Gate 4 when optimization outputs challenge local autonomy. Ambiguous ownership and missing lineage cut across Gates 1–2, eroding trust; cost and resource constraints slow all gates (especially for SMEs); low awareness reduces urgency at Gate 1; and privacy/security concerns limit inter-firm exchange even when technical integration is feasible. These placements and their prevalence counts are summarized in Table 6; we retain a single cross-cutting privacy/security tag in Figure 3 to avoid visual clutter while acknowledging its network-wide scope.
Table 6.
Barrier impact matrix: BDA adoption sequence and resilience outcomes.
Dynamic capabilities and timers. Progress through the gates activates the microfoundations of dynamic capabilities [91]. When Gate 2 is governed by standards and lineage, sensing improves via M1 and M2: weak signals are detected earlier and verified faster, lowering TTD and contributing to TtR through faster drill-through. When Gate 3 is governed by access, ownership, and stewardship, seizing accelerates via M3 and M4: alerts are credible and tied to pre-authorized playbooks, reducing TtD. When Gate 4 is governed by ownership and access with native hooks, reconfiguring becomes repeatable via M5: re-sequencing, supplier switching, and rerouting execute within guardrails, lowering TtRcf. These capabilities map neatly onto prediction, response, and recovery [78], separating visibility (Gate 2) from actionability (Gates 3–4) and showing how governance links both to decision rights and accountability.
Concise, testable propositions (prose). To render the account falsifiable, we state four propositions. First, firms that do not implement standards and lineage (M1 + M2) at Gates 1–2 will not achieve sustained improvements in TTD relative to baseline, irrespective of tool spend. Second, establishing named owners, SLAs, and role-based rights (M3) at Gate 3 reduces median TtD compared to firms without such governance, controlling for alert volume and precision. Third, native connectors from analytics to ERP/MES/WMS (M5) reduce TtRcf for prioritized flows relative to manual re-keying, holding disruption magnitude constant. Fourth, stewardship cadences (M4) moderate performance decay, sustaining improvements in TTD/TtD/TtR from pilot to steady state. These propositions invite large-N or panel tests using the timers as dependent variables.
Deviant-case pathways (failure modes). The cases reveal paired failure modes that clarify boundary conditions: “whose number is right?” (M1 absent at Gates 1–2) delays detection; “it must be bad data” (M2 absent at Gate 2) slows recovery; “notification without action” (M3 absent at Gate 3) prolongs decision latency; “pilot rot” (M4 absent at Gates 2–3) causes performance decay; and “recommendations on paper” (M5 absent at Gate 4) slows reconfiguration despite analytics.
Scope and generalization. Claims are bounded to analytic generalization from a single-country, multi-subsector qualitative sample; we do not claim statistical validation. The framework offers measurable constructs—mechanisms, gate exit criteria, and timers—that future studies can test in other settings.
6. Discussion
This study examined the adoption of BDA for resilient supply-chain operations in manufacturing by tracing how firms mobilize and coordinate resources, make governance choices, and configure cross-firm arrangements so that analytics becomes embedded in day-to-day work. Our focus departs from capability-centric accounts by treating BDA as a staged, governance-intensive process rather than a static end state. Interpreting the evidence through the resource-based view and dynamic capabilities, and engaging information-processing and relational governance perspectives where inter-organizational coordination is salient, we explain how specific adoption sequences yield resilience outcomes and why adoptions stall even when tools are available [91,125,126,127]. Figure 3, Table 5, and Table 6 serve as the evidence map: they show which mechanism activates at which gate, which timer is expected to change, and where barriers throttle progress; the deviant pathways summarized at the end of Section 4 provide counterfactual support by showing what fails when a mechanism is absent.
Across cases, resilient outcomes were associated with a configuration of resources rather than any single asset: interoperable technological infrastructure (ERP–MES–IoT integration, cloud data layers, and connectors), skilled human capital with hybrid operations–analytics profiles, leadership sponsorship that protects experimentation and mandates use, and purposeful external interfaces that extend visibility beyond Tier 1. What differentiated successful adoptions was a data-governance overlay—clarified ownership, shared standards, active stewardship, role-based access, and documented lineage—applied across the resource bundle and the early adoption gates. This overlay acted as the connective tissue that converted technology and talent into routinized sensing, decision, and reconfiguration under turbulence, complementing prior work that links infrastructure and talent to visibility and performance [6,19,23,66,86,128]. Where ownership was explicit, identifiers and definitions were shared, stewards maintained data products and quality SLAs, access was least-privilege and auditable, and lineage showed how data moved and changed, managers reported earlier anomaly detection, faster consensus on “one source of truth”, and fewer disputes about whose numbers were authoritative. Importantly, gate exit criteria (e.g., drill-through working end-to-end at G2; named owners/SLAs at G3; and native executability at G4) coincided with step-changes in the relevant timers (TTD, TtD, TtRcf, and TtR), making the process account observable rather than rhetorical.
Viewing adoption as progression through gates—data plumbing, descriptive monitoring, predictive alerting, and prescriptive decisioning—clarifies the link to dynamic capabilities. When governance is in place, sensing strengthens via M1 (Comparability) and M2 (Explainability): cross-site signals are comparable and verifiable, lowering TTD and contributing to TtR through faster drill-through. Seizing accelerates via M3 (Authorization) and M4 (Fidelity): predictive alerts are credible and bound to pre-authorized, role-specific actions, reducing TtD and preventing performance decay through stewardship cadences. Reconfiguring becomes repeatable via M5 (Executability), supported by M4: rescheduling, bill-of-materials substitutions, and rerouting are encoded in playbooks and system hooks, lowering TtRcf. Mapping these routines to the resilience lifecycle shows a coherent chain from prediction (early warning, supplier-risk sensing, demand sensing, and traceability) to response (dynamic capacity allocation, rapid reprioritization and rerouting, and collaborative moves with partners) and into recovery (quicker root-cause isolation via lineage, targeted recalls, backlog clearance, and parameter resets), consistent with visibility- and flexibility-based pathways highlighted in recent work [86,129]. Relative to quantitative studies that document associations between “BDA capability” and resilience, our contribution is to specify the adoption sequences and governance decisions that convert tools into codified, resilient routines—e.g., threshold governance that links specific alerts to authorized responses; lineage-backed accountability that compresses investigation time; and synchronized replanning rituals that coordinate production, logistics, and procurement in hours rather than days.
Rereading barriers through the governance lens helps explain negative and deviant cases. Data-quality and integration problems were not merely technical; they reflected missing standards and lineage, which eroded confidence during time-sensitive decisions [123]. Talent shortages were as much about thin stewardship capacity as about scarce data scientists, leaving no one formally accountable for maintaining identifiers, pipelines, and quality thresholds [55]. Cultural resistance frequently stemmed from unresolved questions about ownership and rights to act on data, echoing evidence that cultural alignment conditions effective analytics use [130]. Cost constraints—especially for SMEs—were amplified by underbudgeted governance work (cleaning, curation, monitoring), explaining why many implementations plateau at descriptive dashboards [91]. Privacy and security hesitancy, particularly in cross-border supplier networks, surfaced where access and usage rights were ambiguous; progress occurred where firms instituted least-privilege, auditable access and negotiated data-sharing terms that balanced confidentiality with responsiveness [131]. Boundary conditions mattered: smaller, less digitally mature firms faced sharper Opex and retention constraints and therefore benefited from staged roadmaps and managed services, while upstream suppliers with limited bargaining power struggled to access downstream demand signals unless focal firms sponsored connectors, standards, and access governance.
Inter-organizationally, our evidence is consistent with information-processing theory: as uncertainty and interdependence rise, networks increase information-processing capacity by adding lateral ties, shared databases, and real-time coordination [125]. Collaborative visibility translated into joint action when complemented by relational governance—trust and joint problem solving layered on top of contracts—which reduced opportunism and accelerated exchange of sensitive signals [126,127]. Where dashboards were introduced without shared identifiers, clear access rights, named stewards on both sides, and auditable lineage for supplier telemetry, partners debated data credibility and moved slowly; where these governance elements were explicit, partners synchronized responses and stabilized service levels. The upshot is a reframing of the dominant narrative from “BDA capability → SCRes” toward “governed adoption sequence → routinized resilience”, aligning RBV’s emphasis on resource complementarities with dynamic capabilities’ emphasis on change-oriented routines. The deviant pathways (visibility without ownership, signals without fidelity, recommendations without executability) reinforce the counterfactual logic: when a mechanism is missing at its gate, the associated timer does not improve, which explains why tool-rich firms sometimes remain non-resilient.
6.1. Theoretical Implications
The findings extend theory in three ways. First, they specify a resource configuration for BDA adoption—infrastructure, hybrid talent, leadership sponsorship, partner interfaces—coordinated by a data-governance overlay that orchestrates complementarities and enables progression through adoption gates; this explains why ordinary assets can yield system-level advantage under turbulence when governed as a bundle [88,89].
Second, they characterize resource and data governance as a quasi-public good in supply networks: shared identifiers, definitions, and lineage expand network information-processing capacity even when underlying datasets remain private, clarifying why collaborative visibility requires explicit access/usage rules and named stewardship across firm boundaries. This aligns with network governance and information-commons logics wherein rule clarity and auditability enable contribution without full centralization, thereby accelerating joint sensing and coordinated response.
Third, they localize dynamic capabilities at actionable control points: sensing is gated by standards and lineage (trusted early warnings), seizing by role-based access and playbooks (authorized, timely action on alerts), and reconfiguring by stewardship and native executability (repeatable rescheduling and substitutions). These micro-linkages yield the following testable propositions: firms lacking M1 + M2 at G1–G2 should not see durable TTD improvements; firms instituting M3 at G3 should observe lower TtD; firms implementing M5 at G4 should reduce TtRcf; and M4 should sustain gains over time. We thus convert the link to dynamic capabilities from descriptive mapping to a falsifiable, mechanism-and-timer account.
6.2. Practical/Managerial Implications
Managers should treat the governance overlay as the first deliverable in any BDA adoption roadmap. A pragmatic sequence begins with a readiness scan that inventories critical objects (items, suppliers, sites, and assets), makes ownership explicit, and establishes baseline accuracy, latency, and resilience timers—Time-to-Detect, Time-to-Decide, Time-to-Reconfigure, and Time-to-Recover [78]. Early sprints should formalize identifiers and definitions, assign stewards with quality SLAs, and implement least-privilege, auditable access, and lineage logging. Once descriptive monitoring is reliable and used in cross-functional meetings, firms can add predictive sensing tied to pre-authorized response bundles and, later, embed prescriptive routines in SOPs for reprioritization, bill-of-materials substitutions, and rerouting. SMEs can reduce cost and risk by adopting managed connectors, low-code data pipelines, and shared “scenario kits” that lower cognitive load for planners; talent scarcity can be mitigated by building hybrid operations–analytics roles and rotating planners through analytics squads. To address security and confidentiality concerns while enabling network-wide sensing, firms should adopt zero-trust architectures, consider federated analytics/learning that keep data local while sharing model parameters, and experiment with data-trust style arrangements that clarify rights and obligations for shared industrial data [132,133,134]. Policymakers and industry bodies can accelerate diffusion by issuing vendor-neutral identifier and metadata standards with conformance tests, subsidizing connector toolkits and steward roles for SMEs, and promoting auditable, least-privilege exchanges in sectoral data spaces.
Overall, the governed-adoption sequence—made visible through gate exit criteria and timer movements—converts BDA investments into routinized sensing, seizing, and reconfiguring across the resilience lifecycle, shifting evaluation from perceptions to observable operational metrics. We bound these claims to analytic generalization as follows: our qualitative evidence explicates mechanisms and offers propositions; future large-N and panel studies can test the predicted timer effects and intervention impacts at scale.
7. Limitations and Future Research
Our aim is analytic—not statistical—generalization. The evidence comes from a single-country (Malaysia), manufacturing-focused context with a purposive/snowball sample that tilts toward senior informants. We sought to explicate how governance mechanisms operate across adoption gates and which decision-latency “resilience timers” they move. We observed code saturation on the five mechanisms, maintained an audit trail and shared codebook, resolved coding through analyst consensus, and used member checks to reduce recall bias; nevertheless, we do not claim external validity of effect sizes.
Future work should test the propositions in Section 5 using designs that place timers as dependent variables. First, longitudinal or panel studies can follow gate “go-lives” and use event-study or difference-in-differences around G2/G3/G4 milestones to estimate impacts on TTD, TtD, TtRcf, and TtR. Second, intervention studies can randomize or phase specific governance handles—e.g., shared identifiers (M1), lineage drill-through (M2), named-owner SLAs (M3), stewardship cadences (M4), or native connectors (M5)—to isolate causal effects at each gate. Third, comparative designs (SMEs vs. large firms; focal OEMs vs. upstream suppliers; Malaysia vs. other ecosystems) can surface boundary conditions tied to digital maturity, bargaining power, and regulatory regimes. Fourth, inter-firm governance merits closer analysis: access/usage contracts, data trusts, federated analytics/learning, and clean-room arrangements may differentially enable collaborative visibility while managing confidentiality. Mixed-methods work that pairs process data (pipeline success logs, lineage traces, and access audits) with outcomes can validate control points in the model (e.g., whether a common item-ID standard or role-based routing measurably shortens TtD). Design-science studies can prototype low-cost stewardship mechanisms for SMEs and test adoption frictions experimentally.
8. Conclusions
In volatile environments, BDA strengthens resilience not because firms own more tools, but because they adopt them under governance. Explicit choices on ownership, standards, stewardship, access, and lineage transform infrastructure and talent into reliable sensing, decisive seizing, and repeatable reconfiguring across the resilience lifecycle. Our mechanism-and-timer account specifies how M1+M2 improve TTD (and contribute to TtR), M3 reduces TtD, M5 reduces TtRcf, and M4 sustains gains over time. The deviant pathways—visibility without ownership, signals without fidelity, recommendations without executability—provide counterfactual logic for why tool-rich firms can remain non-resilient when mechanisms are missing at their gates.
Three practice takeaways follow. First, the overlooked Opex of stewardship is the hidden lever that sustains adoption beyond pilots; budgeting for it is as important as licenses. Second, predictive value materializes only when alerts are bound to roles and playbooks via access rules; otherwise, dashboards proliferate without shortening TtD. Third, in multi-tier networks, minimal common standards and auditable lineage act as a shared enabler; they convert collaborative visibility into synchronized action while preserving confidentiality. For managers, the staged roadmap and gate exit criteria provide a practical sequence from data plumbing to prescriptive SOPs, with resilience timers to track progress. For scholars, positioning governance as an overlay clarifies how resource configurations become dynamic capabilities that deliver prediction, response, and recovery—recasting BDA adoption as a governed sequence that makes resilience routine rather than aspirational.
Author Contributions
G.Y.: Conceptualization, Methodology, Formal analysis, and Writing—original draft, review, and editing; L.A.: Supervision, Formal analysis, and Writing—review and editing; A.O.O.: Supervision, Formal analysis, and Writing—review and editing. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Ministry of Higher Education (MOHE) Malaysia under the Fundamental Research Grant Scheme (FRGS), Malaysia (FRGS/1/2022/SS01/MMU/02/4).
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Multimedia University (MMU) Research Ethics Committee (protocol code: EA0192024, date of approval: 4 June 2024).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Semi-Structured Interview Guide
Appendix A.1. Intro Script (2–3 min)
Thank you for agreeing to participate. This interview explores how your organization adopts Big Data Analytics (BDA) in supply chain operations and how this relates to resilience (visibility, responsiveness, recovery). With your permission, we will audio-record for accuracy. Your name, organization, and any identifying details will not be reported. You may decline to answer any question and can stop at any time. Do you consent to proceed and to audio recording?
Appendix A.2. Shared Understanding (No Leading, Optional)
We use BDA to mean the use of data engineering and advanced analytics (e.g., descriptive/predictive/prescriptive methods, ML/AI) embedded in supply-chain decisions. Supply-chain resilience (SCRes) refers to the capacity to anticipate, absorb, and recover from disruptions. Please feel free to use your own terminology; we can clarify as we go.
Appendix A.3. Guide Structure
Questions are organized to address the following research questions: RQ1 (Resources), RQ2 (Challenges), and RQ3 (Benefits/resilience outcomes).
- Section A. Background and role (warm-up)
- A1. Please describe your current role and responsibilities. (Probe: decision rights; analytics involvement) Context
- A2. Briefly outline your organization’s supply chain (key products, nodes/tiers, major partners). Context
- A3. What are the main data/analytics systems or tools used in supply-chain processes? (Probe: ERP/MES/SCM suites; data lakes; visual analytics) Context
- Section B. Adoption storyline
- B1. Can you walk me through a specific BDA initiative in supply chain from initiation to current status? (Probe: trigger, scope, milestones) RQ1/RQ2
- B2. Which stakeholders were involved and how were responsibilities divided? (Probe: cross-functional team; partners) RQ1
- Section C. Organizational resources (human, data, tech, and governance) RQ1
- C1. What people resources were essential (skills, roles, training)? (Probe: domain experts, data engineers, product owners; internal vs. external)
- C2. What data resources were critical (sources, quality, standards, stewardship)? (Probe: cross-tier sharing; master data)
- C3. What technology resources were required (platforms, integration, OT/IT convergence)? (Probe: latency/throughput; cloud/edge)
- C4. What governance arrangements helped (policies, decision rights, funding, KPIs)? (Probe: privacy/security; interoperability agreements)
- C5. How were these resources mobilized and sequenced over time? (Probe: quick wins; capability building; hiring vs. upskilling)
- Section D. Barriers and workarounds (interdependencies) RQ2
- D1. What were the main adoption barriers? (Prompts: legacy integration; data quality; skills; vendor lock-in; standards; change resistance; ROI)
- D2. Which barriers were interdependent (one causing or amplifying another)? (Probe: examples of “barrier cascades”)
- D3. How did you address or sequence these? (Probe: pilots; governance fixes; contracts with partners; minimum viable datasets)
- D4. What would you do differently next time? (Probe: resourcing, stakeholder engagement, architecture)
- Section E. Resilience mechanisms and outcomes RQ3
- E1. Where, if anywhere, did BDA improve visibility? (Probe: earlier signals; inventory/flow transparency; supplier risk)
- E2. Where did BDA improve responsiveness/reconfiguration? (Probe: scenario planning; S&OP; dynamic allocation; expediting)
- E3. Where did BDA shorten recovery time? (Probe: incident post-mortems; learning loops; automation handoffs)
- E4. Could you describe a recent disruption and how BDA changed the response? (Probe: before/after; evidence; limits)
- E5. Any unintended consequences or risks introduced by BDA? (Probe: model brittleness; bias; over-reliance)
- Section F. Cross-tier collaboration and scaling
- F1. How did partner data-sharing and readiness affect adoption? (Probe: contracts, standards, incentives) RQ1/RQ2
- F2. What governance or operating model supported scaling across plants/tiers? (Probe: centers of excellence; playbooks; funding)
- Section G. Wrap-up
- G1. Top three lessons learned for adopting BDA to support resilience?
- G2. If you could advise a peer starting now, what would you prioritize first?
Closing: Is there anything we did not ask that you consider important? May we contact you later to clarify points if needed?
References
- Su, X.; Zeng, W.; Zheng, M.; Jiang, X.; Lin, W.; Xu, A. Big data analytics capabilities and organizational performance: The mediating effect of dual innovations. Eur. J. Innov. Manag. 2022, 25, 1142–1160. [Google Scholar] [CrossRef]
- Babalghaith, R.; Aljarallah, A. Factors Affecting Big Data Analytics Adoption in Small and Medium Enterprises. Inf. Syst. Front. 2024, 26, 2165–2187. [Google Scholar] [CrossRef]
- Liu, Y.; Xu, X. Industry 4.0 and cloud manufacturing: A comparative analysis. J. Manuf. Sci. Eng. 2017, 139, 034701. [Google Scholar] [CrossRef]
- Sagiroglu, S.; Sinanc, D. Big data: A review. In Proceedings of the 2013 International Conference on Collaboration Technologies and Systems (CTS), San Diego, CA, USA, 20–24 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 42–47. [Google Scholar]
- Grover, V.; Chiang, R.H.; Liang, T.P.; Zhang, D. Creating strategic business value from big data analytics: A research framework. J. Manag. Inf. Syst. 2018, 35, 388–423. [Google Scholar] [CrossRef]
- Wamba, S.; Gunasekaran, A.; Akter, S.; Ren, S.; Dubey, R.; Childe, S. Big data analytics and firm performance: Effects of dynamic capabilities. J. Bus. Res. 2020, 120, 328–337. [Google Scholar] [CrossRef]
- Chen, D.Q.; Preston, D.S.; Swink, M. How big data analytics affects supply chain decision-making: An empirical analysis. J. Assoc. Inf. Syst. 2021, 22, 1224–1244. [Google Scholar] [CrossRef]
- Goes, P.B. Editor’s Comments: Big Data and IS Research. MIS Q. 2014, 38, iii–viii. [Google Scholar] [CrossRef]
- Ivanov, D.; Dolgui, A. Viability of intertwined supply networks: Extending the supply chain resilience angles towards survivability. A position paper motivated by COVID-19 outbreak. Int. J. Prod. Res. 2020, 58, 2904–2915. [Google Scholar] [CrossRef]
- Dubey, R.; Gunasekaran, A.; Childe, S.J.; Bryde, D.J.; Giannakis, M.; Foropon, C.; Roubaud, D.; Hazen, B.T. Big data analytics and artificial intelligence pathway to operational performance under the effects of entrepreneurial orientation and environmental dynamism: A study of manufacturing organisations. Int. J. Prod. Econ. 2020, 226, 107599. [Google Scholar] [CrossRef]
- Pettit, T.J.; Croxton, K.L.; Fiksel, J. The evolution of resilience in supply chain management: A retrospective on ensuring supply chain resilience. J. Bus. Logist. 2019, 40, 56–65. [Google Scholar] [CrossRef]
- Bahrami, M.; Shokouhyar, S. The role of big data analytics capabilities in bolstering supply chain resilience and firm performance: A dynamic capability view. Inf. Technol. People 2022, 35, 1621–1651. [Google Scholar] [CrossRef]
- Maheshwari, S.; Gautam, P.; Jaggi, C.K. Role of Big Data Analytics in supply chain management: Current trends and future perspectives. Int. J. Prod. Res. 2021, 59, 1875–1900. [Google Scholar] [CrossRef]
- Bag, S.; Dhamija, P.; Luthra, S.; Huisingh, D. How big data analytics can help manufacturing companies strengthen supply chain resilience in the context of the COVID-19 pandemic. Int. J. Logist. Manag. 2023, 34, 1141–1164. [Google Scholar] [CrossRef]
- JayaLakshmi, G.; Pandey, D.; Pandey, B.K.; Kaur, P.; Mahajan, D.A.; Dari, S.S. Smart big data collection for intelligent supply chain improvement. In AI and Machine Learning Impacts in Intelligent Supply Chain; IGI Global Scientific Publishing: Hershey, PA, USA, 2024; pp. 180–195. [Google Scholar]
- Liao, H.Y.; Hsu, P.C.; Wang, Y.C. Information systems adoption and knowledge performance: Roles of adoption, capabilities, and absorptive capacity. Heliyon 2023, 9, e12847. [Google Scholar] [CrossRef]
- Rogers, D.L. The Digital Transformation Playbook: Rethink your Business for the Digital Age; Columbia University Press: New York, NY, USA, 2016. [Google Scholar]
- Srinivasan, R.; Swink, M. An investigation of visibility and flexibility as complements to supply chain analytics: An organizational information processing theory perspective. Prod. Oper. Manag. 2018, 27, 1849–1867. [Google Scholar] [CrossRef]
- Akter, S.; Bandara, R.; Hani, U.; Wamba, S.F.; Foropon, C.; Papadopoulos, T. Analytics-based decision-making for service systems: A qualitative study and agenda for future research. Int. J. Inf. Manag. 2019, 48, 85–95. [Google Scholar] [CrossRef]
- Mikalef, P.; Pappas, I.O.; Krogstie, J.; Giannakos, M. Big data analytics capabilities: A systematic literature review and research agenda. Inf. Syst. E-Bus. Manag. 2018, 16, 547–578. [Google Scholar] [CrossRef]
- Agrawal, K. Investigating the Determinants of Big Data Analytics (BDA) Adoption in Asian Emerging Economies. Master’s Thesis, Aalto University School of Business, Helsinki, Finland, 2015. [Google Scholar]
- Ramanathan, R.; Philpott, E.; Duan, Y.; Cao, G. Adoption of business analytics and impact on performance: A qualitative study in retail. Prod. Plan. Control 2017, 28, 985–998. [Google Scholar] [CrossRef]
- Maroufkhani, P.; Tseng, M.L.; Iranmanesh, M.; Ismail, W.K.W.; Khalid, H. Big data analytics adoption: Determinants and performances among small to medium-sized enterprises. Int. J. Inf. Manag. 2020, 54, 102190. [Google Scholar] [CrossRef]
- Tajul Urus, S.; Rahmat, F.; Othman, I.W.; Syed Mustapha Nazri, S.N.F.; Abdul Rasit, Z. Application of the technology-organization-environment framework on big data analytics deployment in manufacturing and service. Asia-Pac. Manag. Account. J. (APMAJ) 2024, 19, 57–92. [Google Scholar]
- Xu, J.; Pero, M.E.P. A resource orchestration perspective of organizational big data analytics adoption: Evidence from supply chain planning. Int. J. Phys. Distrib. Logist. Manag. 2023, 53, 71–97. [Google Scholar] [CrossRef]
- Maroufkhani, P.; Iranmanesh, M.; Ghobakhloo, M. Determinants of big data analytics adoption in small and medium-sized enterprises (SMEs). Ind. Manag. Data Syst. 2023, 123, 278–301. [Google Scholar] [CrossRef]
- Raj, R.; Kumar, S.; Jeyaraj, A. Antecedents and outcomes of big data adoption in supply chain: A meta-analysis. Am. Bus. Rev. 2024, 27, 15. [Google Scholar] [CrossRef]
- Babalghaith, A.; Aljarallah, N. Big data analytics adoption in SMEs: Evidence from Saudi Arabia. Inf. Syst. Front. 2024, 26, 923–940. [Google Scholar] [CrossRef]
- Huong, T.T.; Azmat, F.; Hadeed, H. Exploring big data analytics adoption for sustainable manufacturing supply chains: Insights from a TOE-guided systematic review. Clean. Logist. Supply Chain. 2025, 14, 100256. [Google Scholar] [CrossRef]
- Vafaei-Zadeh, A.; Thanabalan, P.; Hanifah, H.; Ramayah, T. Big data analytics adoption and supply chain resilience: Evidence from Malaysian manufacturing firms. Inf. Syst. Front. 2025, 27, 455–476. [Google Scholar]
- Anwar, M.; Zong, Z.; Mendiratta, A.; Yaqub, M.Z. Antecedents of big data analytics adoption and its impact on decision quality and environmental performance of SMEs in the recycling sector. Technol. Forecast. Soc. Change 2024, 203, 123468. [Google Scholar] [CrossRef]
- Waqar, M.; Paracha, Z. Big data analytics adoption in developing economies: Empirical evidence from Pakistan’s private sector. Foresight 2024, 26, 1–22. [Google Scholar] [CrossRef]
- El-Haddadeh, R.; Weerakkody, V.; Irani, Z.; Fosso, S.W.; Babar, M.A. Big data analytics adoption and value creation in supply chains: A resource-based view and machine learning approach. Bus. Process Manag. J. 2025, 31, 37–56. [Google Scholar] [CrossRef]
- Alorfi, A.; Alsaadi, A. Barriers to adopting big data analytics in manufacturing supply chains: An interpretive structural modelling approach. Systems 2025, 13, 250. [Google Scholar] [CrossRef]
- Aldossari, A.; Mokhtar, U.A.; Ghani, A.T.A. Empowering Saudi Manufacturing Small and Medium Enterprises: A Framework for Big Data Analytics Adoption and Its Impact on Decision-Making. SAGE Open 2025, 15, 21582440251369162. [Google Scholar] [CrossRef]
- Iftikhar, A.; Ali, I.; Arslan, A.; Tarba, S. Digital innovation, data analytics, and supply chain resiliency: A bibliometric-based systematic literature review. Ann. Oper. Res. 2024, 333, 825–848. [Google Scholar] [CrossRef]
- Al-shanableh, A.; Alzyoud, M.; Alomar, S.; Kilani, Y.; Nashnush, E.; Al-Hawary, S.; Al-Momani, A. Factors affecting big data analytics adoption in SME supply chains: Evidence from Jordan. Int. J. Data Netw. Sci. 2024, 8, 321–332. [Google Scholar] [CrossRef]
- Hamed, T.; Dandan, S.M.; Farah, A.A.; Barakat, S.A. Organisational factors affecting big data analytics adoption in supply chain operations: Evidence from Saudi Arabia. J. Transp. Supply Chain. Manag. 2024, 18, a842. [Google Scholar] [CrossRef]
- Waqar, J.; Paracha, O.S. Antecedents of big data analytics (BDA) adoption in private firms: A sequential explanatory approach. Foresight 2024, 26, 805–843. [Google Scholar] [CrossRef]
- Xu, J.; Pero, M.; Fabbri, M. Unfolding the link between big data analytics and supply chain planning. Technol. Forecast. Soc. Change 2023, 196, 122805. [Google Scholar] [CrossRef]
- Cui, Y.; Kara, S.; Chan, F. Manufacturing big data ecosystem: A systematic literature review. J. Manuf. Syst. 2020, 54, 101861. [Google Scholar] [CrossRef]
- Zamani, E.D.; Smyth, C.; Gupta, S.; Dennehy, D. Artificial intelligence and big data analytics for supply chain resilience: A systematic literature review. Ann. Oper. Res. 2022, 322, 119–149. [Google Scholar] [CrossRef]
- Aldossari, S.; Alharbi, F.; Alzahrani, F. Factors Influencing the Adoption of Big Data Analytics: A Systematic Literature Review on SMEs. SAGE Open 2023, 13, 21582440231217902. [Google Scholar] [CrossRef]
- Moktadir, M.A.; Kumar, A.; Ali, S.M.; Paul, S.; Sultana, R.; Rehman Khan, S. Barriers to big data analytics in manufacturing supply chains: A case study from Bangladesh. Comput. Ind. Eng. 2019, 128, 1063–1075. [Google Scholar] [CrossRef]
- Raut, R.; Narwane, V.; Kumar Mangla, S.; Yadav, V.S.; Narkhede, B.E.; Luthra, S. Unlocking causal relations of barriers to big data analytics in manufacturing firms. Ind. Manag. Data Syst. 2021, 121, 1939–1968. [Google Scholar] [CrossRef]
- Dubey, R.; Bryde, D.J.; Blome, C.; Gunasekaran, A.; Papadopoulos, T. Empirical investigation of data analytics capability and organizational flexibility as complements to supply chain resilience. Int. J. Prod. Res. 2021, 59, 110–128. [Google Scholar] [CrossRef]
- Jiang, B.; Xu, L.; Ahn, H. Antecedents of predictive analytics capability for supply chain resilience. Int. J. Prod. Econ. 2024, 257, 108787. [Google Scholar]
- Addo-Tenkorang, R.; Helo, P.T. Big data applications in operations/supply-chain management: A literature review. Comput. Ind. Eng. 2016, 101, 528–543. [Google Scholar] [CrossRef]
- Lau, R.Y.; Zhao, J.L.; Chen, G.; Guo, X. Big data commerce. Inf. Manag. 2016, 53, 929–943. [Google Scholar] [CrossRef]
- Mahmoudi, A.; Deng, X.; Javed, S.A.; Yuan, J. Large-scale multiple criteria decision-making with missing values: Project selection through TOPSIS-OPA. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 9341–9362. [Google Scholar] [CrossRef]
- Gantz, J.; Reinsel, D. The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East. IDC 2012, 1–16. [Google Scholar]
- Popovič, A.; Hackney, R.; Tassabehji, R.; Castelli, M. The impact of big data analytics on firms’ high value business performance. Inf. Syst. Front. 2018, 20, 209–222. [Google Scholar] [CrossRef]
- Choi, T.M.; Wallace, S.W.; Wang, Y. Big data analytics in operations management. Prod. Oper. Manag. 2018, 27, 1868–1883. [Google Scholar] [CrossRef]
- Alidrisi, H. Measuring the environmental maturity of the supply chain finance: A big data-based multi-criteria perspective. Logistics 2021, 5, 22. [Google Scholar] [CrossRef]
- Gunasekaran, A.; Papadopoulos, T.; Dubey, R.; Wamba, S.F.; Childe, S.J.; Hazen, B.; Akter, S. Big data and predictive analytics for supply chain and organizational performance. J. Bus. Res. 2017, 70, 308–317. [Google Scholar] [CrossRef]
- Wamba, S.F.; Dubey, R.; Gunasekaran, A.; Akter, S. The performance effects of big data analytics and supply chain ambidexterity: The moderating effect of environmental dynamism. Int. J. Prod. Econ. 2020, 222, 107498. [Google Scholar] [CrossRef]
- Nguyen, T.; Li, Z.; Spiegler, V.; Ieromonachou, P.; Lin, Y. Big data analytics in supply chain management: A state-of-the-art literature review. Comput. Oper. Res. 2018, 98, 254–264. [Google Scholar] [CrossRef]
- Lee, I. Big Data Analytics in Supply Chain Management: A Review. Appl. Sci. 2022, 12, 17. [Google Scholar]
- Wang, G.; Gunasekaran, A.; Ngai, E.W.; Papadopoulos, T. Big data analytics in logistics and supply chain management: Certain investigations for research and applications. Int. J. Prod. Econ. 2016, 176, 98–110. [Google Scholar] [CrossRef]
- Tiwari, S.; Wee, H.M.; Daryanto, Y. Big data analytics in supply chain management between 2010 and 2016: Insights to industries. Comput. Ind. Eng. 2018, 115, 319–330. [Google Scholar] [CrossRef]
- Demchenko, Y.; Grosso, P.; Membrey, P. Addressing Big Data Issues in Scientific Data Infrastructure. In Proceedings of the 2013 International Conference on Collaboration Technologies and Systems (CTS), San Diego, CA, USA, 20–24 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 48–55. [Google Scholar]
- Jhang-Li, J.H.; Chang, C.W. Analyzing the operation of cloud supply chain: Adoption barriers and business model. Electron. Commer. Res. 2017, 17, 627–660. [Google Scholar] [CrossRef]
- Gamage, P. New development: Leveraging ‘big data’ analytics in the public sector. Public Money Manag. 2016, 36, 385–390. [Google Scholar] [CrossRef]
- Kache, F.; Seuring, S. Challenges and opportunities of digital information at the intersection of Big Data Analytics and supply chain management. Int. J. Oper. Prod. Manag. 2017, 37, 10–36. [Google Scholar] [CrossRef]
- Ren, S.J.; Wamba, S.F.; Akter, S.; Dubey, R.; Childe, S.J. Modelling quality dynamics, business value and firm performance in a big data analytics environment. Int. J. Prod. Res. 2017, 55, 5011–5026. [Google Scholar] [CrossRef]
- Yu, W.; Zhao, G.; Liu, Q.; Song, Y. Role of big data analytics capability in developing integrated hospital supply chains and operational flexibility: An organizational information processing theory perspective. Technol. Forecast. Soc. Change 2021, 163, 120417. [Google Scholar] [CrossRef]
- Legenvre, H.; Hameri, A.P. The emergence of data sharing along complex supply chains. Int. J. Oper. Prod. Manag. 2024, 44, 292–297. [Google Scholar] [CrossRef]
- Erboz, G.; Yumurtacı Hüseyinoğlu, I.Ö.; Szegedi, Z. The partial mediating role of supply chain integration between Industry 4.0 and supply chain performance. Supply Chain. Manag. Int. J. 2022, 27, 538–559. [Google Scholar] [CrossRef]
- Wamba, S.F.; Akter, S.; Edwards, A.; Chopin, G.; Gnanzou, D. How ‘big data’ can make big impact: Findings from a systematic review and a longitudinal case study. Int. J. Prod. Econ. 2015, 165, 234–246. [Google Scholar] [CrossRef]
- Zhao, K.; Zuo, Z.; Blackhurst, J.V. Modelling supply chain adaptation for disruptions: An empirically grounded complex adaptive systems approach. J. Oper. Manag. 2019, 65, 190–212. [Google Scholar] [CrossRef]
- Scholten, K.; Sharkey Scott, P.; Fynes, B. Building routines for non-routine events: Supply chain resilience learning mechanisms and their antecedents. Supply Chain. Manag. Int. J. 2019, 24, 430–442. [Google Scholar] [CrossRef]
- Ribeiro, J.P.; Barbosa-Povoa, A. Supply Chain Resilience: Definitions and quantitative modelling approaches—A literature review. Comput. Ind. Eng. 2018, 115, 109–122. [Google Scholar] [CrossRef]
- Liu, H.; Lu, F.; Shi, B.; Hu, Y.; Li, M. Big data and supply chain resilience: Role of decision-making technology. Manag. Decis. 2023, 61, 2792–2808. [Google Scholar] [CrossRef]
- Christopher, M.; Peck, H. Building the resilient supply chain. Int. J. Logist. Manag. 2004, 15, 1–13. [Google Scholar] [CrossRef]
- Sheffi, Y.; Rice, J.B. A supply chain view of the resilient enterprise. MIT Sloan Manag. Rev. 2005, 47, 41–48. [Google Scholar]
- Dolgui, A.; Ivanov, D.; Sokolov, B. Ripple effect in the supply chain: An analysis and recent literature. Int. J. Prod. Res. 2018, 56, 414–430. [Google Scholar] [CrossRef]
- Ivanov, D. Digital twins, the ripple effect, and resilience in supply chains: Comparative analysis. IFAC-PapersOnLine 2019, 52, 1672–1678. [Google Scholar] [CrossRef]
- Simchi-Levi, D.; Schmidt, W.; Wei, Y. From superstorms to factory fires: Managing unpredictable supply-chain disruptions. Harv. Bus. Rev. 2014, 92, 96–101. [Google Scholar]
- Simchi-Levi, D. Find the Weak Link in Your Supply Chain. Harv. Bus. Rev. 2015, 93, 72–80. [Google Scholar]
- Sahlmueller, T.; Hellingrath, B. Measuring the resilience of supply chain networks. In Proceedings of the 19th International Conference on Information Systems for Crisis Response and Management (ISCRAM), Tarbes, France, 22–25 May 2022. [Google Scholar]
- Teece, D.J. Explicating Dynamic Capabilities: The Nature and Microfoundations of (Sustainable) Enterprise Performance. Strateg. Manag. J. 2007, 28, 1319–1350. [Google Scholar] [CrossRef]
- Ponomarov, S.Y.; Holcomb, M.C. Understanding the concept of supply chain resilience. Int. J. Logist. Manag. 2009, 20, 124–143. [Google Scholar] [CrossRef]
- Xu, Z.; Elomri, A.; Kerbache, L.; El Omri, A. Impacts of COVID-19 on global supply chains: Facts and perspectives. IEEE Eng. Manag. Rev. 2020, 48, 153–166. [Google Scholar] [CrossRef]
- Dubey, R.; Gunasekaran, A.; Bryde, D.; Dwivedi, Y.; Papadopoulos, T. Impact of big data analytics on supply chain resilience: Mediating role of visibility. Ann. Oper. Res. 2021, 302, 241–261. [Google Scholar]
- Zhao, N.; Hong, J.; Lau, K.H. Impact of supply chain digitalization on supply chain resilience and performance: A multi-mediation model. Int. J. Prod. Econ. 2023, 259, 108817. [Google Scholar] [CrossRef]
- Bag, S.; Wood, L.; Xu, L. Predictive analytics for building supply chain resilience. Ann. Oper. Res. 2023, 325, 271–293. [Google Scholar]
- Cooper, R.B.; Zmud, R.W. Information Technology Implementation Research: A Technological Diffusion Approach. Manag. Sci. 1990, 36, 123–139. [Google Scholar] [CrossRef]
- Khatri, V.; Brown, C.V. Designing Data Governance. Commun. Acm 2010, 53, 148–152. [Google Scholar] [CrossRef]
- Otto, B. Data Governance. Int. J. IT/Bus. Alignment Gov. 2011, 2, 4–11. [Google Scholar]
- Hazen, B.T.; Boone, C.A.; Ezell, J.D.; Jones-Farmer, L.A. Data Quality for Data Science, Predictive Analytics, and Big Data in Supply Chain Management. Int. J. Prod. Econ. 2014, 154, 72–80. [Google Scholar] [CrossRef]
- Shen, Z.M.; Sun, Y. Strengthening supply chain resilience during COVID-19: A case study of JD.com. J. Oper. Manag. 2023, 69, 359–383. [Google Scholar] [CrossRef]
- Culot, G.; Orzes, G.; Sartor, M.; Nassimbeni, G. The Data-Sharing Conundrum in the Era of Digital Transformation: Revisiting Established Theory. Supply Chain. Manag. Int. J. 2024, 29, 1–27. [Google Scholar] [CrossRef]
- Jorzik, N. Industrial Data Sharing and Data Readiness: A Law and Economics Perspective. Eur. J. Law Econ. 2024, 57, 181–205. [Google Scholar] [CrossRef]
- Gabellini, M.; Civolani, L.; Ronchi, M.; Naldi, L.D.; Regattieri, A. Data Spaces in Manufacturing and Supply Chains. Appl. Sci. 2025, 15, 5802. [Google Scholar] [CrossRef]
- Panetto, H.; Molina, A. Enterprise Integration and Interoperability in Manufacturing Systems: Trends and Issues. Comput. Ind. 2008, 59, 641–646. [Google Scholar] [CrossRef]
- Bousdekis, A.; Lepenioti, K.; Loumos, V.; Mantzari, E.; Apostolou, D.; Mentzas, G. Enterprise Integration and Interoperability for Big-Data-Driven Processes in Industry 4.0: A Systematic Literature Review. Front. Big Data 2021, 4, 624575. [Google Scholar] [CrossRef]
- Tracy, S.J. A phronetic iterative approach to data analysis in qualitative research. Qual. Res. 2018, 19, 61–76. [Google Scholar]
- Rowley, J. Conducting research interviews. Manag. Res. Rev. 2012, 35, 260–271. [Google Scholar] [CrossRef]
- Kallio, H.; Pietilä, A.M.; Johnson, M.; Kangasniemi, M. Systematic methodological review: Developing a framework for a qualitative semi-structured interview guide. J. Adv. Nurs. 2016, 72, 2954–2965. [Google Scholar] [CrossRef] [PubMed]
- Eisenhardt, K.M. Building theories from case study research. Acad. Manag. Rev. 1989, 14, 532–550. [Google Scholar] [CrossRef]
- Yin, R.K. Case Study Research and Applications, 6th ed.; Sage: Thousand Oaks, CA, USA, 2018. [Google Scholar]
- Ministry of International Trade and Industry (MITI). Industry4WRD: National Policy on Industry 4.0; Policy Document; Ministry of International Trade and Industry (MITI): Kuala Lumpur, Malaysia, 2018.
- Guest, G.; Bunce, A.; Johnson, L. How many interviews are enough? An experiment with data saturation and variability. Field Methods 2006, 18, 59–82. [Google Scholar] [CrossRef]
- Hennink, M.M.; Kaiser, B.N.; Marconi, V.C. Code saturation versus meaning saturation. Qual. Health Res. 2017, 27, 591–608. [Google Scholar] [CrossRef]
- Hagaman, A.K.; Wutich, A. How many interviews are enough to identify metathemes in multisited and cross-cultural research? Another perspective on Guest, Bunce, and Johnson’s (2006) landmark study. Field Methods 2017, 29, 23–41. [Google Scholar] [CrossRef]
- Hennink, M.; Kaiser, B.N. Sample sizes for saturation in qualitative research: A systematic review of empirical tests. Soc. Sci. Med. 2022, 292, 114523. [Google Scholar] [CrossRef]
- Yin, R.K. Applications of Case Study Research; Sage Publications: Thousand Oaks, CA, USA, 2011. [Google Scholar]
- Gosling, J.; Purvis, L.; Naim, M.M. Supply chain flexibility as a determinant of supplier selection. Int. J. Prod. Econ. 2010, 128, 11–21. [Google Scholar] [CrossRef]
- Jha, A.K.; Agi, M.A.; Ngai, E.W. A note on big data analytics capability development in supply chain. Decis. Support Syst. 2020, 138, 113382. [Google Scholar] [CrossRef]
- Kirchherr, J.; Charles, K. Enhancing the sample diversity of snowball samples. Sustain. Sci. 2018, 13, 1381–1391. [Google Scholar]
- Guest, G.; Namey, E.; Chen, M. A simple method to assess and report thematic saturation. PLoS ONE 2020, 15, e0232076. [Google Scholar] [CrossRef]
- Yin, R.K. Case Study Research: Design and Methods; Sage: Thousand Oaks, CA, USA, 2009; Volume 5. [Google Scholar]
- Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
- Saldaña, J. The Coding Manual for Qualitative Researchers, 3rd ed.; Sage: Thousand Oaks, CA, USA, 2016. [Google Scholar]
- Castleberry, A.; Nolen, A. Thematic analysis of qualitative research data: Is it as easy as it sounds? Curr. Pharm. Teach. Learn. 2018, 10, 807–815. [Google Scholar] [CrossRef]
- Braun, V.; Clarke, V. Toward good practice in thematic analysis. APA Open 2022, 2, e2129597. [Google Scholar]
- O’Connor, C.; Joffe, H. Intercoder reliability in qualitative research: Debates and practical guidelines. Int. J. Qual. Methods 2020, 19, 1–13. [Google Scholar] [CrossRef]
- Lincoln, Y.S.; Guba, E.G. Naturalistic Inquiry; Sage: Thousand Oaks, CA, USA, 1985; Volume 75. [Google Scholar]
- Nowell, L.S.; Norris, J.M.; White, D.E.; Moules, N.J. Thematic analysis: Striving to meet trustworthiness criteria. Int. J. Qual. Methods 2017, 16, 1–13. [Google Scholar] [CrossRef]
- Miles, M.B.; Huberman, A.M.; Saldaña, J. Qualitative Data Analysis: A Methods Sourcebook, 3rd ed.; Sage: Thousand Oaks, CA, USA, 2014. [Google Scholar]
- Mikalef, P.; Boura, M.; Lekakos, G.; Krogstie, J. Big data analytics and firm performance: Findings from a mixed-method approach. J. Bus. Res. 2019, 98, 261–276. [Google Scholar] [CrossRef]
- Belhadi, A.; Bouzon, M.; Khan, S. Traceability and big data analytics for sustainable supply chains. J. Clean. Prod. 2024, 409, 137129. [Google Scholar]
- Demchenko, Y.; Grosso, P.; De Laat, C.; Membrey, P. Big Data architecture framework and components. In Proceedings of the 2013 International Conference on Collaboration Technologies and Systems (CTS), San Diego, CA, USA, 20–24 May 2013; pp. 104–112. [Google Scholar]
- Kumar, R.R.; Raj, A. Big data adoption and performance: Mediating mechanisms of innovation, supply chain integration and resilience. Supply Chain. Manag. Int. J. 2025, 30, 67–85. [Google Scholar] [CrossRef]
- Galbraith, J.R. Organization design: An information processing view. Interfaces 1974, 4, 28–36. [Google Scholar] [CrossRef]
- Dyer, J.H.; Singh, H. The relational view: Cooperative strategy and sources of interorganizational competitive advantage. Acad. Manag. Rev. 1998, 23, 660–679. [Google Scholar] [CrossRef]
- Poppo, L.; Zenger, T. Do formal contracts and relational governance function as substitutes or complements? Strateg. Manag. J. 2002, 23, 707–725. [Google Scholar] [CrossRef]
- Mikalef, P.; Pappas, I.O.; Krogstie, J.; Pavlou, P.A. Big data and business analytics: A research agenda for realizing business value. Inf. Manag. 2020, 57, 103237. [Google Scholar] [CrossRef]
- Ivanov, D. Supply chain viability and the COVID-19 pandemic: A conceptual and formal generalisation of four major adaptation strategies. Int. J. Prod. Res. 2021, 59, 3535–3552. [Google Scholar] [CrossRef]
- Sivarajah, U.; Kamal, M.M.; Irani, Z.; Weerakkody, V. Critical analysis of Big Data challenges and analytical methods. J. Bus. Res. 2017, 70, 263–286. [Google Scholar] [CrossRef]
- Chenger, J.; Lin, Y.; Liu, X. Leveraging big data analytics for resilient global supply chains under data privacy risks. Technovation 2023, 121, 102656. [Google Scholar]
- Rose, S.; Borchert, O.; Mitchell, S.; Connelly, S. Zero Trust Architecture; Technical Report Special Publication 800-207; National Institute of Standards and Technology (NIST): Gaithersburg, MD, USA, 2020. [Google Scholar]
- Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and open problems in federated learning. Found. Trends Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
- Hardjono, T.; Pentland, A.S. Data Cooperatives: Towards a Foundation for Decentralized Personal Data Management. arXiv 2019, arXiv:1905.08819. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).