Article

A Systemic Approach to Decision Support and Automation: The Role of Big Data Analytics and Real-Time Processing in Management Information Systems

Department of Computer Engineering, Faculty of Computer and Information Technologies, Istanbul University, Istanbul 34452, Turkey
Systems 2026, 14(2), 216; https://doi.org/10.3390/systems14020216
Submission received: 9 January 2026 / Revised: 14 February 2026 / Accepted: 17 February 2026 / Published: 19 February 2026

Abstract

Management Information Systems (MIS) are increasingly expected to support real-time, evidence-based decision-making and to automate routine workflows. Nevertheless, many organizations still struggle to transform heterogeneous, high-velocity data into trustworthy decision support and process execution at scale. Adopting a socio-technical systems perspective, this study explores the interplay between data infrastructure, analytics capabilities, and decision-making processes. We adopted a mixed-methods design incorporating (i) a cross-sectional survey of MIS professionals (n = 150) from organizations across three industries (retail, healthcare, and financial services) and (ii) 12 semi-structured stakeholder interviews. The survey data show that organizations reporting higher maturity in BDA and real-time processing achieve stronger performance outcomes, characterized by self-reported average revenue growth of 12% among retailers, a material decrease in operational costs, and improvements in overall system efficiency. These figures reflect respondents’ estimates rather than audited financial statements. BDA, real-time processing, and data infrastructure readiness were statistically significant predictors in an OLS regression model of perceived organizational performance, explaining a substantial share of variance (R2 = 0.72). The interviews explain how these effects were achieved: performance improvements materialized through real-time feedback loops in which streaming and batch pipelines were integrated, data-quality controls were embedded at ingestion, and decision outputs were linked to workflow automation.
The research contributes a holistic extension of MIS capability frameworks, linking data infrastructure decisions to decision timeliness and automation readiness, and offers practical guidance for governance and technology selection.

1. Introduction

Modern MIS requirements extend beyond periodic reporting toward continuous sensing, rapid interpretation, and timely action. The data generated by organizations has grown in volume and speed due to the adoption of digital tools, enterprise applications, and Internet-of-Things (IoT) telemetry, which has led to the trend of replacing traditional business intelligence (BI) with BDA and advanced analytics [1,2]. IS scholarship frames this shift as an inflection point, where data-intensive decision support becomes a core competency rather than a back-office function [3,4].
The technical capability to create data is, however, not synonymous with managerial value. BDA initiatives often fail due to fragmented data ownership, poor data quality, and inadequate operationalization of analytical deliverables into daily operations [5,6]. Therefore, successful MIS implementation requires a holistic view that treats analytics not just as a toolset, but as a socio-technical system combining technical resources with governance mechanisms, process redesign, and organizational learning [7,8]. In this regard, the integration challenge is not just about more data, but about creating real-time feedback loops in which trusted data pipelines directly inform explicit decision authorities and automated decision control.
Real-time processing capability is especially consequential because it shortens the interval between signal detection and managerial response. Real-time big data analytics introduces trade-offs between speed, reliability, interpretability, and cost, particularly when streaming signals must be reconciled with historical baselines and auditability requirements [9]. In parallel, scalable computing platforms and contemporary predictive analytics expand the range of decisions that can be forecasted or automated, including anomaly detection, optimization, and dynamic pricing [10,11]. These trends raise practical questions about how MIS leaders can make timely decisions without compromising trust, compliance, or resilience [12,13].
Substantial evidence links analytics capabilities to organizational performance, but two gaps remain. First, previous research frequently treats analytics as a singular capability and consequently pays little attention to how real-time processing and data infrastructure readiness jointly enhance the contribution of BDA to decision support and workflow automation. Second, real-world evidence remains scarce on the mechanisms through which analytics outputs are turned into operational action. These mechanisms encompass governance, integrated pipelines, and workflow integration, and they may vary across industry contexts.
Against this backdrop, this paper analyzes the influence of BDA maturity, real-time processing, and data infrastructure readiness on MIS decision support and workflow automation. The study pursues three objectives that correspond with its research questions. First, it identifies the benefits of BDA and real-time processing for retail, healthcare, and financial services (RQ1). Second, it uses survey data to test how well BDA, real-time processing, and data infrastructure readiness explain perceived organizational performance (RQ2). Third, it explains how these performance improvements are realized in practice by identifying implementation and managerial practices through semi-structured interviews (RQ3). By combining quantitative and qualitative data, the study develops an evidence-based framework for BDA implementation and offers MIS leaders actionable guidance, emphasizing methodological transparency and practical contribution.
This study makes three contributions to MIS research and practice. First, it extends capability-based views of analytics by jointly modeling BDA maturity, real-time processing capability, and data infrastructure readiness as complementary drivers of decision support and workflow automation [4,14,15]. Second, it provides mixed-method evidence across retail, healthcare, and financial services by combining survey-based associations with interview-based mechanisms, thereby adding explanatory depth. Third, it proposes actionable implementation patterns for analytics based on real-world examples that show governance, ingestion-level data quality, unified pipelines, and workflow coupling as useful tools.

2. Literature Review

2.1. Theoretical Foundations of BDA as a Strategic MIS Capability

Early scholarly work conceptualized Big Data as a transformative advancement, enabling organizations to utilize high-velocity data and sophisticated modeling to enhance operational efficiency and strategic decision-making [1,2]. Subsequent Information Systems (IS) research refined this perspective, establishing that BDA is not a standalone technology but a complex socio-technical assemblage of data, tools, skills, and organizational routines [5,7]. Operationally, BDA initiatives encompass descriptive business intelligence, diagnostic analytics, predictive models, and prescriptive optimization, each requiring distinct data architectures and governance frameworks [3,11].
Empirical research consistently associates BDA maturity with enhanced firm performance; however, this impact is contingent upon complementary resources such as analytical talent and strategic alignment [15,16]. Dynamic-capability views hold that organizations generate value when they are able to sense data-driven opportunities, seize these through decision redesign, and reconfigure resources in a scalable manner [14,17]. This alignment is further elucidated when organizations are viewed as Complex Adaptive Systems (CAS), where supply chain complexity is explicitly examined through lenses such as complex adaptive systems theory. In such systems, resilience is fundamentally associated with maintaining a steady state by either returning to a pre-disruption stage or adapting to a new equilibrium, and with properties such as self-organization, learning, and adaptation in response to disturbances. Because these environments involve diverse actors and chaotic conditions, the information sharing and coordination required tend to become highly complex during disruptions. Accordingly, BDA can be positioned not merely as a reporting tool but as a continuous sensing and sensemaking capability that enhances situational awareness and guides the allocation of critical resources, while also contributing to lowering information complexity along key decision pathways in turbulent contexts [18,19]. The work on related issues also highlights market-driven learning and adaptive practices to facilitate the transfer of analytics findings into product and process enhancements [4,20].
Sector applications demonstrate both the promise and the design limitations of BDA. In supply chains and manufacturing, analytics can be used to enhance forecasting, inventory choices, and process reliability [21,22,23,24]. In healthcare, Big Data supports population analytics and clinical decision support, while safety and privacy constraints raise the demands on quality and governance [13,25]. Reinforcing this socio-technical perspective, recent findings by Pejić Bach et al. [26] indicate that clinicians’ intention to use such digital systems is significantly associated with perceived time savings rather than privacy protection guarantees, a distinction that highlights the critical need to couple speed and efficiency benefits with trust-oriented governance designs. In customer-facing contexts, BDA aids personalization and marketing optimization, but the operational advantages are contingent on data timeliness and systems integration [27,28]. These cross-domain results suggest that MIS value emerges when analytics is integrated into business processes and managed as a strategic capability rather than as an independent technological initiative [8,29].

2.2. Real Time Processing and Operational Decision Support

Real-time processing refers to the capability to ingest, process, and act on data with low latency, enabling decisions to be made while the operational context is still unfolding. Survey and systems research identify several technical pathways, such as stream processing engines, in-memory computation, and micro-batching, each with distinct performance and cost trade-offs [9,30]. Real-time analytics is particularly applicable in fast-paced settings such as fraud detection, price optimization, customer experience, and logistics management, where delayed decisions translate into lost revenue or increased risk [10,31].
Recent systems-oriented work in logistics highlights the warehouse as a critical component of information-intensive operations where facilities are increasingly treated as strategic elements of big data flows. According to Vaičiutė and Katinienė [32], improving warehouse information systems alongside related automation technologies can reduce human error, accelerate processes, and strengthen operational safety and service quality. This operational lens implies that real-time processing capability in MIS depends not only on analytics engines but also on the reliability of event capture and execution interfaces such as WMS and ERP integrations that serve to close real-time feedback loops.
However, despite its operational appeal, real-time processing increases architectural complexity. Organizations must reconcile streaming data with historical baselines, guarantee event correctness, and maintain resilience under failure conditions (e.g., delayed events, outages, or partial data loss) [6]. These demands heighten the importance of data infrastructure decisions, in particular the combination of storage, compute, and governance layers, to mitigate operational brittleness and enable continuous monitoring [13,33].
Real-time processing also reshapes decision support. With shorter decision cycles, organizations increasingly rely on automated alerts or machine-generated recommendations, which raises demands for transparency, interpretability, and well-designed human checkpoints [11]. In e-commerce and data-driven marketing, real-time analytics shapes user experience and platform outcomes, for example in livestream-selling performance and personalization quality [27,34]. Concurrently, platform governance and regulatory forces require that real-time decisions be auditable and respect policy constraints [12].

2.3. Data Infrastructure Preparedness, Integration and Governance

Data infrastructure preparedness describes the extent to which an organization has trusted mechanisms for data ingestion, storage, integration, and access to support analytics and decision support. Studies of BDA architectures stress that infrastructure should support both structured enterprise data (e.g., ERP/CRM logs) and semi-structured/unstructured data (e.g., clickstreams, sensor data) while preserving quality, security, and lineage [2,9]. Modern systems increasingly adopt data-lake or lakehouse designs that bring together batch and streaming pipelines, although these succeed only with governance and metadata discipline [8,35].
Integration issues are especially salient in MIS because decision support can span numerous functions and systems. Research on analytics-enabled transformation observes that integration failures usually occur at organizational boundaries, where decision rights are not well defined, data definitions are inconsistent, or incentive systems are not designed to encourage sharing [7,29]. Moreover, analytics value is diminished when infrastructure does not support scalable experimentation, model monitoring, and iterative improvement cycles [10,36]. These problems support the view that infrastructure preparedness is not a back-end problem but a governance and capability-building concern at the center of MIS performance [3,13].
Recent systems-oriented research demonstrates that integration challenges and governance gaps are not limited to large enterprise settings but also emerge in domain-specific contexts. For instance, work by Varmus et al. [37] highlights that while integrated sports information systems can improve data management efficiency and support managerial decision-making, implementations frequently face systemic frictions including inefficiency, data duplication, and significant time demands. Their recommendations advocating for transparency, expanded automation, and strategic planning reinforce the argument that infrastructure preparedness must be treated as a socio-technical and governance capability rather than a purely back-end technology choice. Furthermore, the authors emphasize that the outputs of integrated systems should remain intuitive and usable for decision makers to ensure that adoption and correct use do not degrade even when the technical integration is in place.
To reconcile batch-oriented Big Data Analytics with low-latency stream processing, modern MIS data platforms commonly adopt a Lambda Architecture (batch layer, speed layer, and serving layer) or, where event streams are used as the system of record, a Kappa Architecture that advocates stream-first processing with replay and reprocessing to ensure consistency and auditability.
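As a minimal illustration of the Lambda pattern just described, the serving layer can be sketched as a merge of a precomputed batch view with incremental speed-layer counts. All names and data below are hypothetical assumptions for illustration, not part of any platform discussed in this study.

```python
from datetime import date

# Hypothetical batch view: aggregates precomputed by the nightly batch layer.
batch_view = {("sku-1", date(2026, 2, 18)): 120}
# Hypothetical speed view: incremental counts for events that arrived
# after the last batch run.
speed_view = {("sku-1", date(2026, 2, 18)): 7}

def serve(key):
    """Serving layer: reconcile the batch and speed views for one key."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)
```

After each batch run, the speed view is reset for the recomputed window; it is this replay and reprocessing of the authoritative batch (or event) log that makes the pattern consistent and auditable.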
A synthesis of the literature examined in Section 2.1, Section 2.2 and Section 2.3 identifies four primary deficiencies that collectively underpin the current study. First, the prevailing empirical framework regards BDA as a singular capability construct, thereby concealing the unique impacts of real-time processing latency and data infrastructure readiness on organizational performance [14,15]. Second, while numerous conceptual frameworks delineate BDA value-creation pathways, quantitative evidence demonstrating that infrastructure readiness independently facilitates—rather than merely correlates with—analytics outcomes is limited [2,4,35]. Third, the operational mechanisms that convert analytics outputs into workflow actions—such as pipeline unification, ingestion-level quality gates, and automation coupling—have been primarily recorded in single-industry case studies and are devoid of cross-sector comparative evidence [5,6]. Fourth, current quantitative studies rely mainly on survey-only designs, which reveal statistical associations but cannot explain how or why those associations arise in practice; closing this gap requires mixed-method triangulation [14,20]. The conceptual model and research design delineated in the subsequent sections are formulated to rectify each of these deficiencies.

2.4. Conceptual Model and Hypothesised Relationships

Synthesizing the extant literature allows organizational performance improvements in MIS to be conceptualized as a function of (i) BDA capability maturity, (ii) real-time processing capability maturity, and (iii) data infrastructure readiness. Existing empirical results indicate a positive correlation between analytics capabilities and performance, a relationship moderated by complementary resources and decision integration [14,15,16]. Real-time processing enhances this correlation by minimizing the delay before a decision can be acted on, enabling action within the window in which an intervention can be effective [9]. Lastly, data infrastructure preparedness is a facilitating factor because it enhances the quality, accessibility, and interoperability of data systems [8,35]. The proposed model rests on three complementary theoretical arguments. First, the resource-based view and its dynamic-capabilities extension posit that BDA maturity creates value through the integration of analytical assets with organizational routines that identify, exploit, and reconfigure opportunities [14,17]. Second, real-time processing capability is theoretically distinct from BDA because it concerns temporal proximity rather than analytical sophistication; its value comes from allowing action within the operational decision window, a mechanism documented in time-critical domains [9,30]. Third, data infrastructure readiness is an enabling condition, not a direct performance driver: it ensures that data are of high quality, interoperable across systems, and accessible only to those who need them, which makes both BDA and real-time processing work reliably [2,4]. The three constructs are theoretically complementary: BDA serves as the analytical engine, real-time processing ensures temporal responsiveness, and infrastructure readiness establishes the foundational trust layer.
The empirical model adopted in this study treats perceived organizational performance as the dependent construct, with BDA, real-time processing, and data infrastructure preparedness as independent predictors. The model provides an analytical structure to examine associations among the constructs and does not imply causal effects, consistent with cross-sectional MIS survey research [4,11].
These theoretical arguments produce three directional propositions that inform the empirical analysis. First, a higher level of BDA capability maturity is anticipated to correlate positively with perceived organizational performance, reflecting the resource-based argument that integrated analytical routines produce decision-relevant value [14,17]. Second, higher real-time processing capability is expected to independently strengthen this link by shortening the interval between signal and action in operational decision windows [9,30]. Third, data infrastructure readiness is expected to act as a necessary enabling condition: organizations with higher integration maturity, metadata governance, and quality assurance mechanisms are expected to see better returns on investments in both BDA and real-time processing [2,4]. These propositions collectively characterize analytics-driven MIS performance as the result of synergistic capabilities rather than a singular technological determinant.
Table 1 illustrates the systemic view of MIS modernization adopted in this study, mapping the interaction between data infrastructure, analytics services, decision design, and workflow execution. This architectural framework depicts the socio-technical integration of real-time processing as a cohesive progression through five distinct layers. At its foundation, the data layer manages the ingestion of heterogeneous logs and IoT streams, adhering to rigorous protocols for quality and accessibility. These data streams subsequently flow into the intelligence and decision layers. In these stages, predictive services are coupled with strategic constraints, including human-in-the-loop checkpoints and formalized decision rules, to convert raw signals into meaningful guidance. The resulting outputs are then operationalized in the execution layer via automated workflows and APIs. To maintain system resilience, a dedicated feedback layer provides continuous oversight through drift monitoring and retraining triggers. Finally, this entire lifecycle is governed by a framework of governance and controls, ensuring that real-time automation remains transparent and consistently aligned with the strategic decision rights of the CIO and the board.
A conceptual contrast of MIS capability profiles across traditional, BI-enhanced, and AI/ML-enabled configurations, as illustrated in Figure 1, serves to clarify the central trade-off motivating this study. In particular, the comparison highlights that greater predictive and prescriptive power, alongside faster decision cycles, requires stronger explainability practices and higher governance maturity to sustain trustworthy automation.
The logic of continuous monitoring within real-time decision support systems is captured in Figure 2, where monitoring triggers serve as system-level control points. As performance drifts below a predefined threshold, these triggers initiate human review or retraining sequences, thereby protecting decision quality and ensuring operational resilience.
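The trigger logic captured in Figure 2 can be sketched as a simple threshold policy: a tracked performance metric falling below one threshold routes the case to human review, and falling below a lower threshold initiates retraining. The function and threshold names below are illustrative assumptions, not the study's implementation.

```python
from enum import Enum

class Action(Enum):
    CONTINUE = "continue"          # performance within tolerance
    HUMAN_REVIEW = "human_review"  # drift detected: escalate to a reviewer
    RETRAIN = "retrain"            # severe drift: trigger retraining sequence

def check_drift(metric: float, review_threshold: float,
                retrain_threshold: float) -> Action:
    """Map a monitored performance metric to a control action.

    Assumes retrain_threshold < review_threshold, i.e. retraining is
    reserved for the more severe degradation.
    """
    if metric < retrain_threshold:
        return Action.RETRAIN
    if metric < review_threshold:
        return Action.HUMAN_REVIEW
    return Action.CONTINUE
```

In a deployed pipeline, such a check would typically run on each monitoring window, with the chosen action logged for auditability before any automated step executes.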

3. Materials and Methods

3.1. Research Design

We employed a mixed-methods design consisting of (i) a quantitative survey and (ii) semi-structured interviews to contextualize and explain the quantitative associations. We chose a mixed-methods design to combine breadth with explanatory depth. The survey evaluates the generalizability of the proposed capability-performance relationships across organizations, while the interviews elucidate the underlying mechanisms (e.g., governance and workflow integration), thereby enhancing interpretive validity through triangulation. This sequential explanatory design is well-established in Management Information Systems (MIS) research, employing quantitative survey data to test hypothesized relationships, followed by qualitative interviews to elucidate the mechanisms underlying the observed patterns [4,8]. A purely quantitative approach would identify statistical correlations without elucidating their practical manifestations; conversely, a purely qualitative approach would provide depth but lack broad applicability. The integration of mixed methods addresses both issues. The quantitative phase measured perceived maturity of BDA, real-time processing, and data infrastructure readiness, and examined their relationship with perceived organizational performance. The qualitative stage then reviewed implementation practices and constraints that influenced how analytics outputs were incorporated into operational decision-making.

3.2. Literature Scoping and Search Criteria

To ensure that the conceptual framing and survey measures were grounded in existing evidence, we conducted a scoping review of the relevant literature before finalizing the instruments. Searches were conducted in Scopus, Web of Science, IEEE Xplore, ACM Digital Library, ScienceDirect, and Google Scholar. The primary keyword phrases were: Big Data Analytics AND Management Information Systems; real-time processing OR stream processing AND decision support; data infrastructure OR data lake AND analytics; and business intelligence AND organizational performance. The search was limited to peer-reviewed journal articles and high-impact conference papers published between 2011 and 2025, in line with the period in which BDA and streaming architectures became mainstream in IS and analytics research [2,7].

3.3. Quantitative Phase Survey Sampling and Measures

The survey was conducted among professionals in MIS and senior analysts in organizations where active BDA and/or real-time analytics programs were reported. Targeted professional networks and industry associations were used as the recruitment strategy, which led to a final sample of 150 valid responses. The respondent profile was strategically diverse, comprising MIS Managers (32%), Data Engineers and Architects (28%), Senior Business Analysts (25%), and IT Operations Leads (15%), ensuring a comprehensive perspective on both technical infrastructure and managerial outcomes. The sample included participants located in North America, Europe, and Asia, as these regions are characterized by a relatively large adoption of enterprise analytics tooling [33,43]. Furthermore, the three target industries, namely retail (n = 50), healthcare (n = 50), and financial services (n = 50), were specifically selected to validate the model’s robustness across diverse system dynamics: high-velocity transaction environments (Finance), safety-critical and error-intolerant systems (Healthcare), and rapid-response supply chain systems (Retail).
All constructs were measured on five-point Likert scales (1 = strongly disagree to 5 = strongly agree): (i) BDA capability maturity (e.g., breadth, availability of skills, and practice of model deployment), (ii) real-time processing capability maturity (e.g., streaming ingestion, latency controls, and real-time alerting), (iii) data infrastructure maturity (e.g., integration, governance, and data quality controls), and (iv) perceived organizational performance outcomes (e.g., decision timeliness, error reduction, cost reduction). Perceived organizational performance was measured using a multi-item scale because objective performance indicators were not consistently available or comparable across participating organizations [44]. The choice of constructs was based on previous BDA capability and performance studies [14,15,20]. Specifically, the measurement items were adapted from validated prior scales rather than newly developed. BDA capability maturity was operationalized using four items adapted from Wamba et al. [15] and Mikalef et al. [14], capturing breadth of analytics deployment, skill availability, model lifecycle management, and cross-functional analytics integration (e.g., “Our organization systematically deploys predictive and prescriptive analytics across multiple business functions”). Real-time processing capability was measured with four items from Yadranjiaghdam et al. [9] and Jabbar et al. [28], covering streaming ingestion maturity, latency controls, real-time alerting, and event-time correctness (e.g., “Our systems can ingest and process streaming data with sub-minute latency”). Data infrastructure readiness comprised four items drawn from Chen et al. [2] and Grover et al. [4], reflecting data integration maturity, governance controls, quality assurance mechanisms, and metadata management (e.g., “Our organization maintains a unified data catalogue with documented lineage and access controls”).
Perceived organizational performance was measured with five items adapted from Wamba et al. [15] and Suoniemi et al. [20], covering decision timeliness, error reduction, operational cost efficiency, revenue contribution, and system reliability (e.g., “Analytics-supported decisions in our organization are consistently more timely than they were two years ago”). Respondents represented multiple organizations within each industry; the study does not rely on a single-firm sample. Eligibility required respondents to confirm that their organization had an active analytics initiative (BDA and/or real-time processing) and that they were directly involved in related MIS, data, or analytics activities.
Table 2 reports internal consistency and descriptive statistics for each construct. All Cronbach’s alpha values exceed the conventional 0.70 threshold, indicating acceptable reliability [14,15]. The 150 respondents represented approximately 85 distinct organizations; some organizations contributed two or three respondents holding different functional roles (e.g., MIS manager and data engineer from the same firm). To determine if intra-organizational clustering affected the results, we recalibrated the regression model using cluster-robust standard errors at the organizational level; the significance and direction of all coefficients remained fundamentally unaltered.
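For reference, the Cronbach's alpha values reported in Table 2 can be computed from a respondents-by-items score matrix using the standard formula. The implementation below is a generic sketch; the item responses used in its tests are synthetic, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of row totals
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

A construct whose items move in lockstep yields alpha near 1, while weakly correlated items pull alpha down toward (and below) the 0.70 acceptability threshold cited above.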

3.4. Qualitative Phase: Case-Based Interviews

To ensure methodological rigor and enhance the practical realism of the research, the qualitative phase was designed around twelve semi-structured interviews focusing on sector-specific implementation practices, with each session lasting between 45 and 60 min. This sample size was selected to facilitate a deep exploration of implementation nuances that a large-scale survey might overlook. The primary stakeholders included MIS managers, data engineers, analytics leaders, and business process owners. To allow for the triangulation of shared themes while preserving sector-specific dynamics, the sample was equally distributed across the three target industries, resulting in four interviews per sector [23,25].
Comparisons across the cases were enabled through a standardized interview protocol, ensuring consistency and reliability across all sessions. This protocol was structured around four core investigative pillars: (i) the configuration of data pipelines between source systems and automated actions; (ii) the identification of systemic sources of latency and data quality degradation; (iii) the operationalization of real-time alerts via ticketing systems, playbooks, and automated controls; and (iv) the assessment of data lineage, access control, and model performance monitoring. These focal points directly address the common implementation challenges identified in contemporary analytics infrastructure and governance research [6,12,35].

3.5. Analysis and Statistical Assumption Tests

This study employed a dual-phase analytical approach to process the data collected from the 150 survey respondents and 12 stakeholder interviews.

3.5.1. Statistical Specification

To examine the associations between the core MIS capabilities and perceived organizational performance, we estimated an Ordinary Least Squares (OLS) regression model. OLS was chosen for its simplicity and interpretability, in line with previous MIS capability–performance studies that use composite-score predictors rather than reflective latent constructs requiring structural equation modeling [14,15]. Because each construct was operationalized as an equally weighted composite of its items, a common approach when the primary objective is to test directional associations rather than to model measurement error, OLS provides unbiased coefficient estimates under the standard Gauss–Markov assumptions. We acknowledge that SEM or PLS-SEM would be preferable if the objective were to model latent measurement structures and indirect effects simultaneously; however, with a sample of 150 and four composite variables, OLS offers a conservative and transparent test of the hypothesized associations. The estimated model is:
Y = β0 + β1(BDA) + β2(RTP) + β3(DIR) + ε
where
  • Y = Perceived Organizational Performance;
  • BDA = Big Data Analytics Capability Maturity;
  • RTP = Real-Time Processing Capability Maturity;
  • DIR = Data Infrastructure Readiness;
  • ε = Stochastic error term.
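As an illustration of the specification above, the model can be estimated in closed form with ordinary least squares. The sketch below uses simulated composite scores; the variable values are synthetic, not the study data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 150  # matches the survey sample size

# Simulated 1-5 composite maturity scores (illustrative values only)
bda = rng.uniform(1, 5, n)
rtp = rng.uniform(1, 5, n)
dir_ = rng.uniform(1, 5, n)
y = 0.5 + 0.41 * bda + 0.34 * rtp + 0.24 * dir_ + rng.normal(0, 0.5, n)

# Design matrix with intercept column: Y = b0 + b1*BDA + b2*RTP + b3*DIR + e
X = np.column_stack([np.ones(n), bda, rtp, dir_])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
```

With the simulated coefficients above, the least-squares estimates recover the generating slopes up to sampling noise, mirroring how the composite-score model is fit in practice.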

3.5.2. Quantitative Analysis and Diagnostic Tests

Descriptive statistics (Table 3 and Table 4) and an ordinary least squares (OLS) regression model were used to analyze the survey data. Prior to estimation, we assessed standard diagnostics. Multicollinearity was examined using variance inflation factors (VIF); all VIFs were below 2.0. Residual normality was assessed using Q–Q plots and the Shapiro–Wilk test, and no material departures were observed. To evaluate potential common method bias, we conducted Harman’s single-factor test; the first factor explained 36.4% of the variance, below the conventional 50% threshold. Together, these diagnostics indicate that OLS assumptions are reasonably satisfied for inference [4,11]. As additional robustness checks, we assessed heteroskedasticity (Breusch–Pagan) and influence (Cook’s distance/leverage); the results were substantively unchanged when using heteroskedasticity-robust (HC3) standard errors.
We note that the R2 value of 0.72 is notably high for cross-sectional survey research in MIS, where values of 0.30–0.50 are more typical [14]. While VIFs remained below 2.0 and the Harman single-factor test indicated that common method variance alone does not account for the majority of variance, the high R2 may partly reflect shared method variance arising from same-source, same-time measurement of predictors and outcomes. As a further precaution, we examined a marker-variable proxy by partialling out the lowest-correlated item (a data-infrastructure item with the weakest bivariate correlation to perceived performance, r = 0.31). After partialling, the adjusted R2 decreased marginally from 0.69 to 0.67, and all three predictor coefficients retained their direction, magnitude, and significance (BDA: β = 0.39, p = 0.002; RTP: β = 0.32, p = 0.009; DIR: β = 0.22, p = 0.012). This suggests that while shared method variance may modestly inflate the explained variance, it does not account for the observed associations. We therefore interpret the explained variance conservatively and encourage future research to validate these associations with multi-informant designs and objective performance metrics.
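The diagnostics reported above can be computed without specialized packages. The sketch below implements VIF and HC3 robust standard errors directly, applied to synthetic data since the survey responses are not reproduced here.

```python
import numpy as np

def vif(preds: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each predictor column (no intercept column)."""
    k = preds.shape[1]
    out = np.empty(k)
    for j in range(k):
        y = preds[:, j]
        Z = np.column_stack([np.ones(len(preds)), np.delete(preds, j, axis=1)])
        b, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2_j = 1 - (y - Z @ b).var() / y.var()   # R^2 of predictor j on the others
        out[j] = 1.0 / (1.0 - r2_j)
    return out

def hc3_se(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """HC3 heteroskedasticity-robust standard errors (X includes an intercept)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    e = y - X @ (XtX_inv @ X.T @ y)               # OLS residuals
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverage (hat-matrix diagonal)
    meat = X.T @ (X * (e**2 / (1 - h) ** 2)[:, None])
    return np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

rng = np.random.default_rng(7)
preds = rng.normal(size=(150, 3)) + 0.4 * rng.normal(size=(150, 1))  # mildly correlated
y = preds @ np.array([0.4, 0.3, 0.2]) + rng.normal(0, 0.5, 150)
vifs = vif(preds)
ses = hc3_se(np.column_stack([np.ones(150), preds]), y)
```

For the mildly correlated predictors simulated here, the VIFs stay near 1, analogous to the below-2.0 values reported for the survey constructs.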

3.5.3. Qualitative Thematic Analysis

The qualitative interview data were transcribed and analyzed using thematic analysis. Initial codes were derived from the existing literature, focusing on data quality controls, pipeline integration, governance mechanisms, and automation coupling, and were refined as new themes emerged [5,13]. The final themes were summarized in a cross-case matrix and related to the quantitative results to formulate managerial recommendations. Coding proceeded in two cycles using an iteratively refined codebook: an initial codebook was derived from prior literature and then expanded as new themes emerged. A subset of transcripts was re-coded after refinement to check consistency, and coding continued until no substantively new themes emerged.

4. Results

Table 3 and Table 4 summarize the self-reported operational and business effects of BDA and real-time processing in the participating organizations. In line with previous findings that analytics can enhance efficiency and decision quality when integrated into organizational routines [15,16], respondents reported improvements across all three outcome areas. The reported performance gains were most pronounced in organizations with high maturity in real-time processing (e.g., streaming ingestion and low-latency alerting) and robust data infrastructure (e.g., granular access controls and cross-system integration).
According to Table 3, retail respondents reported the largest improvement in inventory optimization (30%), consistent with the priority placed on fast demand sensing and replenishment in customer-facing operations [10,28]. The 25% improvement in processing speed reported by financial services respondents aligns with the sector’s dependence on real-time anomaly detection [31]. Healthcare respondents reported a 15% reduction in hospital readmissions, attributed to the increased use of real-time analytics in clinical decision support and resource allocation [13,25].
Because these outcomes are self-reported, they should be interpreted as directional estimates of perceived impact rather than as audited performance measures.

4.1. Qualitative Themes from Semi-Structured Interviews

The qualitative interview stage was used to clarify the mechanisms underlying the reported performance improvements and to identify recurring implementation practices across organizations. Consistent with a dynamic capabilities view of analytics value creation [14,20], interviewees emphasized that performance gains depended less on isolated tools and more on coordinated changes in infrastructure, governance, and process integration. Three cross-case themes were particularly salient: (i) integrating streaming and batch pipelines to prevent conflicting sources of truth; (ii) embedding data quality, lineage, and access controls into ingestion and transformation; and (iii) coupling analytics outputs to operational processes through playbooks and system integrations.
Table 5 summarizes these themes, together with anonymized illustrative excerpts. Retail stakeholders noted that near real-time inventory indicators improved replenishment decisions only when point-of-sale and warehouse feeds were standardized and verified. Financial services stakeholders emphasized event-time correctness, auditability, and model monitoring, in line with regulatory and risk requirements [12,31]. Healthcare stakeholders reported that operational analytics were adopted more readily when governance reduced access friction without compromising privacy, enabling timely staffing and utilization decisions [13,25].

4.2. Regression Results

To examine the association between core MIS capabilities and perceived organizational performance, we estimated an OLS regression model with performance as the dependent variable and BDA, real-time processing, and data infrastructure readiness as predictors. All three predictors were statistically significant, and the model explains a substantial share of variance in perceived organizational performance (R2 = 0.72; Adjusted R2 = 0.69). This pattern is consistent with prior capability-based accounts linking analytics resources and routines to performance benefits [15,17]. Table 8 shows that BDA capability is positively associated with perceived organizational performance (β = 0.41, p = 0.001). Real-time processing is also positively associated with performance (β = 0.34, p = 0.007), in line with prior work suggesting that reduced latency improves operational decision quality when supported by appropriate governance and process design [9,30]. Data infrastructure readiness (β = 0.24, p = 0.008) was likewise significant, indicating that integration, quality controls, and controlled access are preconditions for reliable analytics and automation [6,35].
Table 6 presents the inter-construct correlation matrix with Cronbach’s alpha values reported on the diagonal. All bivariate correlations are positive and statistically significant (p < 0.01), consistent with the theorized complementarity among capabilities. The moderate-to-high correlations (r = 0.52–0.68) suggest related but empirically distinguishable constructs; VIF values below 2.0 confirm that multicollinearity does not pose a concern for the regression estimates.
To assess the relative contribution of each predictor, a series of nested models was estimated (Table 7). Model 1 included only BDA (R2 = 0.46), Model 2 added real-time processing (ΔR2 = 0.17, p < 0.001), and Model 3 added data infrastructure readiness (ΔR2 = 0.09, p < 0.01). The incremental variance explained by each predictor supports treating BDA, real-time processing, and infrastructure readiness as complementary rather than redundant capabilities.
Table 7. Nested Model Comparison.

Model           | R2   | Adj. R2 | ΔR2      | F-Change
M1: BDA only    | 0.46 | 0.46    | —        | —
M2: +RTP        | 0.63 | 0.62    | 0.17 *** | 67.54 ***
M3: +DIR (full) | 0.72 | 0.69    | 0.09 **  | 46.93 **

Note. *** p < 0.001, ** p < 0.01. ΔR2 = change in R2 relative to the prior model; “—” indicates not applicable.
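The F-change statistics in Table 7 follow directly from the reported R2 values. A minimal check, assuming n = 150 and one predictor added at each step:

```python
def f_change(r2_reduced: float, r2_full: float, n: int, k_full: int, q: int = 1) -> float:
    """F statistic for the increase in R-squared when q predictors are added."""
    return ((r2_full - r2_reduced) / q) / ((1 - r2_full) / (n - k_full - 1))

f_m2 = f_change(0.46, 0.63, n=150, k_full=2)  # M1 -> M2 (adding RTP)
f_m3 = f_change(0.63, 0.72, n=150, k_full=3)  # M2 -> M3 (adding DIR)
# Both reproduce the table values (67.54 and 46.93) to two decimals.
```

That the reported F-change values are internally consistent with the R2 column supports the nested-model comparison as stated.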
Table 8. Regression Analysis Explaining Perceived Organizational Performance.

Variables                 | Coefficient | Standard Error | t-Value | p-Value
Big Data Analytics (β1)   | 0.41        | 0.12           | 3.75    | 0.001
Real-Time Processing (β2) | 0.34        | 0.14           | 2.71    | 0.007
Data Infrastructure (β3)  | 0.24        | 0.10           | 2.70    | 0.008
R2                        | 0.72
Adjusted R2               | 0.69

Note. Standardized coefficients are reported.

5. Discussion

5.1. Interpretation of the Capability-Performance Relationships

The results provide evidence consistent with the complementary role of BDA maturity, real-time processing capability, and infrastructure readiness in supporting MIS performance. BDA shows the strongest association with perceived organizational performance. This is theoretically plausible because BDA capability extends beyond analytical techniques to include translation practices that embed analytics into routines and continuously refine analytical assets [8,17]. This interpretation is consistent with the interview insights. Stakeholders reported improved performance only after analytics outputs were integrated into decision routines and workflows, in line with capability-based accounts of analytics-enabled transformation [15,16].
Real-time processing is also positively associated with performance. This supports the view that latency reduction matters most when decisions are time-sensitive and operations depend on current situational awareness rather than historical averages. Previous evidence from platform and marketing contexts indicates that feedback loops can amplify performance improvements by influencing user behavior and subsequent demand outcomes [10].
Building on these insights, the study offers supplementary evidence across diverse system environments. In fast-moving retail systems, real-time inventory signals tighten the feedback loop around replenishment timing. In financial services, the emphasis is on sub-minute anomaly detection that safeguards transactions and controls risk [31]. In error-intolerant, safety-critical systems such as healthcare, shortening the interval between signal detection and response can improve system resilience.
Even with strong BDA and real-time processing capabilities, infrastructure readiness remains important. This supports the view that infrastructure is not merely a back-end commodity but an enabler of scalable analytics, monitoring, and decision governance [2,35]. Interviewees reported that shared definitions and a “single source of truth” across reporting and operating systems reduced internal conflict and accelerated decision-making, consistent with the argument that analytics value depends on cross-functional coordination and on how well standards and governance are implemented [7,29]. Overall, performance gains are most likely when analytics outputs are embedded in operational routines. Governance and infrastructure that support scalable experimentation and monitoring further reinforce these gains. The salience of these mechanisms may vary by context, particularly in high-velocity, highly regulated, or error-intolerant environments. Given the cross-sectional design and perceptual measurement, these interpretations should be read as associative patterns and plausible mechanisms rather than causal effects.

5.2. Forward-Looking Implications: AI/ML and Generative AI in MIS

While this study focuses on BDA, real-time processing, and infrastructure readiness, MIS modernization trends increasingly emphasize AI/ML and, more recently, Generative AI. Although Generative AI was not directly measured in our survey or interviews, we discuss it here as a forward-looking implication derived from the capability and governance mechanisms identified in this study. Organizations are incorporating predictive models into decision processes for forecasting, detection, and optimization, and are exploring LLMs to query and summarize information and to enable natural-language decision support [36]. These applications could make decision support faster and more accessible, but they also raise the bar for data quality, monitoring, and governance, especially when outputs are tightly coupled to workflows and automation [11,30].
Generative AI could place new demands on infrastructure in three ways. First, it expands the data estate beyond tabular data to documents, transcripts, and multimodal objects, making robust metadata, lineage, and access control even more important for preventing inappropriate disclosure of sensitive information [12,13]. Second, it may elevate model monitoring and drift control to operational concerns, since changes in user behavior, policy, or data distributions can degrade model effectiveness; this indicates a need for ongoing assessment and controlled retraining [30,35]. Figure 2 shows how predefined monitoring triggers can help keep decision automation working over time. Third, it will likely increase the emphasis on high-quality retrieval and context management in decision support; for compliance and troubleshooting, this typically requires curated knowledge stores, vector indexing methods, and auditable prompt/response logging [33,36]. These points are presented as implications and avenues for future research rather than as empirically validated findings from the current dataset.
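As one concrete pattern for such monitoring triggers, a distribution-shift score such as the population stability index (PSI) can gate retraining. The sketch below is a generic illustration; the 0.2 threshold is a common rule of thumb, not a value from this study.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a current sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover the full support
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5000)   # model scores at deployment time
drifted = rng.normal(0.8, 1.0, 5000)    # scores after a behavioral shift
RETRAIN_THRESHOLD = 0.2                 # common rule-of-thumb trigger
needs_retraining = psi(baseline, drifted) > RETRAIN_THRESHOLD
```

In production, such a check would run on a schedule against each monitored feature or score, with the boolean feeding the predefined trigger that initiates review or retraining.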

5.3. Managerial Implication and Actionable Recommendations

The results give MIS leaders actionable guidance on implementing BDA and real-time decision support. First, managers should consolidate separate on-premises reporting databases into data lakes or lakehouses that can scale on demand and handle both batch and streaming workloads under the same rules. This consolidation reduces KPI inconsistency across teams and enables a unified pipeline in which streaming signals are interpreted against historical baselines and shared definitions, consistent with the interview evidence [2,9].
Second, leaders should implement data-quality and validation controls at ingestion rather than relying on downstream cleaning. This includes schema validation, anomaly detection on upstream feeds, and automated reconciliation between streaming and batch aggregates. Data-observability dashboards with clear ownership and escalation paths further strengthen control. Together, these checks increase confidence and reduce the risk of automating decisions based on untrustworthy inputs, a risk commonly highlighted in analytics-transformation research [5,13].
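To make the ingestion-level controls concrete, the following sketch shows schema validation and a stream-versus-batch reconciliation check. The field names and the 1% tolerance are hypothetical, chosen only for illustration.

```python
# Hypothetical ingestion schema; field names and tolerance are illustrative.
SCHEMA = {"store_id": str, "sku": str, "qty": int}
RECON_TOLERANCE = 0.01  # flag >1% divergence between stream and batch totals

def validate_record(record: dict) -> list:
    """Return a list of schema violations for one ingested record."""
    errors = []
    for field, ftype in SCHEMA.items():
        if field not in record:
            errors.append("missing field: " + field)
        elif not isinstance(record[field], ftype):
            errors.append("bad type for " + field)
    return errors

def reconcile(stream_total: float, batch_total: float) -> bool:
    """Automated reconciliation between streaming and batch aggregates."""
    if batch_total == 0:
        return stream_total == 0
    return abs(stream_total - batch_total) / abs(batch_total) <= RECON_TOLERANCE

good = {"store_id": "S1", "sku": "A-100", "qty": 3}
bad = {"store_id": "S1", "qty": "3"}  # missing sku, and qty has the wrong type
```

Records failing either check would be quarantined and routed to the owning team rather than silently passed downstream, which is the escalation pattern the dashboards above are meant to surface.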
Third, MIS leaders need to couple analytics with operational processes in order to translate analytics into performance outcomes. In practice, this means defining decision playbooks and wiring alerts into ticketing systems, approval flows, or automated controls rather than passive dashboards. For example, fraud alerts would directly populate investigation queues with standardized evidence packets, and inventory forecasts would trigger replenishment recommendations coupled to procurement or warehouse systems. Such workflow coupling reflects the automation mechanisms observed when decision outputs become actionable work objects [16,31].
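A minimal illustration of such workflow coupling follows, with hypothetical alert types, queue names, and evidence fields; none of these identifiers come from the studied organizations.

```python
# Hypothetical decision playbooks mapping alert types to work queues.
PLAYBOOKS = {
    "fraud_score_high": {"queue": "fraud-investigation",
                         "evidence": ["txn_id", "device_id", "history"]},
    "stockout_risk": {"queue": "replenishment",
                      "evidence": ["forecast", "on_hand", "lead_time"]},
}

def route_alert(alert: dict) -> dict:
    """Turn an analytics alert into a structured work item, not a passive dashboard entry."""
    playbook = PLAYBOOKS.get(alert.get("type"))
    if playbook is None:
        return {"queue": "triage", "alert": alert, "evidence": {}}
    packet = {k: alert.get(k) for k in playbook["evidence"]}  # standardized evidence packet
    return {"queue": playbook["queue"], "alert": alert, "evidence": packet}

ticket = route_alert({"type": "fraud_score_high",
                      "txn_id": "T-123", "device_id": "D-9", "history": []})
```

The key design choice is that every alert type has an explicit destination and evidence contract, so unrecognized alerts fall back to a human triage queue rather than being dropped.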
Lastly, talent and governance should be treated as investments of equal weight. The value of analytics depends on cross-functional cooperation and on market-oriented capabilities that convert data into customer and operational outcomes [15,20]. Organizations should ensure clear ownership of data products, define decision rights for automated actions, and invest in training that aligns technical teams and business stakeholders on shared metrics and responsibilities. This is especially essential as organizations adopt Generative AI interfaces, which can widen the scope for misuse unless governance frameworks advance in step [12].

5.4. Implications for Theory and Research

From a research perspective, the results align with capability-based accounts in which performance gains arise from complementarities among analytics, infrastructure, and organizational routines rather than from any single technology. This is consistent with empirical literature finding that the value of analytics depends on complementary resources and governance systems [14,15]. Relative to Mikalef et al. [14], who demonstrated that BDA capability and dynamic capabilities jointly predict firm performance, this study extends the model by explicitly incorporating real-time processing and infrastructure readiness as separate predictors, showing that each contributes incremental explanatory power (Table 7). In contrast to Grover et al. [4], whose framework delineates value creation pathways for BDA at a conceptual level, this study presents mixed-method evidence that connects specific implementation mechanisms (unified pipelines, ingestion-level quality controls, and workflow coupling) to the performance patterns identified in the survey data. The results also underscore the need to distinguish infrastructure readiness from analytics capability in empirical models: infrastructure shapes the conditions that enable scalable experimentation, monitoring, and the integration of analytics into decision-making [2,7]. Future work can extend this study by tracing longitudinal adoption paths and by validating perceived performance measures against objective operational metrics [4,41].
Table 9 presents a systematic comparison with key earlier research to situate the current findings in the broader BDA capability–performance literature. By decomposing analytics capabilities into three complementary aspects and offering mixed-method evidence that purely quantitative designs cannot provide, the comparison shows how this study builds on previous work.
As Table 9 illustrates, the higher R2 should be interpreted cautiously given the shared method variance discussion in Section 3.5.2; nonetheless, the nested model analysis (Table 7) confirms that each predictor contributes significant incremental variance, supporting the complementarity thesis rather than construct redundancy.
Our dependent variable is based on respondents’ perceived organizational performance, which may introduce common method bias and social desirability effects. While perceptual measures are widely used in MIS survey research when objective performance data are not consistently available across firms, they can still inflate observed relationships because predictors and outcomes are collected from the same informant at a single point in time [45,46]. To mitigate this risk, the survey emphasized anonymity and focused items on organization-level outcomes rather than individual behaviors, and the qualitative interviews were used to triangulate and contextualize the quantitative patterns. Future research could strengthen external validity by collecting objective operational metrics and by using multi-informant or longitudinal designs to further reduce potential bias.

6. Limitations

Several limitations should be considered when interpreting the findings. First, the quantitative data are self-reported by practitioners, which could introduce common-method bias or optimistic reporting, especially for revenue- or retention-related outcomes that may not be measured consistently across firms. In addition, because predictors and outcomes were collected from the same respondent at a single point in time, common-method bias and social desirability may inflate the observed associations [46]. While we mitigated this risk by ensuring anonymity, using organization-level wording, and triangulating with interview evidence, it cannot be fully ruled out and should be considered when interpreting effect magnitudes. Second, the sample focuses on organizations in North America, Europe, and Asia; adoption trajectories and infrastructure constraints in emerging markets could differ because of regulatory, talent, and investment factors [41]. Third, the survey is cross-sectional and therefore does not support causal claims. Although the mixed-methods design strengthens interpretive plausibility, future work should validate these relationships using longitudinal designs and objective operational metrics where available [4]. Lastly, the qualitative stage rests on a limited number of case-focused interviews; while suitable for mechanism discovery, broader qualitative sampling may reveal additional sector-specific constraints and alternative implementation paths.

7. Conclusions and Future Works

This paper has examined how BDA maturity, real-time processing capability, and data infrastructure readiness relate to decision support and workflow automation in MIS. Using a mixed-methods design, the survey results indicate that organizations reporting greater capability maturity also report higher perceived organizational performance, and that BDA, real-time processing, and infrastructure readiness are each associated with perceived performance (R2 = 0.72). Given the cross-sectional design, these findings are associative and should not be interpreted as causal. The interview evidence contextualizes the observed relationships by highlighting implementation mechanisms, such as unified pipelines, ingestion-level data quality control, and workflow coupling, that may help translate analytics outputs into operational action.
The systemic contribution of this study is a conceptually grounded framework based on mixed-method evidence; it is not tested as a configurational model. The observed relationships should be further examined through longitudinal designs and objective operational measures (e.g., end-to-end latency, cycle times, error rates) to complement perceptual measures. Future research may also examine how emerging capabilities, particularly Generative AI interfaces and autonomous decision workflows, transform infrastructure requirements, governance practices, and accountability standards across sectors [12,36]. Finally, sector-specific research that accounts for contextual factors such as regulatory intensity, data sensitivity, and platform-based customer engagement flows can further refine the proposed playbook; this includes livestream commerce, where data velocity and user behavior interact through complex feedback loops [34,43,47].

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the non-interventional nature of the research involving professional stakeholders.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The author thanks the industry professionals who participated in the surveys and interviews for their time and valuable insights.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
API: Application Programming Interface
BDA: Big Data Analytics
BI: Business Intelligence
BPM: Business Process Management
CAS: Complex Adaptive Systems
CIO: Chief Information Officer
CRM: Customer Relationship Management
ERP: Enterprise Resource Planning
HRIS: Human Resource Information System
IoT: Internet-of-Things
IS: Information Systems
KPI: Key Performance Indicator
LLM: Large Language Model
MIS: Management Information Systems
ML: Machine Learning
OLS: Ordinary Least Squares
RPA: Robotic Process Automation
VIF: Variance Inflation Factors
WMS: Warehouse Management System
CMB: Common Method Bias
DIR: Data Infrastructure Readiness
PLS-SEM: Partial Least Squares Structural Equation Modeling
POP: Perceived Organizational Performance
RTP: Real-Time Processing
SEM: Structural Equation Modeling

References

  1. Agarwal, R.; Dhar, V. Editorial—Big data, data science, and analytics: The opportunity and challenge for IS research. Inf. Syst. Res. 2014, 25, 443–448. [Google Scholar] [CrossRef]
  2. Chen, H.; Chiang, R.H.L.; Storey, V.C. Business intelligence and analytics: From big data to big impact. MIS Q. 2012, 36, 1165–1188. [Google Scholar] [CrossRef]
  3. Chiang, R.H.L.; Grover, V.; Liang, T.-P.; Zhang, D. Special issue: Strategic value of big data and business analytics. J. Manag. Inf. Syst. 2018, 35, 383–387. [Google Scholar] [CrossRef]
  4. Grover, V.; Chiang, R.H.L.; Liang, T.-P. Creating strategic business value from big data analytics: A research framework. J. Manag. Inf. Syst. 2018, 35, 388–423. [Google Scholar] [CrossRef]
  5. Abbasi, A.; Sarker, S.; Chiang, R.H.L. Big data research in information systems: Toward an inclusive research agenda. J. Assoc. Inf. Syst. 2016, 17, 3. [Google Scholar] [CrossRef]
  6. Chunpir, H.I.; Rathmann, T. An empirical evidence of barriers in a big data infrastructure. Interact. Comput. 2018, 30, 507–523. [Google Scholar] [CrossRef]
  7. Baesens, B.; Bapna, R.; Marsden, J.R. Transformational issues of big data and analytics in information systems: A research agenda. MIS Q. 2016, 40, 807–818. [Google Scholar] [CrossRef]
  8. Müller, O.; Fay, M.; vom Brocke, J. Utilizing big data analytics for information systems research: Challenges, promises and guidelines. Eur. J. Inf. Syst. 2016, 25, 289–302. [Google Scholar] [CrossRef]
  9. Yadranjiaghdam, B.; Pool, N.; Tabrizi, N. A survey on real-time big data analytics: Applications and tools. In Proceedings of the 2016 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 15–17 December 2016; IEEE: New York, NY, USA, 2016; pp. 404–409. [Google Scholar] [CrossRef]
  10. Kitchens, B.; Dobolyi, D.; Li, J.; Abbasi, A. Advanced customer analytics: Strategic value through integration of relationship-oriented big data. J. Manag. Inf. Syst. 2018, 35, 540–574. [Google Scholar] [CrossRef]
  11. Shmueli, G.; Koppius, O.R. Predictive analytics in information systems research. MIS Q. 2011, 35, 553–572. [Google Scholar] [CrossRef]
  12. Krämer, J.; Shekhar, S. Regulating digital platform ecosystems through data sharing and data siloing: Consequences for innovation and welfare. MIS Q. 2025, 49, 123–154. [Google Scholar] [CrossRef]
  13. Tremblay, M.C.; Kohli, R.; Rivero, C. Data is the new protein: How the commonwealth of Virginia built digital resilience muscle and rebounded from opioid and COVID shocks. MIS Q. 2023, 47, 423–450. [Google Scholar] [CrossRef]
  14. Mikalef, P.; Boura, M.; Lekakos, G.; Krogstie, J. Big data analytics and firm performance: Findings from a mixed-method approach. J. Bus. Res. 2019, 98, 261–276. [Google Scholar] [CrossRef]
  15. Wamba, S.F.; Gunasekaran, A.; Akter, S.; Ren, S.J.; Dubey, R.; Childe, S.J. Big data analytics and firm performance: Effects of dynamic capabilities. J. Bus. Res. 2017, 70, 356–365. [Google Scholar] [CrossRef]
  16. Lehrer, C.; Wieneke, A.; Jung, R. How big data analytics enables service innovation: A theoretical framework and empirical evidence. J. Manag. Inf. Syst. 2018, 35, 424–452. [Google Scholar] [CrossRef]
  17. Maroufkhani, P.; Wagner, R.; Wan Ismail, W.K.; Baroto, M.B.; Nourani, M. Big Data Analytics and Firm Performance: A Systematic Review. Information 2019, 10, 226. [Google Scholar] [CrossRef]
  18. Dubey, R.; Bryde, D.J.; Dwivedi, Y.K.; Graham, G.; Foropon, C. Impact of artificial intelligence-driven big data analytics culture on agility and resilience in humanitarian supply chain: A practice-based view. Int. J. Prod. Econ. 2022, 250, 108618. [Google Scholar] [CrossRef]
19. Iftikhar, A.; Purvis, L.; Giannoccaro, I.; Wang, Y. The impact of supply chain complexities on supply chain resilience: The mediating effect of big data analytics. Prod. Plan. Control 2023, 34, 1562–1582.
20. Suoniemi, S.; Meyer-Waarden, L.; Munzel, A.; Zablah, A.R.; Straub, D.W. Big data and firm performance: The roles of market-directed capabilities and business strategy. Inf. Manag. 2020, 57, 103365.
21. Chen, D.Q.; Preston, D.S.; Swink, M. How the use of big data analytics affects value creation in supply chain management. J. Manag. Inf. Syst. 2015, 32, 4–39.
22. Duan, L.; Xiong, Y. Big data analytics and business analytics. J. Manag. Anal. 2015, 2, 1–21.
23. Gunasekaran, A.; Yusuf, Y.Y.; Adeleye, E.O.; Papadopoulos, T. Agile manufacturing practices: The role of big data and business analytics with multiple case studies. Int. J. Prod. Res. 2018, 56, 385–397.
24. Oncioiu, I.; Bunget, O.C.; Türkeș, M.C.; Căpușneanu, S.; Topor, D.I.; Tamaș, A.S.; Rakoș, I.-S.; Hint, M.Ș. The Impact of Big Data Analytics on Company Performance in Supply Chain Management. Sustainability 2019, 11, 4864.
25. Raghupathi, W.; Raghupathi, V. Big data analytics in healthcare: Promise and potential. Health Inf. Sci. Syst. 2014, 2, 3.
26. Pejić Bach, M.; Mihajlović, I.; Stanković, M.; Khawaja, S.; Qureshi, F.H. Determinants of intention to use of hospital information systems among healthcare professionals. Systems 2024, 12, 235.
27. Biswas, T.R.; Hossain, M.Z.; Comite, P.U. Role of management information systems in enhancing decision-making in large-scale organizations. Pac. J. Bus. Innov. Strateg. 2024, 1, 5–18.
28. Jabbar, A.; Akhtar, P.; Dani, S. Real-time big data processing for instantaneous marketing decisions: A problematization approach. Ind. Mark. Manag. 2020, 90, 558–569.
29. Schildt, H. Big data and organizational design: The brave new world of algorithmic management and computer augmented transparency. Innov. Manag. Policy Pract. 2017, 19, 23–30.
30. Gupta, P.; Sharma, A.; Jindal, R. Scalable machine-learning algorithms for big data analytics: A comprehensive review. WIREs Data Min. Knowl. Discov. 2016, 6, 194–214.
31. Ketter, W.; Schroer, K.; Valogianni, K. Information systems research for smart sustainable mobility: A framework and call for action. Inf. Syst. Res. 2023, 34, 1045–1065.
32. Vaičiutė, K.; Katinienė, A. Improving the information systems of a warehouse as a critical component of logistics: The case of Lithuanian logistics companies. Systems 2025, 13, 186.
33. Huynh, M.T.; Nippa, M.; Aichner, T. Big data analytics capabilities: Patchwork or progress? A systematic review of the status quo and implications for future research. Technol. Forecast. Soc. Change 2023, 197, 122884.
34. He, Y.; Huang, N.; Wang, L.; Sun, Y. Real-time sales data, streamer improvisation, and sales performance: Evidence from live stream selling. MIS Q. 2025, 49, 1567–1594.
35. Mach-Król, M. Conceptual framework for implementing temporal big data analytics in companies. Appl. Sci. 2022, 12, 12265.
36. Liu, Y.; Qiao, H.; Wang, J.; Jiang, Y. Influencing mechanism of the intellectual capability of big data analytics on the operational performance of enterprises. Heliyon 2024, 10, e25032.
37. Varmus, M.; Kubina, M.; Mičiak, M.; Šarlák, M. Integrated Sports Information Systems: Enhancing Data Processing and Information Provision for Sports in Slovakia. Systems 2024, 12, 198.
38. Abdel-Karim, B.M.; Pfeuffer, N.; Hinz, O. Machine learning in information systems—A bibliographic review and open research issues. Electron. Mark. 2021, 31, 643–670.
39. Tang, F.; Kawamoto, Y.; Kato, N.; Liu, J. Internet of intelligence: A survey on the enabling technologies, applications, and challenges. IEEE Commun. Surv. Tutor. 2022, 24, 1394–1434.
40. Storey, V.C.; Song, I.Y. Big data technologies and management: What conceptual modeling can do. Data Knowl. Eng. 2017, 108, 50–67.
41. Olayinka, O.H. Big data integration and real-time analytics for enhancing operational efficiency and market responsiveness. Int. J. Sci. Res. Arch. 2021, 4, 280–296.
42. Sturm, T.; Gerlach, J.P.; Pumplun, L.; Mesbah, N.; Peters, F.; Tauchert, C.; Nan, N.; Buxmann, P. Coordinating human and machine learning for effective organizational learning. MIS Q. 2021, 45, 1581–1602.
43. Hossain, Q.; Yasmin, F.; Biswas, T.R.; Asha, N.B. Integration of big data analytics in management information systems for business intelligence. Saudi J. Bus. Manag. Stud. 2024, 9, 192–203.
44. Singh, S.; Darwish, T.K.; Potočnik, K. Measuring organizational performance: A case for subjective measures. Br. J. Manag. 2016, 27, 214–224.
45. Wall, T.D.; Michie, J.; Patterson, M.; Wood, S.J.; Sheehan, M.; Clegg, C.W.; West, M. On the validity of subjective measures of company performance. Pers. Psychol. 2004, 57, 95–118.
46. Chang, S.J.; Van Witteloostuijn, A.; Eden, L. Common method variance in international business research. In Research Methods in International Business; Palgrave Macmillan: Cham, Switzerland, 2019; pp. 385–398.
47. Dewi, G.A.K.R.S.; Sari, G.I.; Widiatmoko, R. Leveraging big data analytics to improve decision accuracy in business accounting. J. Hunan Univ. Nat. Sci. 2025, 52, 159–170.
Figure 1. MIS Capability Comparison Across Traditional, BI-Enhanced, and AI/ML-Enabled Configurations.
Figure 2. Model Performance Drift and Monitoring Triggers Over Time.
Table 1. Reference Architecture for Data-Driven, Real-Time MIS.

Layer | Components
Data Layer (Quality & Access) | ERP/CRM/HRIS; Logs/IoT streams; External data [38]
Intelligence Layer (AI/ML Services) | Predictive models; Prescriptive models; AutoML components [39]
Decision Layer (Decision Design) | Decision rules; Constraints/policy; Human checkpoints [40]
Execution Layer (Automation) | BPM/RPA/workflows; Alerts/tickets; Integrations/APIs [41]
Feedback Layer (Learning Loop) | Monitoring/drift; Human feedback; Retraining triggers [42]
Table 2. Construct Reliability and Descriptive Statistics.

Construct | Items | Cronbach's α | Mean | SD
BDA Capability Maturity | 4 | 0.88 | 3.74 | 0.81
Real-Time Processing | 4 | 0.85 | 3.52 | 0.89
Data Infrastructure Readiness | 4 | 0.86 | 3.61 | 0.84
Perceived Org. Performance | 5 | 0.90 | 3.82 | 0.77
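The Cronbach's α values in Table 2 summarize internal consistency of each multi-item construct. A minimal sketch of the computation, on synthetic Likert-style data (the scores below are simulated and do not reproduce the study's survey responses):

```python
"""Illustrative Cronbach's alpha computation for a k-item scale,
the reliability statistic reported in Table 2."""

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulate 150 respondents answering 4 items driven by one shared construct
rng = np.random.default_rng(0)
latent = rng.normal(3.5, 0.8, size=(150, 1))          # shared construct score
scores = latent + rng.normal(0, 0.5, size=(150, 4))   # 4 correlated items + noise
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Because the four items share a common latent driver, α lands well above the conventional 0.70 threshold, mirroring the pattern in Table 2.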
Table 3. Self-reported Efficiency Improvements from Big Data Analytics and Real-Time Processing by Industry (%).

Industry | Perceived Improvement in Efficiency | Key Metric
Retail | 30% | Reduction in stockouts
Healthcare | 15% | Reduced hospital readmissions
Finance | 25% | Faster loan approval and risk assessment
Table 4. Self-reported Business Impact Metrics from Big Data Analytics and Real-Time Processing by Industry (%).

Industry | Self-Reported Revenue Increase (%) | Self-Reported Operational Cost Reduction (%) | Self-Reported Customer Retention Increase (%)
Retail | 12% | 5% | 15%
Healthcare | 8% | 10% | N/A
Finance | 10% | 3% | N/A
Table 5. Themes Explaining How BDA and Real-Time Processing Improve MIS Outcomes.

Theme | Description (Cross-Case Synthesis) | Illustrative Excerpt (Anonymized)
Unifying streaming and batch pipelines | Organizations reduced decision latency when they consolidated streaming ingestion with historical context in a common storage/compute layer, enabling consistent definitions, backfills, and KPI reconciliation. | Retail MIS lead: "Real-time alerts helped only after we stopped maintaining two separate truth sets: streaming and monthly reports."
Data quality, lineage, and governed access | Stakeholders emphasized that trust in automated decisions depended on validated inputs, lineage tracking, and access controls that matched compliance needs. | Finance data engineer: "If we can't explain where an alert came from, it's not usable, especially for fraud and risk."
Automation coupling to workflows | Performance gains materialized when model outputs triggered operational actions (tickets, approvals, throttling, replenishment) using agreed playbooks and system integrations. | Healthcare operations manager: "Dashboards are useful, but value came when alerts opened a task with the next step already defined."
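The "automation coupling" theme in Table 5 amounts to mapping each alert type to a playbook action so that a model output opens an actionable task rather than only appearing on a dashboard. A minimal sketch, in which the playbook entries, ticket shape, and identifiers are all hypothetical:

```python
"""Sketch of alert-to-workflow coupling (Table 5): a decision output
is dispatched as a ticket carrying the agreed next step."""

PLAYBOOK = {
    "stockout_risk": "Create replenishment order and notify store manager",
    "fraud_alert": "Freeze transaction and route to risk analyst queue",
}

def dispatch(alert_type: str, entity_id: str) -> dict:
    """Turn a model alert into a ticket with the next step pre-attached."""
    next_step = PLAYBOOK.get(alert_type, "Escalate to MIS owner for triage")
    return {"alert": alert_type, "entity": entity_id, "next_step": next_step}

ticket = dispatch("stockout_risk", "SKU-1042")
print(ticket["next_step"])
```

The default branch matters in practice: unmapped alert types still produce a task with an owner, so no decision output silently dead-ends.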
Table 6. Inter-Construct Correlation Matrix (Cronbach's α on diagonal).

 | BDA | RTP | DIR | POP
BDA | (0.88) | 0.62 ** | 0.55 ** | 0.68 **
RTP | 0.62 ** | (0.85) | 0.52 ** | 0.64 **
DIR | 0.55 ** | 0.52 ** | (0.86) | 0.58 **
POP | 0.68 ** | 0.64 ** | 0.58 ** | (0.90)
Note. ** p < 0.01. POP = Perceived Organizational Performance. Cronbach's α values on diagonal in parentheses.
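The R² values compared in Table 9 come from OLS regressions of perceived performance on the capability constructs. A minimal sketch of that setup on synthetic data (predictors stand in for BDA, RTP, and DIR; the coefficients and R² below will not match the paper's estimates):

```python
"""Sketch of the OLS model behind the R^2 comparison: regress a
perceived-performance score on three capability predictors."""

import numpy as np

rng = np.random.default_rng(42)
n = 150                                                 # survey sample size
X = rng.normal(3.6, 0.85, size=(n, 3))                  # BDA, RTP, DIR scores
beta = np.array([0.35, 0.30, 0.20])
y = 0.5 + X @ beta + rng.normal(0, 0.35, size=n)        # POP with noise

X1 = np.column_stack([np.ones(n), X])                   # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)           # least-squares fit
resid = y - X1 @ coef
r2 = 1 - resid.var() / y.var()                          # residuals mean zero with intercept
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 3 - 1)           # penalize the 3 predictors
print(f"R^2 = {r2:.2f}, adjusted R^2 = {adj_r2:.2f}")
```

Reporting adjusted R² alongside R², as Table 9 does for this study (0.72 vs. 0.69), guards against variance explained merely by adding predictors.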
Table 9. Comparative Positioning Against Key BDA Capability–Performance Studies.

Study | Method | Predictors | R2/Effect | Differentiator
Wamba et al. [15] | Survey + PLS-SEM (n = 297) | BDA capability (aggregate), dynamic capabilities | R2 = 0.21–0.47 | Single BDA construct; no RTP or DIR separation
Mikalef et al. [14] | Mixed-method (survey n = 202 + cases) | BDA capability, complementary resources | R2 = 0.45 | No real-time processing; infrastructure subsumed under BDA
Grover et al. [4] | Conceptual framework | BDA value creation pathways | N/A (conceptual) | No empirical test; no quantitative validation
Suoniemi et al. [20] | Survey + SEM (n = 301) | BDA, market-directed capabilities, strategy | R2 = 0.38 | No RTP; no qualitative mechanism evidence
This study | Mixed-method (survey n = 150 + 12 interviews) | BDA, RTP, DIR as separate predictors | R2 = 0.72 (Adj. 0.69) | Three complementary predictors with incremental R2; qualitative mechanism identification
Önden, A. A Systemic Approach to Decision Support and Automation: The Role of Big Data Analytics and Real-Time Processing in Management Information Systems. Systems 2026, 14, 216. https://doi.org/10.3390/systems14020216