Article
Peer-Review Record

Macroeconomic and Labor Market Drivers of AI Adoption in Europe: A Machine Learning and Panel Data Approach

Economies 2025, 13(8), 226; https://doi.org/10.3390/economies13080226
by Carlo Drago 1, Alberto Costantiello 2, Marco Savorgnan 2 and Angelo Leogrande 2,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Reviewer 5: Anonymous
Submission received: 8 June 2025 / Revised: 21 July 2025 / Accepted: 24 July 2025 / Published: 5 August 2025
(This article belongs to the Special Issue Digital Transformation in Europe: Economic and Policy Implications)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Your research offers a valuable contribution to understanding the macroeconomic determinants of AI adoption across EU member states. The use of both panel econometric models and machine learning techniques is innovative and strengthens the empirical analysis.

I believe several revisions are necessary to improve the quality and clarity of the paper before it can be considered, as follows:

First: Major requirements:

  1. Include Unemployment or Labor Market Indicators:
    Given the strong link between AI adoption and labor market disruption, it is a major omission not to include the unemployment rate or related labor variables in your model. Including such data will enhance both the completeness and policy relevance of your analysis.

  2. Clarify the "Research Gap" within "Introduction" for the readers: Do not forget to include the "Unemployment Issue".
  3. Clarify Model Validation and Address Overfitting in KNN:
    The KNN model shows perfect predictive accuracy (R² = 1.0), which raises concerns about potential overfitting or data leakage. Please clarify your model validation procedure, particularly regarding test/train data splitting and whether cross validation was used.

Second: Minor Suggestions:

  • Ensure that all acronyms (e.g., ALOAI, DCPS, GFCF ...) are spelled out at first mention.

  • Consider slightly shortening and tightening the abstract to reduce redundancy and highlight the core findings more directly.

  • Provide smoother transitions between the regression analysis and machine learning sections for better narrative flow.

Comments on the Quality of English Language

  1. While the paper is rich in content, many sentences are lengthy and complex. A careful linguistic review is strongly recommended to improve readability and ensure that your findings are communicated effectively to an international audience.

  2. Consider improving the formatting and clarity of your tables, and if possible, use visualizations (e.g., charts or graphs) to complement your comparative model performance section.

Author Response

Point to Point Answers to Reviewer 1

Your research offers a valuable contribution to understanding the macroeconomic determinants of AI adoption across EU member states. The use of both panel econometric models and machine learning techniques is innovative and strengthens the empirical analysis.

I believe several revisions are necessary to improve the quality and clarity of the paper before it can be considered, as follows:

First: Major requirements:

Q1. Include Unemployment or Labor Market Indicators. Given the strong link between AI adoption and labor market disruption, it is a major omission not to include the unemployment rate or related labor variables in your model. Including such data will enhance both the completeness and policy relevance of your analysis.

A1. We have added the following paragraph

4.1 The Dynamics of Artificial Intelligence Uptake in Major EU Firms: A Panel Data Analysis

In recent years, the adoption of artificial intelligence (AI) technologies by firms has been the subject of wide-ranging debate about its implications for labor markets. On the one hand, AI appears to have the potential to boost productivity, streamline business processes, and create new economic opportunities. On the other, it generates mounting concerns about the replacement of human work, particularly repetitive and low-skill work, by automation and cognitive systems. This tension between technological inventiveness and labor market disruption raises the question of how the nature of the labor market affects the adoption of AI across national contexts. The model includes six explanatory variables that capture salient aspects of the labor market: the share of employers in total employment (EMPL), employment in services (SERV), the share of self-employed workers (SELF), the unemployment rate (UNEM), the share of workers in vulnerable employment (VEMP), and the share of wage and salaried workers (WAGE). The model aims to clarify how the structure and quality of the labor market shape the capacity and willingness of firms to adopt AI technologies. Understanding these interrelations is of central importance, not only for interpreting current trends in AI diffusion, but also as a guide for designing public policy that promotes the digital transformation of the economy alongside inclusive labor market evolution.

We estimated this relationship directly, applying three econometric approaches: panel data with fixed effects, panel data with random effects, and dynamic panel data models. By using a multi-method design, we ensure the robustness of the estimates and are better equipped to capture how labor market circumstances influence AI adoption over time and across countries.

Our empirical coverage extends to 28 European countries over the 2018–2023 period, providing a broad overview of AI diffusion in relation to labor market conditions in different national settings. We estimated the following equation:

$$\mathrm{ALOAI}_{it} = \alpha + \beta_1\,\mathrm{EMPL}_{it} + \beta_2\,\mathrm{SERV}_{it} + \beta_3\,\mathrm{SELF}_{it} + \beta_4\,\mathrm{UNEM}_{it} + \beta_5\,\mathrm{VEMP}_{it} + \beta_6\,\mathrm{WAGE}_{it} + u_i + \varepsilon_{it}$$

where i = 1, …, 28 indexes countries and t = 1, …, 6 indexes years.

The model presented aims to analyze the determinants of artificial intelligence (AI) adoption by large European enterprises. The dependent variable, ALOAI, represents the percentage of firms with at least 250 employees that use at least one AI technology, such as machine learning or image recognition. The regression equation includes six explanatory variables, all related to the structure and quality of the labor market: the share of employers in total employment (EMPL), employment in the service sector (SERV), the share of self-employed workers (SELF), the unemployment rate (UNEM), the percentage of workers in vulnerable employment (VEMP), and the proportion of wage and salaried workers (WAGE). The dataset consists of 28 cross-sectional units observed over six years, totaling 168 observations. Three different estimation techniques were used: a random-effects model (GLS), a fixed-effects model, and a dynamic panel model including the lagged dependent variable ALOAI.
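To make this estimation strategy concrete, here is a minimal sketch in Python, assuming the `linearmodels` package and a hypothetical long-format file `eu_ai_panel.csv` with one row per (country, year); the dynamic specification is approximated here by a lag-augmented fixed-effects regression rather than the one-step GMM estimator actually reported below.

```python
# Sketch (not the authors' code): fixed-effects, random-effects, and a
# lag-augmented specification for ALOAI on the six labor market variables.
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PanelOLS, RandomEffects

# Hypothetical file name; expects a (country, year) MultiIndex.
df = pd.read_csv("eu_ai_panel.csv").set_index(["country", "year"])

rhs = ["EMPL", "SERV", "SELF", "UNEM", "VEMP", "WAGE"]
X = sm.add_constant(df[rhs])

fe = PanelOLS(df["ALOAI"], X, entity_effects=True).fit()  # fixed effects (within)
re = RandomEffects(df["ALOAI"], X).fit()                  # random effects (GLS)

# Dynamic term: within-country lag of the dependent variable (loses one period).
df["ALOAI_lag"] = df.groupby(level="country")["ALOAI"].shift(1)
sub = df.dropna(subset=["ALOAI_lag"])
dyn = PanelOLS(sub["ALOAI"], sm.add_constant(sub[rhs + ["ALOAI_lag"]]),
               entity_effects=True).fit()

print(fe, re, dyn, sep="\n\n")
```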
The figures suggest strong consistency across the different specifications. EMPL is statistically significant and negative in both the fixed- and random-effects models. That a larger share of employer-established firms in the labor market need not encourage AI adoption comes as a surprise, but the finding may reflect the nature of these often family-centric, relatively small businesses, which are frequently reluctant to commit financial resources and prefer the classic business model (Hoffmann & Nurski, 2021). In both the fixed- and random-effects models the coefficient is approximately −20,000, a quantitatively large effect.
In contrast, service sector employment (SERV) exerts a considerable and significant positive influence on AI adoption. This result aligns with the ongoing digital transformation of the European economy, in which the service sector is the leading locus of technological innovation (Gualandri & Kuzior, 2024). Segments such as information technology, health, education, and financial services are increasingly integrated with AI-related applications. The strong positive relationship reflects both the greater need for, and capacity to implement, AI technologies in these segments. This positive influence holds across all models, with the coefficient varying between about 1.1 and 2.9 depending on the specification.
SELF, the percentage of self-employed individuals, also shows a significant positive correlation with ALOAI. The result suggests that the larger the self-employed population, the greater the AI adoption by large firms, which might reflect a dynamic, innovation-oriented entrepreneurial environment. An environment conducive to business startups, especially in the tech sector, might also benefit large firms through the diffusion of innovations generated by start-ups and freelancers (Spagnuolo et al., 2025). Greater self-employment might likewise reflect greater use of information and communication technologies, which in turn makes the adoption of AI technologies easier.
The unemployment rate (UNEM) is another significant variable, exhibiting a negative and statistically significant association with AI adoption in all models. This result supports the proposition that in countries undergoing labor market difficulties, firms have a weaker capacity to innovate and to adopt new technologies (Liu, 2024). Unemployment tends to go hand in hand with unfavorable macroeconomic conditions, fewer funds available to firms, and weaker competitive pressure on firms, all of which may discourage the adoption of AI solutions (Dave, 2024).
Similarly, VEMP, the share of workers in vulnerable employment, has a negative and significant coefficient in two of the three models. The economic and social relevance of this variable is clear: substantial labor market vulnerability often translates into informal work, precarious contracts, and inadequate social protection. In such a situation, companies are likely to be less formalized and reluctant to incur the substantial up-front costs of advanced technologies such as AI, and they are likely to lack the skilled workforce required for the effective adoption of such instruments (Du, 2024). Moreover, fragmented and precarious labor markets may reflect wider structural vulnerabilities of the economy that hinder its innovative capability.
In contrast, the coefficient of WAGE, the proportion of wage and salaried workers, is positive and statistically significant in all models. This means that the broader the coverage of formal, secure work, the higher the probability that firms will employ AI technologies. The finding stresses the importance of a well-structured labor market as a precondition for technological innovation. A high share of salaried work may also indicate that firms are larger and better structured, with access to the financial resources required for long-term investment in technology. Such firms generally have formal procedures and a workforce enjoying labor protections, both of which help in the adoption and implementation of AI systems.
In the dynamic model, the lagged dependent variable ALOAI(t−1) enters with a significant and very high coefficient (0.87), indicating strong temporal persistence in AI adoption. Firms that adopt AI in one year are very likely to continue and broaden it in subsequent years. This dynamic reflects a cumulative process in which initial adoption produces subsequent learning, adaptation, and consolidation over time. The presence of such a powerful dynamic effect highlights the importance of public policy instruments that induce firms to enter this cumulative process, since, once initiated, the process appears self-sustaining.
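To gauge the size of this persistence, recall that in a specification with a lagged dependent variable, the implied long-run effect of a regressor equals its short-run coefficient divided by $(1-\rho)$, where $\rho$ is the coefficient on ALOAI(t−1). This back-of-the-envelope reading is ours, not a calculation reported in the paper: with $\rho = 0.87$, the dynamic-model SERV coefficient of roughly 1.13 implies

$$\frac{1.13}{1-0.87} \approx 8.7,$$

a long-run effect nearly an order of magnitude above the impact effect.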
More broadly, the analysis reveals that the structure and quality of the labor market are central drivers of firms' adoption of AI technologies. Economies with formal, stable, and service-oriented labor forces are better equipped for AI adoption. By contrast, widespread unemployment, precarious employment, and a greater prevalence of informal work are associated with lower AI diffusion rates. These findings have significant implications for public policy design: supporting labor formalization, investing in the services sector, enabling innovative entrepreneurship, and combating unemployment are all approaches that simultaneously strengthen the labor market and encourage the digital transformation of the economy.
Furthermore, the strong dynamic effect obtained in the panel model shows that policy should not merely stimulate initial AI adoption but should support firms through the entire process of technological integration. Such policies could include training programs, technical services, and improved access to funds for investment in digitization. By shaping the right innovation context and reducing initial adoption barriers, public policy can have a decisive influence in hastening the diffusion of AI across the main areas of the European economy.

 

 

Random-effects (GLS) and fixed-effects models: 168 observations, 28 cross-sectional units, time-series length = 6. One-step dynamic panel: 112 observations, 28 cross-sectional units. Dependent variable in all three models: ALOAI.

| Variable | RE coefficient | RE std. error | z | Dynamic coefficient | Dynamic std. error | z | FE coefficient | FE std. error | t-ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| const | −2.90813e+06** | 1.25624e+06 | −2.315 | | | | −2.90686e+06** | 1.24639e+06 | −2.332 |
| EMPL | −21746.0** | 9643.96 | −2.255 | 1546.57** | 628.068 | 2.462 | −19483.3** | 9247.57 | −2.107 |
| SERV | 1.29315*** | 0.248676 | 5.200 | 1.12751** | 0.503887 | 2.238 | 2.91043*** | 0.473567 | 6.146 |
| SELF | 50825.2*** | 15637.4 | 3.250 | 3268.97*** | 1169.97 | 2.794 | 48549.4*** | 15337.3 | 3.165 |
| UNEM | −1.02830** | 0.429408 | −2.395 | −0.469765* | 0.275918 | −1.703 | −1.34345*** | 0.467762 | −2.872 |
| VEMP | −21744.6** | 9644.01 | −2.255 | 1543.15** | 627.003 | 2.461 | −19483.8** | 9247.72 | −2.107 |
| WAGE | 29080.8** | 12562.3 | 2.315 | 4812.97*** | 1780.80 | 2.703 | 29067.1** | 12463.9 | 2.332 |
| ALOAI(−1) | | | | 0.870151*** | 0.143614 | 6.059 | | | |

Statistics

| Statistic | Random-effects (GLS) | One-step dynamic panel | Fixed-effects |
| --- | --- | --- | --- |
| Mean dependent var | 27.43071 | | 27.43071 |
| S.D. dependent var | 16.35535 | | 16.35535 |
| Sum squared resid | 24963.19 | 2067.290 | 4330.302 |
| S.E. of regression | 12.41345 | 3.037923 | 5.684689 |
| Log-likelihood | −658.4819 | | −511.3337 |
| Akaike criterion | 1330.964 | | 1090.667 |
| Schwarz criterion | 1352.832 | | 1196.882 |
| Hannan-Quinn | 1339.839 | | 1133.774 |
| rho | 0.382722 | | 0.382722 |
| Durbin-Watson | 0.911341 | | 0.911341 |
| LSDV R-squared | | | 0.903065 |
| Within R-squared | | | 0.337644 |
| LSDV F(33, 134) | | | 37.82925 |
| P-value(F) | | | 5.99e-53 |

Tests

Random-effects (GLS): 'Between' variance = 119.865; 'Within' variance = 32.3157; theta used for quasi-demeaning = 0.792632. Joint test on named regressors: asymptotic Chi-square(6) = 69.5814, with p-value = 4.98247e-13. Breusch-Pagan test (null hypothesis: variance of the unit-specific error = 0): asymptotic Chi-square(1) = 208.937, with p-value = 2.3432e-47. Hausman test (null hypothesis: GLS estimates are consistent): asymptotic Chi-square(6) = 23.5091, with p-value = 0.000642736.

Fixed-effects: Joint test on named regressors: F(6, 134) = 11.3847, with p-value = P(F(6, 134) > 11.3847) = 2.92012e-10. Test for differing group intercepts (null hypothesis: the groups have a common intercept): F(27, 134) = 20.0034, with p-value = P(F(27, 134) > 20.0034) = 8.24638e-35.

One-step dynamic panel: Number of instruments = 16. Test for AR(1) errors: z = −2.60382 [0.0092]. Test for AR(2) errors: z = −0.0806224 [0.9357]. Sargan over-identification test: Chi-square(9) = 60.6043 [0.0000]. Wald (joint) test: Chi-square(7) = 228.346 [0.0000].

 

The statistical results and diagnostic tests for the three estimated models (random effects GLS, fixed effects within estimator, and one-step dynamic panel) provide a full picture of the robustness and validity of the relationship between the labor market variables and AI adoption by major European enterprises. The dependent variable in all models is ALOAI, the percentage of major enterprises adopting at least one AI technology (Gualandri & Kuzior, 2024). Starting with the random-effects model, we have 168 observations across 28 cross-sectional units over six years. The log-likelihood of the model is −658.48, while the Akaike and Schwarz information criteria are 1330.96 and 1352.83, respectively. The sum of squared residuals is high (around 24,963) and the standard error of the regression is sizeable at 12.41, indicating that while the model explains part of the variation in AI adoption, a substantial share remains unexplained (Popović et al., 2025). The between-group variance is 119.87 and the within-group variance 32.31, revealing considerable heterogeneity both across countries and over time. The estimated rho, which measures the proportion of variation attributable to individual-specific effects, is about 0.38, meaning that 38% of the total variation in ALOAI stems from differences across countries rather than across periods. The fixed-effects estimates perform better in terms of explanatory power: the within R-squared is 0.3376, while the LSDV (least squares dummy variable) R-squared is remarkably high at 0.9031, showing that the model explains more than 90% of the total variation in AI adoption once all country-specific effects are controlled for (Wagan & Sidra, 2024). The associated F-statistic (F(33, 134) = 37.83) is highly significant with a p-value near zero, confirming that the regressors and the fixed effects are jointly significant. The log-likelihood improves considerably in the fixed-effects model, rising to −511.33, while the Akaike and Schwarz criteria improve to 1090.67 and 1196.88, respectively.
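As a consistency check on the random-effects variance components just reported, the quasi-demeaning parameter follows from the standard formula (assuming the output follows gretl's conventions, where the 'between' variance estimates the unit-specific variance $\sigma_u^2$ and the 'within' variance estimates $\sigma_\varepsilon^2$, with $T = 6$):

$$\theta = 1 - \sqrt{\frac{\sigma_\varepsilon^2}{\sigma_\varepsilon^2 + T\,\sigma_u^2}} = 1 - \sqrt{\frac{32.3157}{32.3157 + 6 \times 119.865}} \approx 0.7926,$$

which matches the reported value of 0.792632.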

These improvements imply a better model fit than the random-effects specification. The standard error of the regression is also reduced to 5.68, implying improved precision. However, the Durbin-Watson statistic remains low at 0.911 for both models, indicating possible serial correlation in the residuals. For the dynamic panel model, which includes a lagged dependent variable, the sample reduces to 112 observations because the first period is lost in dynamic estimation. The coefficient on the lagged dependent regressor ALOAI(−1) is very high at 0.87 and statistically significant at the 1% level. This result confirms strong path dependency in AI adoption: a country's past AI adoption strongly predicts its continued adoption in the following years (Spagnuolo et al., 2025). The Sargan over-identification test yields a Chi-square statistic of 60.60 with a p-value of 0.0000, rejecting the null hypothesis of valid over-identifying restrictions; this may signal problems with the instruments used in the model. The Arellano-Bond tests for serial correlation are, however, reassuring: the AR(1) test is significant (z = −2.60, p = 0.0092), as expected with first differences, while the AR(2) test is insignificant (z = −0.08, p = 0.9357), supporting the validity of the dynamic model through the absence of second-order serial correlation. Several specification and robustness tests likewise inform the model choice. The Breusch-Pagan test strongly rejects the null hypothesis of zero variance for the random effects, with a Chi-square of 208.93 and a p-value effectively zero, establishing significant unobserved heterogeneity across countries (Atajanov & Yi, 2023). The Hausman test gives a decisive result in favor of the fixed-effects model: testing the consistency of the GLS random-effects estimator against the fixed-effects estimator yields a Chi-square statistic of 23.51 with a p-value of 0.0006, rejecting the null hypothesis that the random-effects estimator is consistent (Yum, 2022). The regressors therefore violate a key assumption of the random-effects model by being correlated with the individual-specific effects.

The joint tests on the regressors also confirm the relevance of the model specifications. For the fixed-effects estimation, the F-test on all regressors (F(6, 134) = 11.38) is significant at a very high level, with a p-value of about 2.92e-10. For the dynamic model, the Wald test similarly confirms the joint significance of the included regressors, with a Chi-square statistic of 228.35 and a p-value well below 0.01. These values leave no doubt about the explanatory power of the regressors used (Bustani et al., 2024). Additionally, the test for differing group intercepts in the fixed-effects model strongly rejects the null of an intercept common to all countries: an F-statistic of 20.00 with a p-value effectively at zero shows that country-specific factors are significant and cannot be ignored, further confirming the validity of the fixed-effects specification. On the whole, the set of statistical tests and diagnostics points to the fixed-effects model as the best specification for characterizing AI adoption among large European firms: it fits better, has higher explanatory power, and satisfies assumptions violated by the random-effects model. The dynamic panel model likewise yields significant insights, primarily about the persistence of AI adoption over time, though some caution must be exercised regarding the validity of the instruments used. Collectively, the estimates are statistically robust and carry significant policy implications about the role of labor market characteristics in shaping technology adoption.

The econometric investigation of 28 European countries over the 2018–2023 period sheds light on how structural features of the labor market strongly affect the adoption of artificial intelligence (AI) technologies by large enterprises. The outcomes, robust across fixed-effects, random-effects, and dynamic panel specifications, reflect a number of statistically significant associations. A higher share of employers relative to total employment is negatively associated with AI adoption, suggesting that economies dominated by small, frequently family-managed firms are less likely to adopt highly advanced technologies. Employment in the services sector, conversely, shows a robust positive association, reflecting the significant role of services as a driver of digital innovation. Analogously, a higher percentage of self-employed persons is positively associated with AI adoption, possibly indicating a dynamic entrepreneurial environment that benefits technological diffusion. Elevated unemployment and higher shares of workers in precarious employment, by contrast, are negatively associated with AI uptake, signaling the limitations that labor market volatility imposes on innovation. A higher percentage of wage and salaried workers is positively associated with AI adoption, reflecting that highly formalized and structured labor markets are better suited to technological investment. The dynamic panel model, finally, reveals robust persistence of AI use throughout the observation period, as testified by the significant and high coefficient of the lagged dependent variable: once initiated, AI adoption widens and perseveres, possibly as a consequence of the investments undertaken by the enterprise. These findings have policy implications. Enhancing labor market formalization, combating unemployment, facilitating innovation initiatives in the services sector, and boosting entrepreneurship can create a framework that favors AI adoption. Additionally, public policy should not merely catalyze initial adoption but should support enterprises through the complete process of technology absorption, through training and information programs and better access to funds. Overall, the discussion shows that the effective diffusion of AI depends not only on technological readiness but also on inclusive and stable labor market institutions.

Q2. Clarify the "Research Gap" within "Introduction" for the readers. Do not forget to include the "Unemployment Issue".

A2. We have added the following part in the introduction:

Research gap. Despite the rapid rise of artificial intelligence (AI) scholarship, existing work has focused disproportionately on micro-level drivers such as firm-specific capabilities, innovation intensity, and digital preparedness, leaving the macroeconomic environment as exogenous or peripheral. Underexamined, therefore, is the effect of broader structural economic factors, especially the state of the labor market, on AI adoption across countries. This gap has heightened salience in the European Union case, as member states are extremely heterogeneous in both their labor markets and their institutional preparedness. Unemployment, a profoundly significant yet overlooked dimension, not only signals weak macroeconomic health but also constrains firms' investment ability and risk appetite for new technology adoption. Delineating how higher unemployment interacts with the other macro variables in inhibiting or facilitating AI uptake is therefore central to policy formulation. Filling this lacuna, the present study examines the effect of national-level macroeconomic conditions, including unemployment, on the adoption of AI technologies by major firms across 28 EU countries during the 2018–2023 period, through a mixed-method approach combining panel econometrics and machine learning.

 Q3.1 Clarify Model Validation and Address Overfitting in KNN. The KNN model shows perfect predictive accuracy (R² = 1.0), which raises concerns about potential overfitting or data leakage.

A3.1. In reality, the data have been normalized, so we have modified the following text and also the caption of the table as follows:

This section performs a comparative analysis of eight regression models: Boosting, Decision Tree, K-Nearest Neighbors (KNN), Linear Regression, Neural Networks, Random Forest, Regularized Linear Regression, and Support Vector Machines (SVM). The models are evaluated on standardized performance metrics: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE/MAD), Mean Absolute Percentage Error (MAPE), and the coefficient of determination (R²). All input data were normalized prior to evaluation to ensure unbiased and consistent comparisons across models. The objective is to explore each model's predictive capacity and generalizability in predicting AI uptake in large EU firms. Alongside model benchmarking, the section also includes a study of KNN-based feature importance using mean dropout loss to rank macroeconomic factors according to their contribution to AI uptake. These analyses offer both methodological insight and policy-relevant evidence on the structural economic variables conditioning AI diffusion in different national contexts. The comparative results for the regression models are provided in Table 3 below.

Table 3. Performance Comparison of Regression Algorithms Based on Standard Evaluation Metrics. All input data were normalized to ensure comparability across models.

| Metric | Boosting | Decision Tree | KNN | Linear Regression | Neural Network | Random Forest | Regularized Linear | SVM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MSE | 0.187 | 0.31 | 0.000 | 0.23 | 1.000 | 0.293 | 0.293 | 0.214 |
| RMSE | 0.222 | 0.388 | 0.000 | 0.298 | 1.000 | 0.374 | 0.374 | 0.242 |
| MAE / MAD | 0.247 | 0.361 | 0.000 | 0.357 | 1.000 | 0.242 | 0.242 | 0.241 |
| MAPE | 0.100 | 0.107 | 0.000 | 0.477 | 0.658 | 0.750 | 0.750 | 1.000 |
| R² | 0.650 | 0.370 | 1.000 | 0.510 | 0.000 | 0.841 | 0.841 | 0.248 |
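A compact sketch of this kind of benchmarking is shown below, assuming scikit-learn; the synthetic data, default hyperparameters, and the min-max normalization are placeholders for illustration, not the authors' exact pipeline.

```python
# Illustrative benchmark in the spirit of Table 3: fit eight regressors on
# normalized inputs and report MSE, RMSE, MAE, MAPE, and R-squared for each.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             mean_absolute_percentage_error, r2_score)
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

models = {
    "Boosting": GradientBoostingRegressor(random_state=0),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "KNN": KNeighborsRegressor(),
    "Linear Regression": LinearRegression(),
    "Neural Network": MLPRegressor(max_iter=2000, random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Regularized Linear": Ridge(),
    "SVM": SVR(),
}

# Synthetic stand-in for the macroeconomic predictors and ALOAI.
X, y = make_regression(n_samples=151, n_features=7, noise=10.0, random_state=0)
X = MinMaxScaler().fit_transform(X)  # normalization, as noted in the caption
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name:>18}: MSE={mse:.3f} RMSE={np.sqrt(mse):.3f} "
          f"MAE={mean_absolute_error(y_te, pred):.3f} "
          f"MAPE={mean_absolute_percentage_error(y_te, pred):.3f} "
          f"R2={r2_score(y_te, pred):.3f}")
```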

Q3.2 Please clarify your model validation procedure, particularly regarding test/train data splitting and whether cross validation was used.

A3.2. We have made a comparison between the standard KNN model and the KNN model with cross-validation in Appendix A. This appendix is referred to within the text in the discussion of the performance characteristics of the KNN model at the algorithmic level.

Appendix A. Validation Strategies and Robustness of the KNN Model

To ensure the robustness, accuracy, and generality of our prediction model, we applied two validation procedures to the K-Nearest Neighbors (KNN) regression algorithm: a 20% holdout validation and a 5-fold cross-validation. Both methods were used to verify the model's capacity to predict the uptake of artificial intelligence (AI) among large European firms from a multi-dimensional set of macroeconomic variables. Features were normalized before validation to remove biases due to differences in units of measurement, a desirable step given the Euclidean distance function used in KNN (Table A1).

Table A1. Performance Comparison of K-Nearest Neighbors (KNN) Regression Using 20% Holdout and 5-Fold Cross-Validation.

| | 20% holdout validation | 5-fold cross-validation |
| --- | --- | --- |
| Weights | rectangular | rectangular |
| Distance | Euclidean | Euclidean |
| n (Train) | 96 | 97 |
| n (Validation) | 25 | 24 |
| n (Test) | 30 | 30 |
| Validation MSE | 132.224 | 62.494 |
| Test MSE | 27.207 | 46.926 |
| Validation set | Mean Squared Error | Mean Squared Error |
| MSE | 39.096 | 46.926 |
| MSE (scaled) | 0.129 | 0.141 |
| RMSE | 6.253 | 6.85 |
| MAE / MAD | 4.888 | 4.67 |
| MAPE | 21.68% | 67.23% |
| R² | 0.871 | 0.86 |

 

In the first technique, a standard 80/20 train-validation split was adopted: 96 observations were used for training, 25 for validation, and a separate set of 30 was set aside for testing. The KNN model, with rectangular weights and Euclidean distance, was trained without altering the parameterization during validation. This holdout technique yielded good performance indicators: an R² of 0.871, an RMSE of 6.253, and a MAPE of 21.68%. The results suggest a good correspondence between predicted and actual output on the validation set, as well as good model fit and generalizability to new test data (Test MSE = 27.207).

To confirm these findings and reduce the likely impact of partitioning bias, we also adopted a 5-fold cross-validation method. This process randomly divided the data into five equal-sized folds; in each repetition, four folds (n ≈ 97) were used for training and one fold (n ≈ 24) for validation, rotating the validation fold across the five runs. Predictions were then pooled across folds to produce aggregate performance statistics. The technique yielded an R² of 0.86, an RMSE of 6.85, and a MAPE of 67.23%.
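The sketch below re-creates the two validation schemes with scikit-learn, where the 'rectangular' weighting reported above corresponds to uniform weights; the synthetic data and the simple two-way split (without the separate 30-observation test set) are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of 20% holdout vs. 5-fold cross-validation for a KNN regressor.
# Features are scaled inside a pipeline, since KNN relies on Euclidean distance.
from sklearn.datasets import make_regression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_regression(n_samples=151, n_features=7, noise=10.0, random_state=0)
knn = make_pipeline(MinMaxScaler(), KNeighborsRegressor(weights="uniform"))

# 1) 20% holdout validation.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
holdout_r2 = knn.fit(X_tr, y_tr).score(X_val, y_val)

# 2) 5-fold cross-validation, rotating the validation fold across five runs.
cv_r2 = cross_val_score(knn, X, y, cv=5, scoring="r2")

print(f"holdout R2 = {holdout_r2:.3f}; 5-fold mean R2 = {cv_r2.mean():.3f}")
```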

Although the MAPE rose significantly in the cross-validation setting, this likely reflects the sensitivity of MAPE to outliers or extreme percentage errors: in some folds, actual values may approach zero. The cross-validated validation MSE (62.494) was significantly lower than in the holdout technique (132.224), pointing toward increased consistency and lower variance in the model's predictions across data splits. Moreover, the scaled MSE values (0.129 for holdout, 0.141 for cross-validation) increase only marginally, further indicating the model's robustness to variation in the train and validation partitions.
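A two-point toy example makes the MAPE point concrete: two predictions with identical absolute errors produce a huge MAPE when one actual value is near zero.

```python
# Why MAPE explodes near zero actuals: both predictions are off by exactly 1.0.
import numpy as np

actual = np.array([0.1, 100.0])
pred = np.array([1.1, 101.0])
ape = np.abs((actual - pred) / actual)  # per-observation percentage errors
print(ape)                              # [10.0, 0.01] -> near-zero actual dominates
print(ape.mean())                       # MAPE = 5.005, i.e. 500.5%
```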

Consistent results across both validation experiments indicate that the KNN model performs stably under different data-splitting configurations. Rectangular weighting and Euclidean distance, combined with normalized features, kept the distance measurements unbiased, so the model could reliably pick up local structure in the data. The non-parametric nature of the KNN algorithm also made it well suited to discovering non-linear relationships between macroeconomic indicators and AI uptake.

Compared with the other baselines considered in the paper, such as Boosting, Random Forest, Support Vector Machines (SVM), and Neural Networks, KNN consistently performed competitively or better on the key metrics. This finding, together with its good interpretability and few assumptions, further attests that KNN is a methodologically sound and practically reliable predictive modeling tool for macroeconomic research.

In sum, both the 20% holdout and 5-fold cross-validation techniques verified the soundness and validity of the KNN model. The algorithm showed good predictive ability, low generalization error, and high explanatory power under both validation schemes. The findings consistently confirm the soundness of the model and support its use in policy-informed applications on digital transformation and the penetration of AI in the macroeconomic context.

Second: Minor Suggestions:

Q4. Ensure that all acronyms (e.g., ALOAI, DCPS, GFCF ...) are spelled out at first mention.

A4. The acronyms are indicated in section 3 and summarised in Table 2 as follows:

Table 2. Variables, acronyms, and sources of data.

| Acronym | Variable | Definition | Source |
| --- | --- | --- | --- |
| ALOAI | AI adoption in major firms | This variable shows the percentage of large EU enterprises (250+ employees) using at least one AI technology. It excludes agriculture, mining, and finance sectors. Measured annually, it reflects AI adoption, such as machine learning or image recognition, across major industries, based on Eurostat. | EUROSTAT |
| HEAL | Current health expenditure (% of GDP) | This variable represents total public and private health spending as a share of gross domestic product, reflecting a country's financial commitment to healthcare services, infrastructure, and policy. | WORLD BANK |
| DCPS | Domestic credit to private sector (% of GDP) | This variable measures financial resources provided to the private sector by financial institutions, expressed as a percentage of GDP, indicating access to credit and financial system development. | WORLD BANK |
| EXGS | Exports of goods and services (% of GDP) | This variable captures the total value of goods and services exported by a country, relative to its GDP, reflecting trade openness, external demand, and global economic integration. | WORLD BANK |
| GDPC | GDP per capita (constant 2015 US$) | This variable represents a country's gross domestic product divided by its population, adjusted for inflation to 2015 US dollars, reflecting average economic output and living standards over time. | WORLD BANK |
| GFCF | Gross fixed capital formation (% of GDP) | This variable measures investment in fixed assets such as buildings, machinery, and infrastructure, expressed as a percentage of GDP, indicating long-term economic growth potential and capital accumulation. | WORLD BANK |
| INFD | Inflation, GDP deflator (%) | This variable reflects the annual percentage change in the GDP deflator, capturing overall inflation by measuring price changes in all domestically produced goods and services within an economy. | WORLD BANK |
| TRAD | Trade (% of GDP) | This variable represents the sum of exports and imports of goods and services as a percentage of GDP, indicating a country's trade openness, economic integration, and global market exposure. | WORLD BANK |
| EMPL | Employers, total (% of employment) | Represents the percentage of employed individuals who are employers, indicating entrepreneurial activity within the labor force. It reflects employment structure, business environment, and an economy's potential for job creation, innovation, and growth. A higher EMPL suggests a stronger private sector and greater enterprise formation. | WORLD BANK |
| SERV | Employment in services (% of total employment) | Measures the percentage of the workforce employed in the service sector, reflecting economic structure and development. High values in advanced economies indicate industrialization, reduced agricultural employment, and progress toward a knowledge-based economy. | WORLD BANK |
| SELF | Self-employed (% of employment) | Indicates the share of employed individuals who are self-employed, including freelancers and business owners. It reflects entrepreneurship, informality, or lack of formal jobs, especially in developing countries, and helps assess labor market structure and policy needs. | WORLD BANK |
| UNEM | Unemployment, total (%) | Measures the percentage of the labor force that is unemployed but actively seeking work. High values indicate economic distress and weak labor demand, while low rates reflect stronger activity. It is crucial for assessing economic performance and shaping labor market policies. | WORLD BANK |
| VEMP | Vulnerable employment (%) | Measures the share of workers in insecure jobs, often without formal contracts or social protection. Common in developing economies, it signals informality, labor market instability, and limited access to decent work, guiding inclusive employment and social protection policies. | WORLD BANK |
| WAGE | Wage and salaried workers (%) | Indicates the share of the workforce in salaried employment with formal contracts, regulated hours, and social protection. High values reflect a structured labor market typical of advanced economies, while lower values suggest informality and employment precarity. It is key for assessing labor quality and development. | WORLD BANK |

 

Q5. Consider slightly shortening and tightening the abstract to reduce redundancy and highlight the core findings more directly.

A5. We have rewritten the abstract as follows:

Abstract: This paper identifies the macroeconomic and labor market factors behind the uptake of artificial intelligence (AI) in the European Union's (EU's) larger firms. Using panel data econometrics and machine learning techniques, the paper estimates the impact of variables such as expenditure on health, domestic credit, exports, gross capital formation, inflation, openness to trade, and labor market configuration on the share of firms using at least one AI technology (ALOAI). The data cover all 28 member countries in the EU, spanning 2018–2023. Fixed effects, random effects, dynamic panel data, clustering, and supervised learning are jointly utilized to ensure robustness. Our findings indicate that AI diffusion is positively associated with GDP per capita, health expenditure, inflation, and trade liberalization, but negatively related to domestic credit, exports, and gross capital formation. Well-organized labor markets, with higher proportions of salaried employment, service jobs, and self-employment, are positively related to AI diffusion, while unemployment and vulnerable employment are negatively related. Cluster analysis distinguishes groups of countries that exhibit comparable diffusion behavior, typically with sounder institutional and economic foundations. The findings indicate that AI diffusion depends not only on investment capacity and technological preparedness, but also on supportive macroeconomic frameworks and inclusive labor institutions. Selective policy intervention can foster inclusive AI uptake across the EU's industrial base.

Keywords: Artificial Intelligence Adoption, Macroeconomic Determinants, Labor Market Structure, Panel Data Analysis, Machine Learning Models.

JEL CODES: O33, E24, C23, J24, O52.

We have also changed the title as follows: “Macroeconomic and Labor Market Drivers of AI Adoption in the European Union: A Machine Learning and Panel Data Approach”.

Q6. Provide smoother transitions between the regression analysis and machine learning sections for better narrative flow.

A6. The following transitional paragraph has been introduced within section 5:

To complement the inferences derived from the panel data regression estimates, the subsequent analysis shifts to a machine learning orientation. While econometric techniques offer causal interpretation subject to exogeneity assumptions and information about the average effects of macroeconomic and labor market variables on AI uptake, they are sometimes limited in modeling complex, nonlinear associations and interactions between predictors. Machine learning algorithms, in contrast, aim at maximal predictive accuracy and can discover latent structures in the data without imposing strong parametric constraints. The use of supervised learning algorithms such as K-Nearest Neighbors (KNN), Random Forest, and Support Vector Machines (SVM) provides a robustness check on the econometric findings, as well as alternative insight into which variables matter and how well the results generalize. The shift to machine learning methods thus completes the empirical strategy, providing methodological triangulation and further policy-relevant findings.

Comments on the Quality of English Language

Q7. While the paper is rich in content, many sentences are lengthy and complex. A careful linguistic review is strongly recommended to improve readability and ensure that your findings are communicated effectively to an international audience.

A7. We are also improving the quality of the article from a linguistic point of view.

Q8. Consider improving the formatting and clarity of your tables, and if possible, use visualizations (e.g., charts or graphs) to complement your comparative model performance section.

A8. We have substituted Tables 3, 4, and 6 with the following images.

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

Review of the article Driving AI Adoption in the EU: A Quantitative Analysis of Macroeconomic Influences

Given the rapid development and transformational potential of artificial intelligence technologies, their effective implementation is becoming a key factor in competitiveness and economic growth. Understanding the macroeconomic factors that influence this process is particularly relevant for the formation of sound policy. The article “Driving AI Adoption in the EU: A Quantitative Analysis of Macroeconomic Influences” is extremely relevant and timely. The innovative approach used by the authors, combining econometric analysis of panel data and machine learning methods, has revealed the complex interaction of economic, institutional, and structural factors that determine the level of AI adoption by large enterprises in the European Union. The results provide a solid foundation for the development of strategic public policies aimed at accelerating digital transformation and effectively utilizing the potential of AI. The article makes a significant contribution to the scientific discussion on the determinants of technological adaptation and sustainable digital development.

While noting the thoroughness of the study, some parts of the manuscript need to be revised.

  1. Notes on the section “Introduction”

1.1. In scientific research, the introduction section should clearly outline the relevance and issues of the study. However, the authors have overloaded it with information. In particular, lines 34-43 are a review of the literature and should be moved to section 2, and the methodology and research methods presented in lines 57-104 should be reflected in section 3, which is devoted to the methodological tools of the research.

1.2. The authors should clearly formulate the purpose of the study. The sentence “This paper was needed to fill that research gap ...” (lines 54-56), which reflects the purpose of the study, is stylistically incorrect because it is formulated in a colloquial rather than scientific style.

1.3. There are no clearly formulated research objectives, which makes it difficult to determine the completeness of their implementation and the degree of achievement of the purpose.

1.4. In line 76, there is a repetition of words – “although although”, which requires careful checking of the text for stylistic and spelling errors.

  2. Notes to the section “Literature review”

2.1. The literature review should contain a critical analysis and emphasize the significance of the authors' research. In lines 139-211, the authors provide a large list of articles devoted to various aspects of AI, without critical analysis and highlighting unresolved problems that the research aims to address. It is worth conducting a critical analysis and emphasizing the “gaps” that your research addresses. That is, expand the information in Table 1, where this attempt is made, and remove unnecessary information.

2.2. Attention should be paid to the relevance of the publications referenced by the authors in this section. In particular, the articles from 2015 and 2017 are outdated given the speed of development of information technology.

2.3. In general, the language is appropriate for an academic style. However, phrases such as “brings about wage polarization” (line 146), “raise specter of threats” (line 156), and “provide comfort” (line 151) should be avoided. Instead, it is advisable to use more formal and accurate academic synonyms. For example, “contributes to wage polarization,” “raises concerns about,” “suggests/indicates.”

2.4. It is advisable to carefully check for grammatical errors, typos, and punctuation inaccuracies that may give the impression of carelessness (for example, “macro economic theory” (line 139) instead of “macroeconomic theory,” as well as “post-busk economies” (line 161) — perhaps this should be “post-bust economies” or other terminology?).

  3. Notes to the section “A Methodologically Integrated Approach to Analyzing AI Adoption: Panel Econometrics Meets Machine Learning”

3.1. As mentioned above, the information from the introduction should be moved to this section. In addition, to improve the readers' perception of the information, it is advisable to structure the section with subheadings devoted to panel models, machine learning models, and cluster analysis.

3.2. The authors explain in detail the feasibility of using panel models and machine learning models. However, it would be useful to justify in more detail the specific choice of each type of machine learning model (KNN, Random Forest, Boosting, SVM). Why these and not others (e.g., neural networks)? What are their advantages and potential disadvantages for the study?

3.3. Regarding cluster analysis (Hierarchical, Density-Based, Neighborhood-Based), it is advisable to specify which algorithms will be used (e.g., K-Means, DBSCAN, Agglomerative Clustering). How will the optimal number of clusters be determined?

3.4. The authors should justify why large enterprises (250+ employees) were chosen for the study. In addition, the adequacy of the sample size (151 observations) needs to be substantiated to ensure the statistical reliability, representativeness, and validation of the research results.

  4. Notes to the section “Understanding AI Diffusion in EU Enterprises: Evidence from Fixed and Random Effects Models”

4.1. The authors analyze data from countries such as Austria, Belgium, Bosnia and Herzegovina, Bulgaria, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, and Turkey. Their selection should be further justified. Bosnia and Herzegovina, Norway, Serbia, and Turkey are not members of the EU. At the same time, Croatia, Cyprus, and the Czech Republic, which are members of the EU, are missing from the list. This is a critical observation that requires changing the title of the section and editing the introduction and title of the article, or changing the list of countries included in the study.

4.2. The formula presented in lines 290-291 should be numbered.

4.3. The conclusions of the Hausman test should be interpreted more clearly. Although the p-value (0.328) indicates that the null hypothesis (GLS estimates are consistent) is not rejected, this does not mean that random effects are better. It only indicates the absence of a systematic difference between the estimates, not that the FE are incorrect. Given the conclusions about the significance of the F-test for group intercepts and the Breusch-Pagan test, which indicate the presence of heterogeneity, preference should still be given to the fixed effects model, as it allows for the control of unobserved time-constant characteristics of countries, which is key to avoiding biased estimates. This nuance should be clearly explained in the text, rather than just presented as a statistical result.

  5. Notes to the section “Decoding AI Adoption in the EU: A Comparative Evaluation of Predictive Models and Macroeconomic Drivers”

5.1. It is advisable to detail the specifics of each of the three selected clustering methods. In particular, it should be noted what type of agglomeration (linkage) was used for hierarchical clustering (e.g., Ward, complete, average), what parameters were used for DBSCAN (eps, min_samples), what parameters were used for “Neighborhood-Based” (if it is K-Means, then the number of iterations, initialization).

5.2. The authors should more clearly define the relationship between the obtained clusters and the actual indicators of AI implementation.

5.3. The conclusion made by the authors in lines 664-666 is general in wording and not substantiated. This statement should be argued or reworded.

  6. Notes on the “Conclusions” section

General conclusions need to be specified. The authors should focus on the results obtained and emphasize the scientific and practical value of the study.

  7. Authors should also add a “Discussion” section, where they should provide information about existing limitations and outline prospects and directions for future scientific research.

CONCLUSION: In summary, the article is a valuable contribution to understanding the relationships between key macroeconomic indicators and the level of AI implementation. However, to improve the scientific quality of the work, it would be advisable to revise a number of points raised in the review. The most critical comment is to review the list of countries selected for the study. Several of them are not EU member states, which requires changing the title of the article or adjusting the countries to be studied.

 

Comments on the Quality of English Language

The study uses colloquial language. However, the article should be written exclusively in a scientific style.

Author Response

POINT TO POINT ANSWERS TO REVIEWER 2

Given the rapid development and transformational potential of artificial intelligence technologies, their effective implementation is becoming a key factor in competitiveness and economic growth. Understanding the macroeconomic factors that influence this process is particularly relevant for the formation of sound policy. The article “Driving AI Adoption in the EU: A Quantitative Analysis of Macroeconomic Influences” is extremely relevant and timely. The innovative approach used by the authors, combining econometric analysis of panel data and machine learning methods, has revealed the complex interaction of economic, institutional, and structural factors that determine the level of AI adoption by large enterprises in the European Union. The results provide a solid foundation for the development of strategic public policies aimed at accelerating digital transformation and effectively utilizing the potential of AI. The article makes a significant contribution to the scientific discussion on the determinants of technological adaptation and sustainable digital development.

While noting the thoroughness of the study, some parts of the manuscript need to be revised.

  1. Notes on the section “Introduction”

Q1.1. In scientific research, the introduction section should clearly outline the relevance and issues of the study. However, the authors have overloaded it with information. In particular, lines 34-43 are a review of the literature and should be moved to section 2, and the methodology and research methods presented in lines 57-104 should be reflected in section 3, which is devoted to the methodological tools of the research.

A1.1. We have added the quotation from lines 34-43 within the literature review as follows:

Artificial intelligence (AI) is transforming macroeconomic theory both positively, through productivity growth, and negatively, as a policy challenge. The literature shows a growing consensus on AI's restructuring impact on growth, labor markets, inequality, and inflation. Abrardi, Cambini, and Rondi (2019) classify AI as a general-purpose technology with cross-sector spillovers and point to institutional and capital constraints. Acemoglu (2025) models the macro equilibrium between automation-driven substitution and productivity growth, while Autor (2022) condenses the evidence that AI's employment effects are largely skill-biased and induce wage polarization. Agrawal, Gans, and Goldfarb (2019) describe AI as a prediction technology and connect it to industry-level productivity gains. Albanesi et al. (2023) argue that employment effects in Europe remain skill-biased, as technology uptake has failed to replace lost traditional employment. Aldasoro et al. (2024) provide evidence that AI expands output while moderately reducing inflation, an insightful commentary on monetary policy. For developing economies, Aromolaran et al. (2024) remark that AI investment reduces poverty when widely diffused. On the micro side, Babina et al. (2024) identify innovation paths triggered by AI but sound a note of concern about the threat of market concentration around AI-enabled firms, to the disadvantage of firms poor in adaptability. Finally, Bickley et al. (2022) describe how AI is transforming research in economics itself. As identified by Hoffmann and Nurski (2021), much of the literature is still firm-level in orientation, excluding the macroeconomic and policy structures that shape AI diffusion, most notably in the EU. Similarly, Gualandri and Kuzior (2024) establish that enterprise-level uptake is conceptualized in isolation from deeper structural causes. Across this literature, AI's macroeconomic impacts are neither neutral nor intrinsic; in their polarity, they are institution- and policy-sensitive. As Acemoglu indicates, macroeconomists must conceptualize AI neither as an exogenous shock nor as a static technological device, but as a dynamic, policy-dependent force in post-bust economies.

We have also moved lines 57-104 to Section 3. The new Section 3 is as follows:

Methodologically, our paper adopts an integrated empirical approach consisting of fixed- and random-effects panel data models together with supervised machine learning and clustering algorithms to assess AI adoption through a macroeconomic lens. Given the panel structure of the data set, with its multi-annual and multi-country coverage, this approach allows effective treatment of both cross-sectional and dynamic heterogeneity. Fixed-effects models address non-observable, time-invariant country characteristics, such as institutional settings, regulatory systems, and attitudes toward innovation. Random-effects models, in contrast, allow greater efficiency under the assumption of exogeneity between the regressors and the country effects. The appropriate specification is ascertained through the Hausman test comparing the two approaches, thus promoting internal validity (Popović et al., 2025).
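As an illustration of the Hausman comparison just described, a minimal sketch follows, with hypothetical `fe` and `re` result objects from the `linearmodels` package; the statistic contrasts the two estimators' common slope coefficients.

```python
# Sketch of the Hausman statistic H = (b_FE - b_RE)' [V_FE - V_RE]^(-1) (b_FE - b_RE).
import numpy as np
from scipy import stats

def hausman(fe, re):
    """Hausman test on the slope coefficients shared by FE and RE fits."""
    common = [c for c in fe.params.index if c in re.params.index and c != "const"]
    b_diff = (fe.params[common] - re.params[common]).values
    v_diff = (fe.cov.loc[common, common] - re.cov.loc[common, common]).values
    stat = float(b_diff @ np.linalg.inv(v_diff) @ b_diff)
    return stat, stats.chi2.sf(stat, df=len(common))

# stat, p = hausman(fe, re)  # small p-value -> prefer the fixed-effects model
```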

To complement these econometric tools, the inclusion of machine learning algorithms, i.e., K-Nearest Neighbors (KNN), Support Vector Machines (SVM), Random Forest, and Boosting, introduces a non-parametric element capable of capturing complex, non-linear associations often overlooked by traditional linear modeling (Tapeh & Naser, 2023). Their use is particularly suited to prediction tasks in which precision and responsiveness in high-dimensional settings are central. Performance metrics such as MSE, RMSE, MAE, MAPE, and R² are used to identify the best-performing algorithm in data-rich scenarios, which is particularly pertinent in applications such as cybersecurity and economic policy modeling (Ozkan-Okay et al., 2024).
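As an illustration of this comparison protocol, the sketch below trains a few of the named learners and reports the five metrics on a hold-out split. `X` and `y` stand for the normalized macro features and the ALOAI target; this does not reproduce the authors' exact pipeline.

```python
# Minimal sketch of the metric-based model comparison; X, y are assumed arrays.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             mean_absolute_percentage_error, r2_score)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "Random Forest": RandomForestRegressor(n_estimators=500, random_state=0),
    "Boosting": GradientBoostingRegressor(random_state=0),
    "SVM": SVR(),
}
for name, est in models.items():
    pred = make_pipeline(StandardScaler(), est).fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name:13s} MSE={mse:.3f} RMSE={np.sqrt(mse):.3f} "
          f"MAE={mean_absolute_error(y_te, pred):.3f} "
          f"MAPE={mean_absolute_percentage_error(y_te, pred):.3f} "
          f"R2={r2_score(y_te, pred):.3f}")
```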

To further deepen the analysis, unsupervised cluster analyses, in Hierarchical, Density-Based, and Neighborhood-Based forms, are used to identify latent groupings of European countries that share similar macroeconomic properties and AI uptake trajectories. This dimension is important for detecting latent diversity and for comparative policy studies (Shokouhifar et al., 2024). The rationale for combining the methods is strategic: panel regression provides causal interpretability, machine learning provides predictive accuracy, and clustering identifies structural typologies. Such convergence has, for example, been demonstrated in financial services studies that identify thematic groups through topic modeling which would otherwise remain imperceptible to mainstream methods (Olasiuk et al., 2023).

Applied together, these methods serve analytically complementary ends (explanation, prediction, and classification) and allow a multi-dimensional treatment of AI absorption in economic and institutional environments. This is consistent with mainstream AI policy scholarship advocating holistic, multi-method approaches to evidence-informed public policy decisions (Popescu et al., 2024).

At the substantive level, machine-learning-based global innovation indices provide empirical corroboration of the model's forecasting ability (Ma et al., 2023), while AI-based policy simulations have delineated macroeconomic threats such as energy dependence and fossil fuel trajectories (Tudor et al., 2025). Cluster analyses, furthermore, identify diversified macro profiles across European countries, such as vulnerability to inflation shocks (Czeczeli et al., 2024), fiscal policy shifts (Andrejovská & Andrejkovičova, 2024), and gaps in AI preparedness and human capital flows (Iuga & Socol, 2024). These findings corroborate the paper's underlying argument that AI uptake cannot be seen as a firm-level choice or investment decision alone, but as an outcome of national policy convergence, macroeconomic composition, and institutional capacity.

A1.2. The authors should clearly formulate the purpose of the study. The sentence “This paper was needed to fill that research gap ...” (lines 54-56), which reflects the purpose of the study, is stylistically incorrect because it is formulated in a colloquial rather than scientific style.

Q1.2 We rewrote the part relating to the research gap as follows:

Despite the rapid rise of artificial intelligence (AI) scholarship, existing work has focused disproportionately on micro-level drivers such as firm-specific capabilities, innovation intensity, and digital preparedness, while treating the macroeconomic environment as exogenous or peripheral. Underexamined, therefore, is the effect of broader structural economic factors, especially the state of the labor market, on AI adoption across countries. This gap is especially salient in the European context, where countries are highly heterogeneous in both labor market and institutional preparedness. Unemployment is a profoundly significant yet overlooked dimension: it not merely signals weak macroeconomic health but also constrains firms' investment capacity and risk appetite for new technology adoption. Delineating how higher unemployment interacts with other macro variables to inhibit or facilitate AI uptake is central to policy formulation. Filling this gap, the present study examines the effect of national-level macroeconomic conditions, including unemployment, on the adoption of AI technologies by large firms in 28 European countries during the 2018–2023 period, through a mixed-method approach combining panel econometrics and machine learning.

 A1.3. There are no clearly formulated research objectives, which makes it difficult to determine the completeness of their implementation and the degree of achievement of the purpose.

Q1.3 We have rewritten the research objectives within the introduction as follows:

To address growing concern about the structural determinants of the surge in artificial intelligence (AI), the current study puts forth three clearly defined research objectives, outlined in the resubmitted paper. First, the paper aims to empirically estimate the relationship between significant macroeconomic and labor market variables (such as GDP per capita, domestic credit, health spending, inflation, trade openness, and employment composition) and AI technology adoption among larger firms (250+ employees) in European countries. This objective recognizes that most scholarship spotlights firm-level variables, although institutional and macroeconomic variables are no less crucial in determining technological diffusion.

Second, the paper evaluates the predictive accuracy, stability, and generalizability of various econometric models (panel fixed and random effects) and ML algorithms (such as K-Nearest Neighbors, Random Forest, and SVM). This two-method approach permits a complete comparison of models on both explanatory and predictive grounds and can detect nonlinear structures and high-dimensional interactions usually ignored by mainstream methods.

Third, the paper seeks to identify distinct groups of European countries based on macroeconomic characteristics and AI uptake levels through unsupervised machine learning techniques. The resulting typology reflects that the same economic indicators can lead to different uptake outcomes depending on institutional readiness and policy implementation.

These objectives together form a coherent analytic framework, which not only defines the study but also makes its inquiry complete and clear. The alignment between objectives and methodological framework provides clarity, rigor, and utility for both policy and academic audiences.

 A1.4. In line 76, there is a repetition of words – “although although”, which requires careful checking of the text for stylistic and spelling errors.

Q1.4 The repetition has been removed.

2. Notes to the section “Literature review”

A2.1. The literature review should contain a critical analysis and emphasize the significance of the authors' research. In lines 139-211, the authors provide a large list of articles devoted to various aspects of AI, without critical analysis and highlighting unresolved problems that the research aims to address. It is worth conducting a critical analysis and emphasizing the “gaps” that your research addresses. That is, expand the information in Table 1, where this attempt is made, and remove unnecessary information.


Q2.1 We revised the literature review as follows:

Artificial Intelligence (AI) is regarded not only as a technological breakthrough but also as a macroeconomic shock with important implications. It is reshaping productivity, altering labor markets, widening income inequality, and presenting new challenges for inflation targeting and macroeconomic governance. However, even as the scholarly literature labels AI as transformative, a systematic linking of its diffusion to broader macroeconomic frameworks is still lacking, in the European Union (EU) in particular. That is exactly where the present study intervenes.

Abrardi, Cambini, and Rondi (2019) pave the way by framing AI as a general-purpose technology with spillovers across the broader economy and with institutional and capital barriers able to delay diffusion. Acemoglu (2025) complements this viewpoint by formalising the macroeconomic trade-off between automation-driven substitution and productivity improvement, such that AI's long-run economic effects are conditional on institutions. Autor (2022) supports the latter, presenting AI employment dynamics as biased in favour of high-skill employees, causing skill-biased technological change and amplifying wage polarization.

But much of this research remains macro-theoretical and not empirically tested across diverse institutional contexts. Agrawal, Gans, and Goldfarb (2019) describe AI as a "prediction engine" central to industrial productivity, but their work does not empirically connect to national-level enablers such as access to credit or public investment in innovation. Albanesi et al. (2023) caution that, in Europe, even where high tech aspirations are in place, traditional forms of employment shed through automation are not well compensated by new ICT jobs, the suggestion being that aggregate gains from AI are unevenly shared and structurally conditioned.

Aldasoro et al. (2024) offer monetary policy insights, such as AI stimulating output while constraining inflation, but cross-country differences in this effect remain underexplored. Moreover, in developmental contexts, Aromolaran et al. (2024) show that AI investment reduces poverty when diffusion is inclusive and broad-based. That conditionality illuminates the role of diffusion mechanisms, which are typically excluded in texts focusing on AI's payoff. At the firm level, Babina et al. (2024) document that AI can spur innovation, but market concentration threatens small firms lacking the absorptive capacity to transform. Bickley et al. (2022) note that AI is revolutionizing the practice of economics itself, although the lack of a harmonized methodology inhibits the generalizability of such claims.

These dissimilar works, though thematically substantial, point to a crucial omission: few reconcile macroeconomic configuration with empirical modeling of AI diffusion. As noted by Hoffmann and Nurski (2021) and Gualandri and Kuzior (2024), the firm-level focus of the prevailing scholarship overlooks the role that macro-level policy and national institutions play in shaping diffusion patterns, not least in heterogeneous European economies. The omission is considerable because European countries differ significantly in digital preparedness, infrastructure, credit markets, and labor market flexibility.

Bonab, Rudko, and Bellini (2021) make a normative case for "anticipatory regulation" to avoid AI-promoted growth in inequality. Bonfiglioli et al. (2023) link AI adoption in U.S. commuting zones to polarization in cognitive labor, implying regional disparities that may also be evident in Europe. Brynjolfsson and Unger (2023) present AI as structurally transformative, but Brynjolfsson, Rock, and Syverson (2018) note a "productivity paradox" in which potential benefits are not realized because of under-measurement and delayed diffusion. Brynjolfsson, Li, and Raymond (2023) find that generative AI can raise productivity for low-skilled labor, but diffusion remains patchy. The suggestion is that policy and institutional settings are important mediators of AI's macroeconomic effects. On the behavioral front, Camerer (2018) raises the possibility that algorithmic choice could transform macroeconomic behavior itself; such results are, however, largely conceptual and lack empirical support. Chen et al. (2024) project varied global effects of AI, highlighting differentials in infrastructure and institutional absorptive capacity. Cockburn et al. (2018, 2019) indicate that AI's returns in R&D could become even more concentrated, perpetuating innovation inequality. Such warnings, however, lack country-level adoption profiles and therefore provide little guidance for policy.

Comunale and Manera (2024) caution that policy responses to technological change arrive late, which can further enhance labor market frictions, while Czarnitzki et al. (2023) show that productivity gains from AI vary greatly across knowledge firms. Eloundou et al. (2023), Ernst et al. (2019), and Felten et al. (2018) emphasize task automation and labor reskilling but are silent about the macroeconomic preconditions for a smooth transition. Gazzani and Natoli (2024) advocate inclusive growth through augmentative AI, but the structural enablers of such inclusiveness are not fully elaborated.

Thus, the current body of research is thematically well-grounded but structurally incomplete. Although theory recognizes AI's macro-level implications, no systematic, cross-country empirical research links AI adoption rates to macroeconomic fundamentals such as trade openness, access to domestic credit, or health spending, the very essence of economic resilience and innovation preparedness. The present study seeks to bridge this fundamental empirical gap.

By means of a dual-method approach, combining econometric modeling (fixed and random effects) and machine learning algorithms (e.g., K-Nearest Neighbors, clustering), the paper explores which national-level macroeconomic characteristics and institutional configurations explain AI adoption in large European firms. In doing so, it responds to demands for methodological innovation and policy-driven empirical scholarship, such as those raised by Ruiz-Real et al. (2021), Szczepanski (2019), Trabelsi (2024), Varian (2018), Wagner (2020), Wang et al. (2021, 2025), Webb (2019), Wolff et al. (2020), and Zekos (2021).

Such demands include, for example, AI-driven non-linear macro-behavior that requires high institutional capacity. Webb (2019) shows that AI can exacerbate the offshoring of cognitive work, endangering middle-skill occupations. Wolff et al. (2020) relate that AI's productivity advantage in healthcare is contingent on public trust and government capacity, the latter a macro-institutional variable. Zekos (2021) points out that weighing AI's net social benefits against future public damages requires global cooperation. Yet such observations remain isolated and are seldom embedded in consistent, comparative analyses.

Our study makes a typological contribution through cluster analysis of European countries, showing that structurally similar economies can exhibit different AI adoption paths due to institutional mismatch. The paper further provides predictive modeling for estimating diffusion under alternative policy futures, a factor absent from previous scholarship. Finally, our study is not merely descriptive but diagnostic and prescriptive, which aids not only theory building but also policy guidance. Ultimately, the worth of this research lies in connecting theory and empirics, micro and macro, and technology and policy. The paper bridges a significant gap in AI scholarship by offering an institutionally grounded, cross-country study of European adoption dynamics, advancing knowledge of the economic, structural, and policy drivers behind digital change.

We have also revised the table that synthesizes the literature accordingly:

| Macro Theme | Key Findings from Existing Literature | Representative Authors | Limitations in Existing Research | Contribution and Originality of This Study |
| --- | --- | --- | --- | --- |
| Growth & Productivity | AI is a general-purpose technology with potential to enhance productivity but requires institutional support and complementary investments. | Agrawal et al. (2019); Brynjolfsson & Unger (2023) | Often focus on theoretical modeling or sector-specific illustrations; lack empirical cross-country analysis on macro conditions enabling growth. | Applies KNN clustering to macroeconomic indicators in EU countries, identifying structural enablers of AI-linked productivity. Provides empirical assessment of macro factors enabling or limiting growth. |
| Labor Markets & Inequality | AI adoption contributes to wage polarization, skill-biased employment, and labor displacement. Reskilling is essential to mitigate inequality. | Acemoglu (2025); Autor (2022); Eloundou et al. (2023) | Primarily micro-focused or U.S.-centric; limited cross-national comparisons; inadequate integration of labor market structure in AI adoption frameworks. | Provides cluster-based evidence of labor market conditions shaping AI adoption in EU. Reveals structural labor gaps (e.g., vulnerable employment) influencing unequal diffusion across regions. |
| Inflation & Monetary Policy | AI adoption has marginal effects on inflation; central banks must adapt policies to new productivity regimes. | Aldasoro et al. (2024); Gazzani & Natoli (2024) | Underexplored in empirical studies; no systematic inclusion of inflation as a factor in AI diffusion models. | Incorporates inflation and price stability as predictors in AI adoption modeling, showing unexpected positive correlations. Brings macroeconomic variables into forecasting AI uptake. |
| Institutional & Policy Context | Effective AI adoption requires coherent regulation, anticipatory governance, and strategic public coordination. | Pehlivan (2024); Bonab et al. (2021); Wagner (2020); Hoffmann & Nurski (2021) | Sparse integration of macro-policy and AI adoption; neglect of EU digital heterogeneity and institutional readiness. | Links EU Digital Decade targets and AI policy tools with predictive modeling. Evaluates institutional performance across country clusters, bridging governance theory and empirical forecasting. |
| Sectoral Disruption | AI reshapes industrial structure, shifts labor demand, and transforms GDP composition across sectors, especially services and manufacturing. | Webb (2019); Wolff et al. (2020); Felten et al. (2018) | Fragmented treatment across sectors; minimal macro-level clustering to reveal structural disruption patterns in EU industry. | Uses unsupervised clustering to identify sector-specific adoption profiles. Highlights divergence in capital allocation, services sector readiness, and AI-enabled economic transformation across EU regions. |
| Firm-Level Innovation | AI supports innovation in data-intensive firms but can exacerbate concentration and reduce market diversity. | Cockburn et al. (2019); Czarnitzki et al. (2023); Babina et al. (2024) | Focus remains on firms rather than how national macro-structural factors condition firm-level innovation. | Bridges micro-macro gap by connecting firm innovation tendencies to national-level conditions. Demonstrates how digital readiness and access to credit foster innovation across different macro clusters. |
| Global Development & Digital Divide | AI could deepen global and regional inequality unless inclusive strategies and digital infrastructure are prioritized. | Trabelsi (2024); Wang et al. (2025); Zekos (2021); Gualandri & Kuzior (2024) | Generic policy suggestions dominate; lacks empirical stratification of digital readiness and structural inequality within advanced regions such as the EU. | Constructs EU-specific digital adoption clusters. Identifies lagging regions with structural economic constraints, highlighting digital policy misalignments and regional divergence risks. |

Artificial intelligence is not just a technological innovation but a transformative macroeconomic force shaped by national institutions, policy, and structural conditions. While existing work hints at AI's productivity value and risk factors, such as inequality and labor polarization, most of it is theoretical, firm-level, or U.S.-centric. Crucial macroeconomic factors in AI adoption, such as trade openness, credit access, and health expenditure, remain understudied. Our study fills this gap by combining panel econometrics and machine learning in a study of AI diffusion across European countries. It identifies structural and institutional factors and provides a novel, evidence-based typology for inclusive and adaptive AI policy formation.

 A2.2. Attention should be paid to the relevance of the publications referenced by the authors in this section. In particular, the articles from 2015 and 2017 are outdated given the speed of development of information technology.

Q2.2 Articles from 2015-2017 have been deleted.

A2.3. In general, the language is appropriate for an academic style. However, phrases such as “brings about wage polarization” (line 146), “raise specter of threats” (line 156), and “provide comfort” (line 151) should be avoided. Instead, it is advisable to use more formal and accurate academic synonyms. For example, “contributes to wage polarization,” “raises concerns about,” “suggests/indicates.”

Q2.3 The indicated sentences have been removed.

A2.4. It is advisable to carefully check for grammatical errors, typos, and punctuation inaccuracies that may give the impression of carelessness (for example, “macro economic theory” (line 139) instead of “macroeconomic theory,” as well as “post-busk economies” (line 161) — perhaps this should be “post-bust economies” or other terminology?).

Q2.4. The indicated expressions have been removed.

3. Notes to the section “A Methodologically Integrated Approach to Analyzing AI Adoption: Panel Econometrics Meets Machine Learning”

A3.1. As mentioned above, the information from the introduction should be moved to this section. In addition, to improve the readers' perception of the information, it is advisable to structure the section with subheadings devoted to panel models, machine learning models, and cluster analysis.

Q3.1 We have added the following paragraph:

To systematically delineate the structural and institutional determinants of AI adoption in Europe, this study employs an integrated empirical framework. It complements panel econometric approaches with supervised machine learning and unsupervised clustering algorithms in pursuit of explanatory robustness, predictive accuracy, and typological differentiation. These approaches work in synergy to produce a multi-dimensional view of AI diffusion across European countries.

Panel Econometric Models. Given the multi-country, multi-year nature of the dataset (2018–2023, 28 European countries), the panel data models account for both the cross-sectional and the time-series dimension, thereby capturing dynamic heterogeneity and unobservable country-level effects. Fixed effects models perform especially well in controlling for unobserved, time-invariant heterogeneity across countries, such as differences in institutional environments, regulation, or cultural perceptions of technological innovation, thereby allowing a more refined causal interpretation of the macroeconomic determinants of AI adoption. Random effects models, in contrast, assume orthogonality between the regressors and the unobserved country-specific effects and yield more efficient estimates when this assumption holds. The choice between the two designs is guided by the Hausman test of estimator consistency, with ancillary evidence from specification diagnostics such as the Breusch-Pagan test and F-tests (Popović et al., 2025). Application of these models reveals statistically significant and robust associations between AI adoption (ALOAI) and a variety of macroeconomic indicators, with positive associations for health expenditure, GDP per capita, trade openness, and inflation. At the same time, it identifies in some instances unexpectedly negative associations with domestic credit extended to the private sector and with gross fixed capital formation, indicative of potential inefficiencies or structural mismatches in how financial and capital resources are deployed in some countries.

Supervised Machine Learning Algorithms. As a complement to the econometric analysis, eight supervised machine learning algorithms were systematically compared: Boosting, Decision Tree, K-Nearest Neighbors (KNN), Linear Regression, Neural Networks, Random Forest, Regularized Linear Regression, and Support Vector Machines (SVM). These models were trained on normalized data and evaluated with standard indicators of predictive performance, including MSE, RMSE, MAE, MAPE, and R² (Tapeh & Naser, 2023; Ozkan-Okay et al., 2024). Among them, KNN emerged as the best-performing algorithm, with near-zero prediction error and full explanatory power (R² = 1.000). To establish the robustness of these results and address concerns about overfitting, a cross-validation exercise was conducted, as detailed in Appendix A. Moreover, a dropout analysis within the KNN framework revealed domestic credit to the private sector, GDP per capita, and health expenditure as the most significant drivers of AI adoption, underscoring the role of internal financial conditions and institutional capacity relative to external trade openness in determining technology diffusion.
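The two robustness checks mentioned here, k-fold cross-validation for KNN and the leave-one-variable-out "dropout" analysis, can be sketched as follows; `features` and the `ALOAI` column are assumed names, and the actual Appendix A procedure may differ in detail.

```python
# Minimal sketch: CV-based R2 for KNN, then importance via feature dropout.
from sklearn.model_selection import cross_val_score, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor

def knn_cv_r2(X, y, folds=5):
    pipe = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
    cv = KFold(n_splits=folds, shuffle=True, random_state=0)
    return cross_val_score(pipe, X, y, cv=cv, scoring="r2").mean()

baseline = knn_cv_r2(df[features].values, df["ALOAI"].values)
print(f"baseline cross-validated R2 = {baseline:.3f}")

# Dropout analysis: the fall in CV R2 when one regressor is removed
# serves as that regressor's importance score.
for col in features:
    kept = [c for c in features if c != col]
    loss = baseline - knn_cv_r2(df[kept].values, df["ALOAI"].values)
    print(f"dropping {col:<25s} -> R2 loss {loss:+.3f}")
```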

Unsupervised Clustering Analysis. To explore latent typologies of AI diffusion in European countries, a range of unsupervised learning schemes was employed, including Density-Based, Fuzzy C-Means, Hierarchical, Model-Based, Neighborhood-Based, and Random Forest clustering (Shokouhifar et al., 2024). These procedures were applied to macroeconomic and institutional indicators to disclose structurally similar groupings of countries with diverse AI adoption trajectories. Inferences derived from these clusters yield meaningful structure in countries' responses to macroeconomic pressures, such as inflation shocks (Czeczeli et al., 2024), fiscal policy changes (Andrejovská & Andrejkovičova, 2024), and digital labor preparedness gaps (Iuga & Socol, 2024), thereby enabling specialized comparative research and evidence-based policy design. This clustering-based line of research reinforces the complementarity of the multi-method design, merging causal inference from panel data models, predictive power from machine learning, and typological insight from clustering. Such juxtaposition is particularly relevant in spheres such as financial institutions and cybersecurity, where thematic segmentation and strategic differentiation matter most (Olasiuk et al., 2023), and it aligns with prevailing calls for multi-sectoral empirical underpinnings in AI policy research (Popescu et al., 2024).

The combined methodological design of panel data econometrics, supervised machine learning, and unsupervised clustering has several notable strengths for examining AI adoption in European countries. First, panel data econometrics offers stringent causal interpretability, controlling for both cross-sectional and temporal variation and for unobservable, time-invariant country-specific factors such as institutional quality or cultural attitudes toward innovation. This enhances internal validity and isolates the macroeconomic determinants of AI diffusion with methodological rigor. Second, complementing the econometric models with supervised machine learning (Boosting, KNN, SVM, Random Forest) enables the modeling of complex, non-linear associations overlooked by traditional linear models, while standardized performance indicators (MSE, RMSE, MAE, MAPE, R²) permit a sound comparison of algorithmic performance. KNN emerged as the top-performing model, with its validity tested through cross-validation (Appendix A), increasing confidence in the inferences while controlling for overfitting. Third, the dropout analysis within the KNN framework yielded variable-importance insights, with domestic credit, GDP per capita, and health expenditure as the main internal drivers of AI adoption, highlighting financial and institutional preparedness over external trade-based drivers. Fourth, the unsupervised clustering algorithms (Density-Based, Fuzzy C-Means, Hierarchical, Model-Based, Neighborhood-Based) permit the identification of latent typologies of AI diffusion, grouping structurally similar countries with diverging adoption patterns. These clusters provide rich input for focused policy interventions, highlighting how countries respond to inflationary pressures, fiscal duress, and labor preparedness gaps. Overall, this multi-method design unifies causal description, predictive modeling, and structural classification within one empirical framework. It serves as a guide for evidence-informed digital and innovation policy and aligns with current calls for multi-method approaches in AI governance scholarship (Shokouhifar et al., 2024; Popescu et al., 2024; Olasiuk et al., 2023).

A3.2. The authors explain in detail the feasibility of using panel models and machine learning models. However, it would be useful to justify in more detail the specific choice of each type of machine learning model (KNN, Random Forest, Boosting, SVM). Why these and not others (e.g., neural networks)? What are their advantages and potential disadvantages for the study?

Q3.2 We have added the following sentences within Section 5:

Our selection of machine learning algorithms, namely K-Nearest Neighbors (KNN), Random Forest, Boosting, Support Vector Machines (SVM), Decision Tree, Linear Regression, Regularized Linear Regression, and Neural Networks, was an intentional pursuit of methodological diversity, predictive power, and fit with the character of the dataset and the objectives of the study. These models were selected because each offers specific strengths in handling non-linearities and high-dimensional data, with varied levels of interpretability (Assis et al., 2025; Khan et al., 2020; Sutanto et al., 2024; Walters, Ortega-Martorell et al., 2022). KNN was chosen for its non-parametric ability to capture local patterns in complex, high-dimensional data. As it is sensitive to parameter selection and prone to overfitting, we counterbalanced these drawbacks with the cross-validation procedures in Appendix A. Random Forest was chosen for its ensemble learning capacity, reducing variance by pooling hundreds of decision trees and increasing predictive accuracy even under multicollinearity or noisy data, albeit at some cost in interpretability (Sutanto et al., 2024; Assis et al., 2025). A Gradient Boosting algorithm was used for its continuous focus on correcting the errors of preceding models, yielding potentially high precision and sensitivity to subtle data patterns, though requiring careful tuning to avoid overfitting and carrying computational expense (Walters et al., 2022). SVMs were used for their strength in high-dimensional spaces and their robustness to outliers (Assis et al., 2025; Khan et al., 2020). Decision Trees constitute a transparent baseline model, simple to visualize and interpret (Sutanto et al., 2024). Linear Regression and Regularized Linear Regression (e.g., Ridge, Lasso) served as baselines against which more complex models could be gauged, with regularization applied to prevent overfitting and multicollinearity (Khan et al., 2020). Neural Networks were included to explore their capacity for modeling highly non-linear associations, although they are computationally intensive and opaque (Khan et al., 2020; Walters et al., 2022). This heterogeneous set as a whole allows comparison of the performance trade-offs between simplicity and complexity, and between interpretability and accuracy, facilitating a richer, more robust, policy-relevant modeling of the determinants of AI adoption.

A3.3. Regarding cluster analysis (Hierarchical, Density-Based, Neighborhood-Based), it is advisable to specify which algorithms will be used (e.g., K-Means, DBSCAN, Agglomerative Clustering). How will the optimal number of clusters be determined?

Q3.3 Lines 988-990 indicate the algorithms used for the clustering analysis, namely:  “Density-Based, Fuzzy C-Means, Hierarchical, Model-Based, Neighborhood-Based, and Random Forest clustering”.

To indicate how the optimal number of clusters was determined, we have added the following text to Section 5, together with the Elbow Method graph.

The Elbow method is used to determine the optimal number of clusters. The optimal number of clusters is 7, as shown in the figure below:
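For reference, an elbow curve of this kind can be generated as in the sketch below, with `X` standing for the matrix of standardized macro indicators; the bend in the within-cluster sum of squares marks the chosen k (7 in the paper).

```python
# Minimal sketch of the Elbow method: inertia versus number of clusters.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X)
ks = range(1, 11)
inertias = [KMeans(n_clusters=k, n_init=25, random_state=0).fit(X_std).inertia_
            for k in ks]

plt.plot(ks, inertias, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("within-cluster sum of squares")
plt.title("Elbow plot: the bend suggests the optimal k")
plt.show()
```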

A3.4. The authors should justify why large enterprises (250+ employees) were chosen for the study. In addition, the adequacy of the sample size (151 observations) needs to be substantiated to ensure the statistical reliability, representativeness, and validation of the research results.

Q3.4 The variable chosen to investigate the adoption of artificial intelligence captures AI adoption in large European companies. This choice rests on both practical grounds and theoretical and economic policy considerations. In theory, such enterprises possess sophisticated organizational structures, significant economic means, and refined managerial capacities, the primary requirements for absorbing and utilizing advanced technologies such as artificial intelligence (AI). Large enterprises are significantly more likely to possess qualified manpower, advanced technological infrastructure, and the economic means to deploy AI efficiently in production and control systems (Ardito et al., 2024; Oldemeyer et al., 2025). SMEs face structural disadvantages, namely limited digitalization, limited financial means, and technical and strategic capacity shortages, which leave them with limited potential to deploy AI in a systematic, scalable manner (Zavodna et al., 2024; Kukreja, 2025). Large enterprises, moreover, possess not only greater innovative capacity but also clear potential for productivity and capacity gains in operations, as well as gains in improving their commercial offer to customers and stakeholders. In addition, adoption in such enterprises produces multiplier effects across value chains, indirectly stimulating innovation among SMEs as well (Ardito et al., 2024). On the statistical side, the sample of 151 observations, covering 28 European nations across six years (2018–2023), is sufficient for statistical robustness. The high R² of the models (0.924 in fixed effects) and the consistency of results across the econometric and machine learning models confirm the validity and representativeness of the method applied.

4. Notes to the section “Understanding AI Diffusion in EU Enterprises: Evidence from Fixed and Random Effects Models”

A4.1. The authors analyze data from countries such as Austria, Belgium, Bosnia and Herzegovina, Bulgaria, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, and Turkey. Their selection should be further justified. Bosnia and Herzegovina, Norway, Serbia, and Turkey are not members of the EU. At the same time, Croatia, Cyprus, and the Czech Republic, which are members of the EU, are missing from the list. This is a critical observation that requires changing the title of the section and editing the introduction and title of the article, or changing the list of countries included in the study.

Q4.1 The acronym EU and the expression European Union have been replaced with Europe or European.

A4.2. The formula presented in lines 290-291 should be numbered.

Q4.2 The formulas in the article have been numbered.

A4.3. The conclusions of the Hausman test should be interpreted more clearly. Although the p-value (0.328) indicates that the null hypothesis (GLS estimates are consistent) is not rejected, this does not mean that random effects are better. It only indicates the absence of a systematic difference between the estimates, not that the FE are incorrect. Given the conclusions about the significance of the F-test for group intercepts and the Breusch-Pagan test, which indicate the presence of heterogeneity, preference should still be given to the fixed effects model, as it allows for the control of unobserved time-constant characteristics of countries, which is key to avoiding biased estimates. This nuance should be clearly explained in the text, rather than just presented as a statistical result.

Q4.3 We have added the following sentences within Section 4:

Statistically, the fixed effects (FE) specification demonstrates significant explanatory power, with an R-squared of 0.924 and a high F-statistic (F = 41.706), indicating that the regressors explain a large proportion of the variance in the dependent variable. The random effects (RE) model also yields statistically significant estimates (Chi² = 75.88, p < 0.00001). According to the Hausman test (Chi² = 8.057, p = 0.328), however, no systematic difference is identified between the two estimators, which would otherwise flag the RE estimates as inconsistent. It must be clarified, though, that non-rejection of the null in the Hausman test neither implies that RE is superior nor invalidates FE as a specification; it only indicates that there is no statistical difference in the coefficient estimates between the two approaches. The choice of FE therefore cannot rest on the Hausman result alone but on a broader statistical and theory-based appraisal. In this regard, there is substantial diagnostic evidence in favour of FE. First, there is conclusive evidence of unobserved heterogeneity from the F-test on group intercepts (F = 17.36, p ≈ 0.00) and of substantial heteroskedasticity from the Breusch-Pagan test (Chi² = 158.842, p ≈ 0.00). Both findings support FE, owing to its superior ability to control for unobserved, time-invariant country-level characteristics (institutional environments, structural features, and long-run socio-economic changes) that, unless controlled for, can bias inference. Although the Durbin-Watson statistic (~0.59) hints at moderate autocorrelation, it does not invalidate the estimates in terms of significance or stability. Both the theory-based argument and the diagnostic evidence therefore support the fixed effects approach. The final text leaves no uncertainty that this choice rests not on a single test result but on an integrated appraisal consistent with the data structure and the research objectives.
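As an illustration, the heteroskedasticity diagnostic cited above can be reproduced in outline with statsmodels; `y` and the regressor matrix `X` are assumed inputs, and this is a sketch of the test rather than the authors' estimation code.

```python
# Minimal sketch of the Breusch-Pagan test on pooled-OLS residuals.
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

X_c = sm.add_constant(X)
resid = sm.OLS(y, X_c).fit().resid
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(resid, X_c)
print(f"Breusch-Pagan LM = {lm_stat:.3f} (p = {lm_pval:.5f})")
# A small p-value signals heteroskedasticity, one of the diagnostics that
# supports the fixed effects specification with robust inference.
```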

5. Notes to the section “Decoding AI Adoption in the EU: A Comparative Evaluation of Predictive Models and Macroeconomic Drivers”

A5.1. It is advisable to detail the specifics of each of the three selected clustering methods. In particular, it should be noted what type of agglomeration (linkage) was used for hierarchical clustering (e.g., Ward, complete, average), what parameters were used for DBSCAN (eps, min_samples), what parameters were used for “Neighborhood-Based” (if it is K-Means, then the number of iterations, initialization).

Q5.1 Appendix B was introduced to provide a representation of the hyperparameters of clustering algorithms.

 Appendix B. Hyperparameter Settings and Evaluation of Clustering Techniques

For clustering algorithms in the domain of unsupervised machine learning, correctly selected hyperparameters are needed to yield relevant, reproducible results. These hyperparameters are highly sensitive to both algorithmic structure and the nature of the data, controlling everything from cluster shape and density through convergence to interpretability. Tables B1-B6 offer systematic tabular summaries of the hyperparameter settings of six diverse clustering methods: Density-Based (DBSCAN), Fuzzy C-Means, Hierarchical, Model-Based, Neighborhood-Based (K-Means), and Random Forest Clustering. Each table specifies the crucial operational parameters (distance measures, iteration limits, initialization schemes, cluster determination plans) that regulate each method's behavior. Common to all procedures is feature scaling, which ensures that variables contribute proportionally to distance-based computations. The selection of the cluster number is further guided by the Bayesian Information Criterion (BIC), promoting objectivity and parsimonious modelling. Some methods, such as Random Forest or Hierarchical Clustering, are largely deterministic, while others have stochastic components; since no fixed random seed is set in these configurations, strict reproducibility is not enforced. This section provides a critical commentary on these hyperparameters, discussing their suitability, potential disadvantages, and implications for the quality and robustness of the clustering.

Table B1. Hyperparameters of Density Based Clustering

| Parameter | Value | Description |
| --- | --- | --- |
| Epsilon neighborhood size | 2 | Radius (ε) used to define the neighborhood around a point. |
| Min. core points | 5 | Minimum number of points within ε to define a core point. |
| Distance | Normal | Distance metric used (likely Euclidean). |
| Scale features | Enabled | Features are scaled before training. |
| Set seed | Disabled | No random seed is set; results may vary on different runs. |

Table B1 reports the most prominent hyperparameters of the density-based clustering approach, here specifically DBSCAN (Density-Based Spatial Clustering of Applications with Noise). The choice of an epsilon neighborhood of size 2 with a minimum of 5 core points is in line with common DBSCAN practice, insofar as a trade-off must be struck between sensitivity to noise points and the ability to identify well-separated dense clusters. These parameters set the core structure of the clustering approach: ε sets the radius within which points are considered neighbors, while min_samples (here 5) allows only sufficiently dense regions of space to be considered clusters. "Normal" as the distance measure indicates a standard Euclidean distance, although defining this term would be beneficial, as it varies across platforms. Notably, feature scaling is enabled, as is desirable in DBSCAN when features have considerably differing ranges; without scaling, features with larger ranges would bias the distance measure and, consequently, the computed neighborhoods. The absence of a hardcoded random seed implies that, while DBSCAN is primarily deterministic, initializations or preprocessing steps with stochastic operations can cause slight variability of output between runs. This has little effect on DBSCAN itself, but controlling the seed would improve reproducibility.
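A sketch of the Table B1 configuration with scikit-learn, assuming `X` is the country-by-indicator matrix, follows; the label -1 marks noise points.

```python
# Minimal sketch of DBSCAN with the Table B1 settings (eps=2, min_samples=5).
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X)     # feature scaling enabled
labels = DBSCAN(eps=2.0, min_samples=5, metric="euclidean").fit_predict(X_std)
print("clusters:", len(set(labels) - {-1}), "| noise points:", (labels == -1).sum())
```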

Table B2. Hyperparameters of Fuzzy C-Means Clustering

| Parameter | Value | Description |
| --- | --- | --- |
| Max. iterations | 25 | Maximum number of iterations for the optimization process. |
| Fuzziness parameter | 2 | Fuzziness coefficient used in fuzzy clustering (e.g., Fuzzy C-Means). |
| Scale features | Enabled | Input features are scaled prior to clustering. |
| Set seed | Disabled | No specific random seed is set. |
| Cluster determination | Optimized (BIC) | Number of clusters is automatically determined using Bayesian Information Criterion (BIC). |
| Max. clusters | 10 | Maximum number of clusters considered during automatic optimization. |
| Fixed cluster number | Disabled | The number of clusters is not fixed but selected based on the optimization criterion. |

Table B2 summarizes the hyperparameter allocation of the Fuzzy C-Means (FCM) clustering algorithm. The fuzziness parameter is set to 2, a common research default for fuzzy clustering, and it specifies how much cluster memberships overlap. A fuzziness of 2 allows a balanced degree of fuzziness in the cluster estimates, such that units can belong partially to several clusters, which is FCM's main strength over hard clustering algorithms like K-Means. The maximum of 25 iterations is an adequate computational threshold for convergence, although in some complex data settings convergence may require a larger threshold. Feature scaling is enabled, as required in FCM: the algorithm is distance-based and may be distorted by values of unequal magnitude. Notably, cluster determination is automated using the Bayesian Information Criterion (BIC), with a maximum of 10 clusters considered. This allows selection of a model that balances fit against complexity, guarding against overfitting. Optimizing the cluster number via BIC rather than fixing it brings robustness and objectivity to the clustering, especially in exploratory data analysis. Finally, because the random seed is not initialized, initialization and convergence behavior may vary subtly, though typically only marginally in FCM. Setting a seed would nonetheless facilitate reproducibility in cases where stable clustering output across executions is a requirement.
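Because Fuzzy C-Means is not part of scikit-learn, the sketch below implements the standard FCM updates from scratch with the Table B2 settings (fuzziness m = 2, at most 25 iterations); a tested library such as scikit-fuzzy would be preferable in production use.

```python
# Minimal from-scratch sketch of Fuzzy C-Means; X is an (n_samples, n_features)
# array that is assumed to be already scaled.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=25, tol=1e-5, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)              # rows = membership vectors
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u_new = d ** (-2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=1, keepdims=True)  # standard FCM membership update
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u
```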

Table B3. Hyperparameters of Hierarchical Clustering

| Parameter | Value | Description |
| --- | --- | --- |
| Distance | Euclidean | Distance metric used to compute dissimilarity between data points. |
| Linkage | Average | Linkage method used in hierarchical clustering (average distance between clusters). |
| Scale features | Enabled | Input features are normalized or standardized before clustering. |
| Set seed | Disabled | No fixed random seed is applied. |
| Cluster determination | Optimized (BIC) | Number of clusters is automatically selected using Bayesian Information Criterion (BIC). |
| Max. clusters | 10 | Maximum number of clusters considered during automatic model selection. |
| Fixed cluster number | Disabled | The number of clusters is not manually set; it's determined through optimization. |

Table B3 defines the configuration parameters of the hierarchical clustering procedure, with emphasis on automated model selection. Distances between data points are computed with the Euclidean metric, standard in hierarchical clustering and appropriate for continuous, scaled data. The average linkage method, which computes the average distance over all pairs of points in two clusters, yields well-balanced cluster patterns, is not sensitive to chaining, and has only moderate sensitivity to extreme values; it is a suitable choice for circumventing the compact-cluster bias of complete or Ward linkage. Notably, feature scaling is enabled, so all variables contribute proportionally to the distance metric. This step is important when features differ in scale or units, since raw features can skew clustering output by dominating the distance calculations. The model includes automatic determination of the cluster number via the Bayesian Information Criterion (BIC), adding a statistically driven layer of cluster selection that weighs complexity against fit. The number of clusters is capped at 10, limiting the search space and reducing the risk of overfitting in high-dimensional data. The random seed is not fixed, but this matters little for agglomerative hierarchical clustering, which is deterministic given a fixed distance matrix. Reproducibility can still be enhanced if any random step is involved (e.g., feature sampling or initialization during preprocessing).
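A sketch of the Table B3 configuration with SciPy's agglomerative routines follows; the 7-group cut is for illustration only, since the paper selects the number of clusters via BIC.

```python
# Minimal sketch: average-linkage hierarchical clustering on scaled features.
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X)
Z = linkage(X_std, method="average", metric="euclidean")  # Table B3 settings
labels = fcluster(Z, t=7, criterion="maxclust")           # cut into 7 groups
```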

Table B4. Hyperparameters of Model Based Clustering

| Parameter | Value | Description |
| --- | --- | --- |
| Model | Auto | The clustering model is selected automatically by the system. |
| Max. iterations | 25 | Maximum number of iterations allowed during the training process. |
| Scale features | Enabled | Features are scaled before training (e.g., normalized or standardized). |
| Set seed | Disabled | No fixed random seed is set. |
| Cluster determination | Optimized (BIC) | The number of clusters is selected automatically using the Bayesian Information Criterion (BIC). |
| Max. clusters | 10 | Maximum number of clusters considered during model selection. |
| Fixed cluster number | Disabled | The number of clusters is not set manually but determined through optimization. |

Table B4 shows the hyperparameter settings of the model-based clustering method. The model is set to 'Auto', allowing automatic selection of the most appropriate model for the data. This adaptive setting provides flexibility and can improve clustering output when the underlying data structure is unknown. The maximum number of iterations is set to 25, as is conventional for the Expectation-Maximization (EM) algorithm used in model-based clustering. This suffices under most conditions but may inhibit convergence in larger, complex datasets, a trade-off between computational cost and model precision. Feature scaling is enabled, a necessary step because model-based clustering commonly assumes normally distributed features contributing equally; without scaling, features with larger ranges could exert excessive influence on the covariance structure and skew the cluster assignment. Cluster identification is carried out by optimizing the Bayesian Information Criterion (BIC) over up to 10 clusters. BIC is well established in model selection, balancing fit against model simplicity, and automating it removes subjective bias and enhances the reproducibility of the clustering solutions. Although no random seed is defined, enabling small variations due to the stochastic initialization of EM procedures, this is a minor constraint.
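In outline, the BIC-driven selection described here amounts to fitting Gaussian mixtures over a range of component counts and keeping the lowest-BIC fit, as in this sketch (with `X` an assumed indicator matrix):

```python
# Minimal sketch of model-based clustering with BIC selection (Table B4).
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X)
fits = [GaussianMixture(n_components=k, max_iter=25, random_state=0).fit(X_std)
        for k in range(1, 11)]                    # up to 10 clusters
best = min(fits, key=lambda g: g.bic(X_std))      # lowest BIC wins
print("selected k =", best.n_components, "| BIC =", round(best.bic(X_std), 1))
labels = best.predict(X_std)
```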

Table B5. Hyperparameters of Neighborhood-Based Clustering.

| Parameter | Value | Description |
| --- | --- | --- |
| Center type | Means | Specifies that cluster centers are calculated as means (K-Means clustering). |
| Algorithm | Hartigan-Wong | The specific algorithm used for K-Means optimization. |
| Distance | Euclidean | Distance metric used to compute dissimilarity between points. |
| Max. iterations | 25 | Maximum number of iterations for convergence. |
| Random sets | 25 | Number of random initializations (starting configurations). |
| Scale features | Enabled | Input features are scaled (normalized or standardized) prior to clustering. |
| Set seed | Disabled | No random seed is set for reproducibility. |
| Cluster determination | Optimized (BIC) | Number of clusters is selected automatically using Bayesian Information Criterion (BIC). |
| Max. clusters | 10 | Maximum number of clusters to consider during model selection. |
| Fixed cluster number | Disabled | The number of clusters is not fixed manually. |

Table B5 illustrates the hyperparameter settings of the neighborhood-based clustering approach, evidently K-Means clustering, as the centers are defined as means. The Hartigan-Wong algorithm, a commonly used method for minimizing within-cluster variability, is fast and efficient in practice. The Euclidean distance measure suits K-Means, which assumes isotropic spherical clusters and performs best with features of similar scale. Accordingly, feature scaling is enabled so that all variables contribute equally to the distance calculations, a vital preprocessing step, especially for mixed-scale or high-dimensional data. The procedure allows up to 25 iterations and 25 random initializations, improving the robustness of the solution by letting the algorithm escape suboptimal local minima. Multiple initializations are particularly important in K-Means, which is sensitive to the initial position of the centroids. Cluster selection is carried out automatically using the Bayesian Information Criterion (BIC), with up to 10 clusters examined. While BIC is best known from model-based clustering, its application here is a data-guided attempt to trade model quality against simplicity, adding objectivity in specifying the cluster number. No fixed number of clusters is imposed, allowing flexibility in capturing the data structure. Although no random seed is provided, which prevents strict reproducibility, this does not disqualify the quality of the configuration.
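A sketch of this setup follows. R's kmeans() uses Hartigan-Wong by default, whereas scikit-learn's KMeans implements Lloyd-style updates, so the sketch only approximates the configuration while mirroring the 25 restarts and the iteration cap; the 7-cluster choice follows the elbow result reported above.

```python
# Minimal sketch of the neighborhood-based (K-Means) setup in Table B5.
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X)         # scaling enabled
km = KMeans(n_clusters=7, n_init=25, max_iter=25, random_state=0).fit(X_std)
labels, centers = km.labels_, km.cluster_centers_
```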

Table B6. Hyperparameters of Random Forest Clustering.

| Parameter | Value | Description |
| --- | --- | --- |
| Trees | 1000 | Number of trees used in the ensemble (likely Random Forest-based clustering). |
| Scale features | Enabled | Input features are scaled prior to training. |
| Set seed | Disabled | No random seed is specified. |
| Cluster determination | Optimized (BIC) | The number of clusters is selected automatically using Bayesian Information Criterion (BIC). |
| Max. clusters | 10 | Maximum number of clusters evaluated during the optimization. |
| Fixed cluster number | Disabled | The number of clusters is not set manually but determined during training. |

Table B6 illustrates the settings of a clustering method based on Random Forests, increasingly employed in unsupervised learning via proximity matrices or tree-based measures of similarity. The use of 1,000 trees signals a strong emphasis on stability and robustness: larger ensembles yield increasingly accurate and stable proximity estimates, which are essential when retrieving clusters from ensemble-based models. Feature scaling is enabled, which, while not technically required for decision-tree-based models, is desirable when other preprocessing operations or downstream algorithms rely on distance measures; this reflects a consistent feature-handling policy across the clustering algorithms in this study. The cluster assignment procedure is streamlined using the Bayesian Information Criterion (BIC), with a maximum of 10 clusters considered. Although BIC is traditionally associated with probabilistic or model-based clustering, its use here is likely a post-processing quality judgment that trades model complexity against goodness of fit, enhancing the objectivity of the clustering output. The number of clusters is not fixed, allowing the model to adapt to the data without conforming to prior assumptions about structure. No random seed is given, with possible effects on reproducibility depending on the implementation, but with 1,000 trees the ensemble's proximity estimates are typically stable across runs.
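In outline, proximity-based Random Forest clustering can be sketched as below, in the spirit of Breiman's unsupervised random forests: real rows are discriminated from column-permuted synthetic rows, proximities are read off shared terminal nodes, and 1 - proximity is clustered. This is an assumed implementation for illustration, not a description of the authors' software.

```python
# Minimal sketch of Random Forest clustering via tree proximities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X_syn = np.column_stack([rng.permutation(col) for col in X.T])  # break joint structure
Z = np.vstack([X, X_syn])
y = np.r_[np.ones(len(X)), np.zeros(len(X_syn))]

rf = RandomForestClassifier(n_estimators=1000, random_state=0).fit(Z, y)
leaves = rf.apply(X)                              # (n_samples, n_trees) leaf ids
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

cond = 1.0 - prox[np.triu_indices(len(X), k=1)]   # condensed distance vector
labels = fcluster(linkage(cond, method="average"), t=7, criterion="maxclust")
```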

 A5.2. The authors should more clearly define the relationship between the obtained clusters and the actual indicators of AI implementation.

Q5.2 We have added the following sentences to Section 6:

Conclusions. The research systematically relates the AI adoption rates of large companies in Europe to countries' macroeconomic profiles, using the average of the ALOAI indicator per cluster. The seven clusters revealed by the non-hierarchical clustering procedures with K-Nearest Neighbors are clearly distinguished by a set of standardized indicators (gross domestic product per head, trade openness, credit to the non-financial sector, gross fixed capital formation, healthcare expenditure as a percentage of GDP, and the rate of inflation), reflecting the structural, institutional, and financial environments of the national contexts.

AI adoption, as proxied by each cluster's ALOAI centroid, varies with these structural characteristics: in some clusters adoption is high, in others low. In Cluster 2, AI adoption is high (ALOAI = 1.407), combined with favorable credit market access, a sound macroeconomic position, moderate price pressure, and good levels of healthcare expenditure. These are countries with stable economic environments and well-developed financial and social structures, which are better placed for digital transitions and the incorporation of frontier technologies. By contrast, Cluster 5—the largest by membership—records the lowest ALOAI value (-0.837), with negative readings on most economic parameters. It captures structurally challenged economies with low market integration and weak investment capacity, which face systematic obstacles to AI adoption.
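A toy illustration of the centroid reading used here, with invented numbers chosen only so that the group means echo the reported 1.407 and -0.837 (country codes and column names are hypothetical):

```python
# Per-cluster means of the standardized ALOAI indicator.
import pandas as pd

df = pd.DataFrame({
    "country": ["AT", "BE", "DK", "BG", "HR", "RS"],
    "cluster": [2, 2, 2, 5, 5, 5],
    "aloai_z": [1.35, 1.48, 1.39, -0.90, -0.75, -0.86],
})
centroids = df.groupby("cluster")["aloai_z"].mean().round(3)
print(centroids)   # cluster 2 -> 1.407, cluster 5 -> -0.837
```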

But the relationship is not a direct one. Cluster 1, for instance, has a very positive macro profile but a near-zero ALOAI of 0.018; wealth and investment are therefore not sufficient to foster AI adoption in the absence of complementarities such as credit accessibility or institutional maturity. Conversely, Cluster 4 shows high AI adoption in a setting of high export rates and trade openness but low levels of healthcare expenditure and public investment, suggesting that adoption can advance even where complementarities in other sectors are lacking.

The research thus stresses that the clusters are not mere statistical agglomerations but coherent economic structures describing specific "digital readiness" and innovation-propensity profiles. In this context, it is worth highlighting again that ALOAI is a direct manifestation of the underlying macro indicators, and its variation across clusters can be accounted for by the systemic, institutional, and financial capacities of the environments analyzed.

Lastly, the evidence supports the view that large-firm adoption of AI in Europe is deeply connected with the macro-institutional context. It is not simply driven by measures of economic prosperity, since it requires strategic complementarity between financial resources, institutional capacity, digital infrastructure, and organizational complexity. The resulting clusters provide a resilient and reproducible interpretative key to the structural determinants of AI adoption at the continental level.

A5.3. The conclusion made by the authors in lines 664-666 is general in wording and not substantiated. This statement should be argued or reworded.

 Q5.3 We have added references to the propositions in lines 664-666

These results also highlight the usefulness of interpretable machine learning techniques in policy design, where knowing the specific impact of individual variables can support better-targeted interventions than black-box prediction (Ning et al., 2022; Walters et al., 2022; Chib & Singh, 2023).

  1. Notes on the “Conclusions” section

6.1 General conclusions need to be specified. The authors should focus on the results obtained and emphasize the scientific and practical value of the study.

6.1 We rewrote the conclusions as follows:

This research provides a refined empirical investigation of large European firms' adoption of artificial intelligence (AI), combining panel data econometrics with ancillary machine learning models. We present robust, multifaceted evidence of how both the macroeconomic environment and labor market characteristics moderate firms' willingness and capacity to adopt AI technologies.

From a scientific perspective, a central contribution of this research is the demonstration that AI adoption is driven not by a single variable but by a set of related variables. Both fixed-effects and dynamic panel data models confirm that labor market composition—more particularly, the prevalence of formal employment, self-employment, and employment in the service sector—substantially determines adoption rates. Economies with larger shares of salaried employment, and those with active self-employment markets, enjoy greater rates of AI adoption. This implies that institutional security in organized labor markets, coupled with entrepreneurial vigor, creates more favorable ground for technical advancement.

The effect of unemployment and precarious work is likewise decisive. The significantly negative correlation of AI adoption with these measures means that structural vulnerability in the labor market can act as a disincentive to digital change. High rates of unemployment and precarious work can reduce firms' financial and organizational capacity to invest in emerging technologies; moreover, such settings can impair the quality of the flexible workforce that successful AI integration requires.

Another important scientific finding is the temporal persistence of AI adoption, evident in the dynamic panel model. The positive coefficient of the lagged dependent variable indicates that, once firms undertake AI adoption, the process is cumulative and self-reinforcing. This has important theoretical implications, revealing a path-dependent process in which early adoption yields greater returns through learning, internal capacity-building, and adaptation at the institutional level.

The macro view of this study corroborates and bolsters these findings. High rates of public health expenditure show a positive association with the diffusion of AI, with human capital and institutional capacity emerging as essential determinants. Trade openness is likewise a significant positive driver, confirming that interconnected economies face stronger technology spillovers and competitive pressure, in response to which AI adoption is initiated. On the other hand, negative associations with domestic credit and gross fixed capital formation challenge the conventional assumption that more financial and physical capital invariably contributes to innovation. The implication is that the direction and quality of expenditure, rather than merely its volume, matter most: investment devoted to physical infrastructure or conventional industry may neglect the digital capabilities AI requires. This suggests a refined understanding of capital allocation in innovation-led growth processes.

The scientific value of this research is further strengthened by the use of machine learning models, in this instance K-Nearest Neighbors (KNN), to validate and supplement the econometric results. These confirm the heterogeneity of AI adoption at the national level and the significance of macro-labor structures in grouping similar national profiles together. Specifically, countries with comparable economic indicators but differing AI adoption levels demonstrate the relevance of intangible factors—such as policy coordination, strategic governance, and the quality of institutions—in shaping digital readiness.

From a policy viewpoint, the stakes of this work are high. As a starting point, the policy architecture aimed at speeding up AI uptake has to be reviewed: policies should not only provide financial incentives or digital connectivity but also correct failures in labor markets. Policies that promote formal employment, reskilling, and innovation clusters in the service sector will fare better at accelerating digital transformation. Second, the research stresses the long-term strategic alignment of macro policy with digital goals: trade, credit, and investment policies need reformulation in favour of intangible assets, institutional development, and knowledge-based industry. Moreover, given the established path dependence of AI adoption, public support is required not only at initial adoption but throughout the whole lifecycle of technology adoption. Lastly, this study demonstrates that AI adoption is a multi-level process embedded in the broader economic and institutional context. It provides a sound empirical foundation for further research and definite insights for policy-makers who wish to facilitate an inclusive, sustainable digital transformation in Europe.
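A hedged sketch of the two panel specifications described above, assuming the `linearmodels` package and illustrative variable and file names (not the paper's exact dataset). A simple within estimator with a lagged dependent variable is shown for clarity; dynamic panel practice would typically turn to a GMM estimator such as Arellano–Bond to address Nickell bias.

```python
# Fixed-effects and dynamic panel sketches for ALOAI (illustrative names).
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("panel.csv").set_index(["country", "year"])  # assumed file

# Static fixed-effects model: ALOAI on labor market covariates.
fe = PanelOLS.from_formula(
    "aloai ~ 1 + wage_emp + self_emp + services_emp + unemployment"
    " + EntityEffects",
    data=df,
).fit(cov_type="clustered", cluster_entity=True)

# Dynamic specification: the lagged dependent variable captures persistence.
df["aloai_lag"] = df.groupby(level="country")["aloai"].shift(1)
dyn = PanelOLS.from_formula(
    "aloai ~ 1 + aloai_lag + wage_emp + unemployment + EntityEffects",
    data=df.dropna(),
).fit(cov_type="clustered", cluster_entity=True)

print(fe.params)
print(dyn.params["aloai_lag"])   # positive -> cumulative, self-reinforcing
```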

 

A7. Authors should also add a “Discussion” section, where they should provide information about existing limitations and outline prospects and directions for future scientific research.

Q7. We have added the following section:

  1. Discussions, Limitations and Future Research

This study provides a multi-faceted, rigorous exploration of the drivers of artificial intelligence (AI) adoption among large European businesses. Employing panel data econometric modeling together with machine learning approaches, it deepens our understanding of how companies' digital transformation potential is substantially shaped by structural labor market and macroeconomic indicators. Its findings provide solid statistical evidence for some of the most powerful determinants of AI adoption—such as formal employment, the share of the service sector, self-employment rates, and labor market stability—again pointing to the importance of a solidly built socio-economic framework (Gualandri & Kuzior, 2024). But no research is ever complete, and several structural, methodological, and contextual issues open a heterogeneous set of potential research directions. Moreover, there are structural vulnerabilities in the European innovation ecosystem that are liable to stifle AI diffusion regardless of positive macro structures or favorable labor market configurations (Popović et al., 2025; Hoffmann & Nurski, 2021). Another limitation of this study is that it addresses only large companies (businesses with more than 250 employees). Although this segment is a natural first focus of AI adoption research—given the organizational size, financial means, and technological readiness of large companies—the same is in no way true of SMEs. SMEs represent over 99% of European businesses and face altogether different concerns, such as restricted access to finance, underdeveloped digital foundations, and limited institutional support (Ardito et al., 2024). Even though proxies such as formal or service employment give a macro-labor perspective, they only partially capture the finer points of digital capacity, such as educational attainment, digital literacy, or the concentration of R&D clusters. Finer-grained firm-level or subnational data would refine our understanding of how workforce quality and institutional readiness intersect in enabling AI adoption (Kabashkin et al., 2023; Bogoslov et al., 2024).
From a methodological viewpoint, while panel models offer robustness, they treat countries as homogeneous units. Yet Europe is characterized by massive within-country heterogeneity: regional differences—in industrial clusters, in labor markets, in innovation systems—are at times greater in effect than national means (Mallik, 2023). Including regional data or multilevel modeling would therefore be of value in understanding such intra-country dynamics. A further problem is Europe's underdeveloped venture capital framework, one reason for the inhibited scale-up and commercialization of AI. Europe's venture ecosystem remains at a developmental stage compared with the U.S. and China, which tends to compromise high-risk, high-reward AI research (Brey & van der Marel, 2024; Leogrande et al., 2022). Additionally, Europe's open innovation systems lack cohesion: inadequate collaboration between industry, academia, and government hinders knowledge transfer and the diffusion of technology, halting the development of vibrant, scalable innovation systems (Misuraca & Van Noordt, 2020). Persistent regional disparities—in transport infrastructure, human capital quality, and education—also produce a "dual-speed" AI adoption across the continent: urban centers like Paris or Amsterdam accelerate, while rural zones and peripheral regions trail, with further divergence at risk (Mallik, 2023). The study concludes that AI adoption is not only reactive to market signals but a path-dependent process, highly persistent once initiated. This means policy interventions towards AI adoption would have to promote not only initial adoption but long-term incorporation along the value chain. Future work should refine the empirical resolution further, especially at firm and subnational levels, and examine how institutional, financial, and labor market structures collectively define digital transformation outcomes. Comparative work between countries with mature versus immature venture capital markets can show how finance mediates adoption gaps. Overall, while this research identifies significant macroeconomic and labor market drivers of AI adoption, it also highlights structural vulnerabilities in Europe's innovation ecosystem. Filling the gaps in capital markets, open innovation, and human capital is necessary—not only for AI's future but for Europe's digital sovereignty in the world market.

CONCLUSION: In summary, the article is a valuable contribution to understanding the relationships between key macroeconomic indicators and the level of AI implementation. However, to improve the scientific quality of the work, it would be advisable to revise a number of points raised in the review. The most critical comment is to review the list of countries selected for the study. Several of them are not EU member states, which requires changing the title of the article or adjusting the countries to be studied.

ANSWER: Thank you, dear reviewer. We have addressed each of your points. Specifically, we have removed all references to the European Union, referring instead to Europe as a geographical rather than an institutional or legal entity.

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

Dear Authors,

Happy to review your article.

This is an excellent study in the area of digital transformation that provides the relationship of AI adoption with macroeconomic indicators. Table 1: synthesis of the literature improves the readability and significance of your research. The article supports and contrasts its finding vividly with the existing literature in similar context such as mobile technologies. Application of large number of machine learning algorithms will be helpful for prospective researchers to compare their results.

Please provide the rationale for excluding the agriculture, mining, and finance sectors.

It would be nice to give identified clusters some meaningful name and/or define them with their attributes in section 6 with subheadings.

Please explain the dataset and its preprocessing steps.

Also mention the limitations of the study and future research directions

Author Response

POINT TO POINT ANSWERS TO REVIEWER 3

Q1. This is an excellent study in the area of digital transformation that provides the relationship of AI adoption with macroeconomic indicators. Table 1: synthesis of the literature improves the readability and significance of your research. The article supports and contrasts its finding vividly with the existing literature in similar context such as mobile technologies. Application of large number of machine learning algorithms will be helpful for prospective researchers to compare their results.

A1. Thanks dear reviewer 3.

Q2. Please provide the rationale for excluding the agriculture, mining, and finance sectors.

A2. We have added the following sentences in Section 3.

Some specifications on the definition of the ALOAI variable and on the exclusion of some industrial sectors. The exclusion of agriculture, mining, and finance from the ALOAI indicator is methodologically and substantively defensible. These sectors have technologically and structurally distinct profiles that differ significantly from manufacturing and services. Agriculture and mining, for example, rely on capital-intensive modes of production with limited digitalisation; although AI applications exist in them—such as precision agriculture or predictive maintenance—they remain comparatively niche and do not stretch pervasively across large enterprises (Hasteer et al., 2024). Finance, on the other hand, is a digital outlier industry with early and sophisticated AI adoption in fields such as fraud detection, algorithmic trading, and customer analytics (Hassan, 2024). Including this sector would risk muddying cross-industry comparability because of its extremely high degree of digital maturity (Lopez-Garcia & Rojas, 2024). In addition, finance is subject to a distinct set of regulatory schemes that influence AI adoption in ways inapplicable to other industries, introducing regulatory confounders at variance with the study's goal of establishing the macroeconomic and labour market determinants of AI adoption (Kumari et al., 2022). Including such structurally distinct sectors would reduce cross-industry comparability and the coherence of the ALOAI indicator. Moreover, agriculture and mining typically have irregular data coverage in EUROSTAT, especially owing to firm size distributions and confidentiality restrictions (Hasteer et al., 2024), and their inclusion would compromise the statistical soundness of the study. Their exclusion thus ensures a sounder, more comparable, and more policy-relevant depiction of AI adoption among large enterprises in Europe.

 Q3. It would be nice to give identified clusters some meaningful name and/or define them with their attributes in section 6 with subheadings.

A3. We have added the following sentences within the section 6:

Defining Clusters by Macroeconomic and Innovation Attributes. Based on centroid features such as GDP per head, credit provision, and export intensity, read against AI adoption levels (ALOAI), the clusters can be named as follows:

  • Cluster 1: Paradox Economies of Structure – High trade intensity with wealth, but low AI adoption performance due to credit or institutional issues.
  • Cluster 2: Innovation-Ready Economies – Robust macroeconomic foundations and institutional sophistication supporting high AI adoption.
  • Cluster 3: Internally Constrained Economies – Moderate fundamentals with low external integration as well as subdued AI engagement.
  • Cluster 4: Market-Driven Innovators – High external orientation and competitiveness with restricted public investment.
  • Cluster 5: Structural Laggards – Economies with widespread macroeconomic and infrastructural shortcomings, with low AI adoption.
  • Cluster 6: Transitional Potentials – emerging markets with rising fundamentals whose AI-led transformation remains unrealized.
  • Cluster 7: Structural Outlier – a special case combining macroeconomic volatility in the form of hyperinflation with significantly low AI activity.

Q4. Please explain the dataset and its pre-processing steps.

A4. The following paragraph has been added in Section 3.

3.1 Data Preprocessing and Gap Filling: Implementing Piecewise Linear Interpolation for Missing Values

Missing data were treated through a piecewise linear interpolation applied across the dataset. The method was chosen for its empirical simplicity, low risk of distortion, and conformity with the structure of panel data: it creates plausible, continuous series within empirical boundaries while preserving comparability across cross-sections. Computationally, the method follows the simple linear formula below, which embodies the assumption of a uniform rate of change between known annual observations and avoids overfitting or spurious curvature in the interpolated estimates:

$\hat{x}_t = x_{t_1} + \dfrac{t - t_1}{t_2 - t_1}\,\bigl(x_{t_2} - x_{t_1}\bigr), \qquad t_1 \le t \le t_2,$

where $\hat{x}_t$ is the interpolated value at year $t$, and $x_{t_1}$ and $x_{t_2}$ are the known values at the bounding years $t_1$ and $t_2$. In Belgium, between 2021 (41.44) and 2023 (47.86), for instance, the 2022 interpolated value is exactly the arithmetic mean (44.65), as a simple linear interpolating function implies. In cases where only one initial data value is available (e.g., in 2020 or 2021), as for Germany or Croatia, prior years are backward-extrapolated along a fixed slope, while subsequent years are forward-extrapolated until the next available value is reached. Moreover, because the interpolated series exhibit no fluctuations or higher-order curvature, polynomial or spline interpolation is unnecessary. Adopted in this way, the approach ensures temporal homogeneity and continuity over partially complete time series and offers a smooth, plausible course of history. From a validity perspective, linear interpolation is a well-respected approach in economic research when credible end-point values exist and missing values must be approximated without injecting artificial deformation. Here, it yields a harmonized dataset amenable to panel and econometric investigation, with stable, interpretable estimates in the subsequent modelling stages.
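A minimal sketch of this rule on the Belgium series, assuming SciPy: `interp1d` with `fill_value="extrapolate"` reproduces the interior 2022 mean and extends the boundary slope outward, though the published table's edge values may reflect a different anchoring of the extrapolation.

```python
# Linear interpolation with constant-slope extrapolation at the edges.
import numpy as np
from scipy.interpolate import interp1d

years_known = np.array([2021, 2023, 2024])
values_known = np.array([41.44, 47.86, 66.27])       # Belgium originals

f = interp1d(years_known, values_known, kind="linear",
             fill_value="extrapolate")
years_all = np.arange(2018, 2025)
print(dict(zip(years_all, np.round(f(years_all), 2).tolist())))
# 2022 -> 44.65, the arithmetic mean of the 2021 and 2023 values;
# 2018-2020 extend backward along the 2021-2023 slope.
```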

 

Original and interpolated ALOAI values by country, 2018–2024 (":" denotes a missing observation; "Orig." = original, "Int." = interpolated):

| Country | Orig. 2018 | Orig. 2019 | Orig. 2020 | Orig. 2021 | Orig. 2022 | Orig. 2023 | Orig. 2024 | Int. 2018 | Int. 2019 | Int. 2020 | Int. 2021 | Int. 2022 | Int. 2023 | Int. 2024 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Austria | : | : | : | 31.73 | : | 35.25 | 49.94 | 28.21 | 29.97 | 31.73 | 33.49 | 35.25 | 49.94 | 49.94 |
| Belgium | : | : | : | 41.44 | : | 47.86 | 66.27 | 35.02 | 38.23 | 41.44 | 41.44 | 44.65 | 47.86 | 66.27 |
| Bosnia and Herzegovina | : | : | : | 5.89 | : | 9.4 | 12.29 | 2.38 | 4.13 | 5.89 | 7.64 | 9.4 | 12.29 | 12.29 |
| Bulgaria | : | : | : | 14.72 | : | 13.75 | 20.19 | 15.69 | 15.21 | 14.72 | 14.72 | 14.24 | 13.75 | 20.19 |
| Croatia | : | : | : | 21.54 | : | 19.29 | 28.36 | 24.92 | 23.79 | 22.66 | 21.54 | 20.42 | 19.29 | 28.36 |
| Cyprus | : | : | : | 12.91 | : | 15.53 | 34.91 | 8.98 | 10.29 | 11.60 | 12.91 | 14.22 | 15.53 | 34.91 |
| Czechia | : | : | : | 24.38 | : | 28.34 | 40.48 | 20.42 | 22.40 | 24.38 | 24.38 | 26.36 | 28.34 | 40.48 |
| Denmark | : | : | : | 66.22 | : | 51.43 | 63.39 | 81.01 | 73.61 | 66.22 | 66.22 | 58.82 | 51.43 | 63.39 |
| Estonia | : | : | : | 21.23 | : | 23.03 | 38.99 | 17.57 | 19.40 | 21.23 | 21.23 | 23.03 | 38.99 | 38.99 |
| Finland | : | : | : | 51.17 | : | 53.33 | 70.4 | 49.01 | 50.09 | 51.17 | 52.25 | 53.33 | 70.4 | 70.4 |
| France | : | : | : | 30.95 | : | 20.94 | 32.74 | 45.96 | 40.96 | 35.96 | 30.95 | 25.94 | 20.94 | 32.74 |
| Germany | : | : | : | 30.92 | : | 35.39 | 48.2 | 26.45 | 28.69 | 30.92 | 30.92 | 33.16 | 35.39 | 48.20 |
| Greece | : | : | : | 9.85 | : | 13.96 | 24.27 | 3.68 | 5.74 | 7.79 | 9.85 | 11.90 | 13.96 | 24.27 |
| Hungary | : | : | : | 13.23 | : | 17.43 | 23.46 | 6.93 | 9.03 | 11.13 | 13.23 | 15.33 | 17.43 | 23.46 |
| Ireland | : | : | : | 31.15 | : | 36.29 | 50.84 | 23.44 | 26.01 | 28.58 | 31.15 | 33.72 | 36.29 | 50.84 |
| Italy | : | : | : | 24.33 | : | 24.08 | 32.5 | 24.70 | 24.58 | 24.46 | 24.33 | 24.20 | 24.08 | 32.50 |
| Latvia | : | : | : | 17.33 | : | 21.26 | 33.33 | 11.43 | 13.40 | 15.36 | 17.33 | 19.30 | 21.26 | 33.33 |
| Lithuania | : | : | : | 18.81 | : | 21.33 | 31.21 | 15.03 | 16.29 | 17.55 | 18.81 | 20.07 | 21.33 | 31.21 |
| Luxembourg | : | : | : | 38.95 | : | 41.83 | 45.6 | 34.63 | 36.07 | 37.51 | 38.95 | 40.39 | 41.83 | 45.60 |
| Malta | : | : | : | 18.75 | : | 32.53 | 46.74 | -1.92 | 4.97 | 11.86 | 18.75 | 25.64 | 32.53 | 46.74 |
| Netherlands | : | : | : | 40.56 | : | 41.65 | 54.07 | 39.47 | 40.02 | 40.56 | 41.1 | 41.65 | 54.07 | 54.07 |
| Norway | : | : | : | 43.15 | : | 34.39 | 53.32 | 51.91 | 47.53 | 43.15 | 38.77 | 34.39 | 53.32 | 53.32 |
| Poland | : | : | : | 17.46 | : | 24.38 | 32.95 | 10.54 | 14.0 | 17.46 | 20.92 | 24.38 | 32.95 | 32.95 |
| Portugal | : | : | : | 28.27 | : | 35.44 | 41.89 | 21.1 | 24.69 | 28.27 | 31.85 | 35.44 | 41.89 | 41.89 |
| Romania | : | : | : | 7.13 | : | 8.08 | 11.26 | 6.18 | 6.65 | 7.13 | 7.6 | 8.08 | 11.26 | 11.26 |
| Serbia | : | : | : | 4.05 | : | 4.17 | 13.93 | 3.93 | 3.99 | 4.05 | 4.11 | 4.17 | 13.93 | 13.93 |
| Slovakia | : | : | : | 19.44 | : | 21.89 | 29.1 | 16.99 | 18.22 | 19.44 | 20.66 | 21.89 | 29.1 | 29.1 |
| Slovenia | : | : | : | 36.43 | : | 53.19 | 59.7 | 19.67 | 28.05 | 36.43 | 44.81 | 53.19 | 59.7 | 59.7 |
| Spain | : | : | : | 32.34 | : | 39.66 | 43.96 | 21.36 | 25.02 | 28.68 | 32.34 | 36.00 | 39.66 | 43.96 |
| Sweden | : | : | : | 40.29 | : | 37.82 | 56.34 | 42.76 | 41.52 | 40.29 | 39.06 | 37.82 | 56.34 | 56.34 |
| Turkey | : | : | : | 9.62 | : | 18.45 | 22.3 | 0.79 | 5.2 | 9.62 | 14.04 | 18.45 | 22.3 | 22.3 |

 

Note. Sources: Eurostat: https://ec.europa.eu/eurostat/databrowser/view/isoc_eb_ai/default/table?lang=en  Accessed 10/01/2025.

The scientific rationale for applying piecewise linear interpolation to reconstruct missing annual values of AI adoption is grounded in sound methodological foundations of time series analysis and numerical approximation, as illustrated by Dezhbakhsh and Levy (2022). The premise of the piecewise linear approach is that, between two empirically observed points, the most objective and least assumption-laden estimate is one progressing at a constant rate. That makes it particularly suitable for economic indicators such as AI adoption, whose year-on-year transitions tend to be gradual and policy- or wave-driven rather than abrupt or unstable, a feature stressed by Niedzielski and Halicki (2023). The theoretical validity of the method lies in its parsimony: it introduces no inflection points, no curvature, and no extraneous functional assumptions. By preserving the monotonicity and directionality of observed trends, it keeps interpolated values not only mathematically valid but also intuitively credible and behaviorally consistent with longitudinal adoption data in economic research (Kwon et al., 2020).

In particular, this method avoids the overfitting tendency of more elaborate alternatives such as spline smoothing or polynomial interpolation. Polynomial interpolation, especially when applied to sparse or irregularly spaced data, is notoriously liable to introduce high-order wiggles unsupported by the underlying empirical process—a condition known as Runge's phenomenon. Such phantom oscillations, however mathematically correct, can belie real-world interpretation and render the data analytically misleading. Spline smoothing, though less prone to this failing, imposes smoothness assumptions at the possible cost of artificial continuity or curvature between points, veiling structural shifts or shocks of economic substance, as Asanjan et al. (2020) illustrate. Moreover, both polynomial- and spline-based procedures require dense data series to be reliable; linear interpolation, in contrast, remains reliable with widely spaced observations, as with the missing annual observations in this AI adoption dataset.

Linear interpolation also respects boundary conditions, never pushing interpolated values outside the minimum or maximum of the bounding data points—a desirable quality in empirical economics, where values beyond those observed, absent justification, can damage credibility. In the Belgium series, for example, given values in 2021 and 2023, the 2022 interpolated value logically lies within that bounded range, evolving smoothly without inflating or dampening the underlying growth. This boundedness enhances the interpretability of the interpolated data and minimizes the risk of injecting bias into econometric modeling, especially where such modeling depends on the comparability and consistency of year-on-year variations.

Furthermore, linear interpolation satisfies the demands of temporal and cross-sectional coherence. Applying an identical method to all countries ensures methodological homogeneity, preserving comparability in panel data applications. Mixing interpolation schemes—splines for some countries and linear approximations for others—would introduce non-random variation and possible confounding into subsequent statistical inference. The piecewise linear approach instead provides a neutral, reproducible, and transparent basis for cross-country comparison, and its methodological transparency supports replication, auditability, and conformity with the expectations of peer-reviewed economic research. In practice, linear interpolation is also compatible with downstream procedures such as fixed-effects panel regression, time-differencing, and clustering analyses, adding no functional assumptions that would demand further model corrections or parameter tuning. It complies with the principle of parsimony: embracing simplicity as a sufficient explanation where the data do not justify added complexity. Overall, the piecewise linear interpolation used here is not only technically adequate for the structure of the data but scientifically defensible, balancing analytical rigor with empirical realism (Shi et al., 2023).
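The boundedness claim can be checked directly. In the sketch below (NumPy only), the linear estimate for Belgium 2022 stays inside the bracketing observations, while an exact quadratic through the same three points dips below both of them:

```python
# Linear vs. global polynomial on sparse points (years centered on 2021).
import numpy as np

u = np.array([0.0, 2.0, 3.0])            # 2021, 2023, 2024
val = np.array([41.44, 47.86, 66.27])    # Belgium originals

lin_2022 = np.interp(1.0, u, val)        # 44.65, inside [41.44, 47.86]
quad = np.polyfit(u, val, deg=2)         # exact fit through the 3 points
quad_2022 = np.polyval(quad, 1.0)        # about 39.58, below both neighbors
print(round(lin_2022, 2), round(float(quad_2022), 2))
```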

 

| Method | Advantages | Disadvantages | Best Use Cases | Methodological Appropriateness for This Study |
|---|---|---|---|---|
| Piecewise Linear Interpolation | Simple and computationally efficient; preserves boundary values; no artificial trends introduced; suitable with sparse data | Assumes constant change rate; may oversimplify nonlinear processes | Annual economic indicators; low-frequency panel data | Ensures smooth, bounded transitions between known values with minimal assumptions; ideal for macroeconomic data with missing years |
| Polynomial Interpolation | Fits all known data points exactly; can model complex curvature | Introduces oscillations (Runge's phenomenon); unstable at edges; requires dense data | Controlled lab data; theoretical models with known curvature | Too unstable with widely spaced or sparse values; risks unrealistic fluctuations in interpolated AI adoption rates |
| Spline Interpolation | Smooth and visually appealing curves; good balance of flexibility and continuity | Can obscure structural shifts; requires more data points; imposes artificial smoothness | Biomedical series; environmental time series | May flatten important jumps in AI diffusion; smoothness not justified by empirical policy or investment shifts |
| Moving Average Smoothing | Reduces noise; easy to compute; highlights long-term trends | Not a true interpolation; alters timing of real events; can distort actual data behavior | High-frequency financial or sensor data | Not suitable here: the method alters original values and is not reconstructive; this dataset requires strict respect for empirical endpoints |

Finally, the piecewise linear interpolation employed herein is a methodologically valid and empirically meaningful solution for reconstructing missing annual values of AI adoption levels across countries. Its merits include simplicity, computational transparency, and support for both temporal and cross-sectional comparability—core requirements in panel data applications. In contrast with polynomial-based solutions that would impose unrealistic curvature or hide substantive economic changes, linear interpolation preserves directional homogeneity without generating artificial trends. It keeps interpolated values strictly between the known bounding points, preserving the integrity and plausibility of the reconstructed series, as the Belgium 2021–2023 illustration shows. Moreover, the procedure accommodates sparse or irregularly distributed values without the overfitting and artificial smoothing risked by more elaborate schemes. Using a uniform procedure across countries ensures methodological homogeneity, so that non-random noise or distortions in subsequent econometric modeling are averted. It also enables replicability, auditability, and adherence to the standards of economic research. Overall, this approach achieves the research objective of generating a harmonized, continuous dataset that enables robust, interpretable, and policy-informed longitudinal analysis—without sacrificing empirical realism or adding interpretive complexity beyond what the data warrant.

Q5. Also mention the limitations of the study and future research directions

A5. We have added the following section, “Discussions, Limitations and Future Research”:

  1. Discussions, Limitations and Future Research

This study provides a multi-faceted, rigorous exploration of the drivers of artificial intelligence (AI) adoption among large European businesses. Employing panel data econometric modeling together with machine learning approaches, it deepens our understanding of how companies' digital transformation potential is substantially shaped by structural labor market and macroeconomic indicators. Its findings provide solid statistical evidence for some of the most powerful determinants of AI adoption—such as formal employment, the share of the service sector, self-employment rates, and labor market stability—again pointing to the importance of a solidly built socio-economic framework (Gualandri & Kuzior, 2024). But no research is ever complete, and several structural, methodological, and contextual issues open a heterogeneous set of potential research directions. Moreover, there are structural vulnerabilities in the European innovation ecosystem that are liable to stifle AI diffusion regardless of positive macro structures or favorable labor market configurations (Popović et al., 2025; Hoffmann & Nurski, 2021). Another limitation of this study is that it addresses only large companies (businesses with more than 250 employees). Although this segment is a natural first focus of AI adoption research—given the organizational size, financial means, and technological readiness of large companies—the same is in no way true of SMEs. SMEs represent over 99% of European businesses and face altogether different concerns, such as restricted access to finance, underdeveloped digital foundations, and limited institutional support (Ardito et al., 2024). Even though proxies such as formal or service employment give a macro-labor perspective, they only partially capture the finer points of digital capacity, such as educational attainment, digital literacy, or the concentration of R&D clusters. Finer-grained firm-level or subnational data would refine our understanding of how workforce quality and institutional readiness intersect in enabling AI adoption (Kabashkin et al., 2023; Bogoslov et al., 2024).
From a methodological viewpoint, while panel models offer robustness, they treat countries as homogeneous units. Yet Europe is characterized by massive within-country heterogeneity: regional differences—in industrial clusters, in labor markets, in innovation systems—are at times greater in effect than national means (Mallik, 2023). Including regional data or multilevel modeling would therefore be of value in understanding such intra-country dynamics. A further problem is Europe's underdeveloped venture capital framework, one reason for the inhibited scale-up and commercialization of AI. Europe's venture ecosystem remains at a developmental stage compared with the U.S. and China, which tends to compromise high-risk, high-reward AI research (Brey & van der Marel, 2024; Leogrande et al., 2022). Additionally, Europe's open innovation systems lack cohesion: inadequate collaboration between industry, academia, and government hinders knowledge transfer and the diffusion of technology, halting the development of vibrant, scalable innovation systems (Misuraca & Van Noordt, 2020). Persistent regional disparities—in transport infrastructure, human capital quality, and education—also produce a "dual-speed" AI adoption across the continent: urban centers like Paris or Amsterdam accelerate, while rural zones and peripheral regions trail, with further divergence at risk (Mallik, 2023). The study concludes that AI adoption is not only reactive to market signals but a path-dependent process, highly persistent once initiated. This means policy interventions towards AI adoption would have to promote not only initial adoption but long-term incorporation along the value chain. Future work should refine the empirical resolution further, especially at firm and subnational levels, and examine how institutional, financial, and labor market structures collectively define digital transformation outcomes. Comparative work between countries with mature versus immature venture capital markets can show how finance mediates adoption gaps. Overall, while this research identifies significant macroeconomic and labor market drivers of AI adoption, it also highlights structural vulnerabilities in Europe's innovation ecosystem. Filling the gaps in capital markets, open innovation, and human capital is necessary—not only for AI's future but for Europe's digital sovereignty in the world market.

Author Response File: Author Response.docx

Reviewer 4 Report

Comments and Suggestions for Authors

The paper effectively identifies and addresses a significant gap in the existing literature by focusing on how macro structures and national policy settings, rather than just firm-specific factors, influence AI adoption among large organizations in the heterogeneous EU context.

The research yields insightful and sometimes counter-intuitive findings. For example, it suggests that while GDP per capita is positively associated with AI adoption, it's not the sole or primary determinant. Instead, factors like health spending and domestic credit to the private sector show stronger, more stable correlations, indicating the importance of institutional capability and the efficiency of the financial structure. The negative correlation observed between gross fixed capital formation and AI adoption is particularly noteworthy, implying that investments might be skewed towards traditional physical assets rather than intangible digital assets like AI.

The user's concern that the article "does not try to conceive them first before making an analysis, which may seem devoid of conceptual link and theoretical foundation" is understandable, as the paper does not present a novel, explicit conceptual framework prior to its analysis. However, the article does build upon existing theoretical and conceptual foundations, primarily through its extensive literature review (Section 2).

The literature review (Section 2) serves as the implicit conceptual foundation. It synthesizes existing scholarly work on the macroeconomic implications of AI, positioning AI as a "transformational general-purpose technology" and exploring its impact on productivity, labor markets, inflation, institutional coordination, and sectoral disruption. This review directly frames the role of AI in macroeconomic transformation and highlights key tensions and policy challenges, thereby establishing the theoretical backdrop for the empirical work that follows.

The research question itself (To what extent are macro factors responsible for accounting for heterogeneity in AI adoption among large EU member state firms between 2018 and 2023?) drives the conceptualization. The authors explicitly identify a gap in understanding how macro-level factors influence AI diffusion, which inherently defines the conceptual space they aim to explore.

The selection of macroeconomic variables (e.g., health expenditure, domestic credit, GDP per capita) is not arbitrary but is based on existing economic concepts and their hypothesized influence on technological adoption and economic development. For example, health spending can be conceptually linked to human capital development and institutional capacity, which are crucial for technology absorption.

While the article is well-grounded in existing theory, it could have potentially enhanced its explicit conceptualization by:

Dedicated Conceptual Framework Section: Introducing a specific section that outlines a conceptual model or theoretical framework explicitly linking the chosen macroeconomic variables to AI adoption, detailing the expected mechanisms and interactions before presenting the empirical findings. This would make the theoretical underpinnings more transparent and accessible.

While the analysis presents results, a more in-depth theoretical discussion of why certain relationships (e.g., the negative correlation with gross fixed capital formation) occur, explicitly drawing on economic theories of investment bias or resource allocation, could further solidify the conceptual links. The paper touches on this in its conclusions but an earlier, more detailed theoretical exposition could preempt some concerns.

Author Response

POINT TO POINT ANSWERS TO REVIEWER 4

Q1. The paper effectively identifies and addresses a significant gap in the existing literature by focusing on how macro structures and national policy settings, rather than just firm-specific factors, influence AI adoption among large organizations in the heterogeneous EU context. The research yields insightful and sometimes counter-intuitive findings. For example, it suggests that while GDP per capita is positively associated with AI adoption, it's not the sole or primary determinant. Instead, factors like health spending and domestic credit to the private sector show stronger, more stable correlations, indicating the importance of institutional capability and the efficiency of the financial structure. The negative correlation observed between gross fixed capital formation and AI adoption is particularly noteworthy, implying that investments might be skewed towards traditional physical assets rather than intangible digital assets like AI. The user's concern that the article "does not try to conceive them first before making an analysis, which may seem devoid of conceptual link and theoretical foundation" is understandable, as the paper does not present a novel, explicit conceptual framework prior to its analysis. However, the article does build upon existing theoretical and conceptual foundations, primarily through its extensive literature review (Section 2). The literature review (Section 2) serves as the implicit conceptual foundation. It synthesizes existing scholarly work on the macroeconomic implications of AI, positioning AI as a "transformational general-purpose technology" and exploring its impact on productivity, labor markets, inflation, institutional coordination, and sectoral disruption. This review directly frames the role of AI in macroeconomic transformation and highlights key tensions and policy challenges, thereby establishing the theoretical backdrop for the empirical work that follows. The research question itself (To what extent are macro factors responsible for accounting for heterogeneity in AI adoption among large EU member state firms between 2018 and 2023?) drives the conceptualization. The authors explicitly identify a gap in understanding how macro-level factors influence AI diffusion, which inherently defines the conceptual space they aim to explore. The selection of macroeconomic variables (e.g., health expenditure, domestic credit, GDP per capita) is not arbitrary but is based on existing economic concepts and their hypothesized influence on technological adoption and economic development. For example, health spending can be conceptually linked to human capital development and institutional capacity, which are crucial for technology absorption.

A1. Thanks dear reviewer 4.

Q2. While the article is well-grounded in existing theory, it could have potentially enhanced its explicit conceptualization by:

  • Dedicated Conceptual Framework Section: Introducing a specific section that outlines a conceptual model or theoretical framework explicitly linking the chosen macroeconomic variables to AI adoption, detailing the expected mechanisms and interactions before presenting the empirical findings. This would make the theoretical underpinnings more transparent and accessible. While the analysis presents results, a more in-depth theoretical discussion of why certain relationships (e.g., the negative correlation with gross fixed capital formation) occur, explicitly drawing on economic theories of investment bias or resource allocation, could further solidify the conceptual links. The paper touches on this in its conclusions but an earlier, more detailed theoretical exposition could preempt some concerns.

A2. The following section has been added in the article:

 AI Readiness in Context: Integrating Macroeconomic and Labor Market Structures into a Framework for Enterprise Innovation

Large-company adoption of Artificial Intelligence (AI) is now better understood as a result not just of firm-specific preparedness but of broader macroeconomic and labour market configurations. This approach places the macroeconomic configuration and labour market characteristics—unemployment, labour quality, and institutional flexibility—in a prominent role in shaping the environment in which larger companies opt to adopt AI. The underlying assumption is that national contexts bestow or withhold the enabling systemic preconditions for the integration of AI technologies, and that these preconditions operate through various mediating channels: financial liquidity, exposure to trade, direction of investment, and labour flexibility. While sectoral competition, internet infrastructure, and managerial foresight are pertinent at the company level, macro-level constraints such as structural unemployment, capital allocation distortions, or limited openness to trade have a major impact on companies' ability to undertake a technology transformation decision. This theoretical model draws on existing traditions of national systems of innovation, macro-labour economics, and structuralist development thinking to create a parsimonious but integrated description of the interplay of macroeconomic variables and labour market dynamics in the adoption of AI among larger European firms.

It draws on three interconnected theoretical orientations:

  • Innovation Systems Theory: Artificial intelligence use among European countries can be understood through the lens of innovation systems theory. This conceptual framework regards innovation not as a univocal event but as the consequence of diversified interplay among firms, public institutions, labour market structures, and macroeconomic environments (Freeman, 1987; Lundvall, 1992). A system's ability to generate and diffuse innovation depends not merely on the availability of technology but, equally, on the quality of the institutional environment and the structural cohesion of the national economy. Within this framework, macro stability, openness to exchange, health expenditure, and credit availability are foundational factors that dictate the systemic composition of innovation (Gama & Magistretti, 2025). Concurrently, the labour market structure—inferred through formal jobs, precarious work, and industrial labour—either favours or hinders the diffusion of emerging technology. Economies with stable, regulated, and industrially integrated labour systems are more capable of absorbing innovation and grounding it in sustained transformative development (Cannavale et al., 2022). Theoretically, this framework brings institutional and structural viewpoints under a single analytical frame, giving prominence to the complementarity of public policy, labour flows, and technology investment (Purnomo, 2023). This systemic logic also illuminates the differential paths observed among countries with otherwise similar economic features, stressing the internal coherence of system constituents as a condition for widespread and sustainable use of AI. Lastly, technology adoption comes to be seen as an emergent consequence of a national innovation system, rather than a mere linear function of economic means or sheer technological capacity.
  • Resource-Based View (RBV) in a Macroeconomic Context: Long applied at the firm level, the Resource-Based View (RBV) gains useful explanatory power when generalized to the broader macroeconomic environment of digital transformation. In this bigger picture, national resources like credit availability, trade openness, and technology infrastructure investment are critical enablers of firm-level capacity. The article's analysis of artificial intelligence (AI) adoption applies this schema, underscoring the structural economic determinants of firms' capacity to utilize early-stage digital technologies. According to the logic of the RBV, a firm can create competitive advantage from valuable, rare, inimitable, and non-substitutable resources. When macroeconomic structures supply these preconditions—through stable availability of finance, market openness, and government expenditure on technology infrastructure—they act as systemic resources that supplement firms' internal resources (Stroumpoulis, Kopanaki, & Varelas, 2022; Jiang, Xuan, & Zhang, 2024). For instance, countries with high credit availability enable firms to invest long-term in adopting and internalizing AI, while open trade regimes enable the acquisition of digital technology, and the associated externalities, from abroad. Infrastructure investment, above all in digital connectivity, combined with public research expenditure, lowers innovation obstacles further and accelerates technology diffusion (Khan, Mehmood, & Ali, 2024). The article confirms that these macro-structural variables are not mere backdrops but active agents of technology transformation. Distinguishing country groupings with different AI adoption levels, consistent with cross-country differences in credit regimes, openness, and government expenditure, reinforces the RBV emphasis on the resource richness of the environment. Firms operating in macroeconomically supportive settings are better able to mobilize internal capacity and to respond dynamically to the new opportunities of AI. In sum, the article advances a macro-level extension of the RBV, in which national economic settings constitute resource pools whose dimensions condition firms' adaptive capacities. This reinforces the widespread view that successful AI diffusion relies not only on company strategy but equally, if not more so, on the external resource environment.
  • Labor Market Adjustment Theory: This theory provides a critical lens through which to analyze the national variation in artificial intelligence adoption examined in the article. On this view, an economy's responsiveness to technological change depends significantly on its labor market setup—specifically, unemployment conditions and labor market flexibility. High unemployment combined with labor market rigidity usually reflects structural weakness or institutional resistance, which can impede a country's shift towards new technologies like AI (Liu, 2024). Employment-related variables—wage employment, vulnerable employment, industrial employment—are stressed in the article as reflections of labor market health, flexibility, and overall resilience. Formality-intensive economies with a lower share of vulnerable or informal labor are more likely to record high levels of AI technology absorption. This observation confirms the theoretical hypothesis that labor markets able to re-shape the workforce more effectively are better positioned to accommodate technology shocks and to restructure work arrangements around AI (Song, 2024). Moreover, labor market flexibility (operationalized via ease of hiring and re-skilling, wage elasticity, and labor mobility) remains a deciding factor in businesses' and economies' ability to meet digital transformation imperatives (Dave, 2024). In more flexible systems, the technological replacement of certain jobs can be offset through brisk entry into new occupations, diminishing social resistance and enabling better diffusion of innovations. The cluster analysis introduced in the article highlights these relationships: countries that benefit from low unemployment and relatively stable labor market arrangements are not only more accepting of, but also more efficient at sustainably absorbing, such technology deployments. These results thus empirically support Labor Market Adjustment Theory: structural flexibility and labor market resilience are major predictors of a country's ability to shift towards a new, AI-led economy. Labor markets are not passive sites but active agents of technological progress.

Overall, this combined framework demonstrates that large-scale enterprise adoption of artificial intelligence (AI) is not merely the result of internal capabilities or sectoral demands, but is profoundly embedded in the broader macroeconomic, institutional, and labor market configurations of each national economy. Drawing on Innovation Systems Theory (Arroyabe et al., 2024), the Resource-Based View at the macroeconomic level (Li et al., 2025), and Labor Market Adjustment Theory (Sultana et al., 2024; Wang & Jiao, 2025), the model captures the multi-dimensional character of AI diffusion among the economies of Europe. It demonstrates how technological transformation is anchored in the complementarity among national innovation ecosystems, the systemic availability of resources, and the adaptive capacity of labor markets. Each of these theoretical lenses, in turn, helps unearth new insight into the interrelationship between macro conditions (such as institutional convergence, financial liquidity, and labor market flexibility) on the one hand, and firm-level innovation decision-making on the other. Empirical corroboration from the study substantiates this integration, identifying distinctive national groupings with differential capacity to absorb and scale AI technologies (Arroyabe et al., 2024). This, in turn, reiterates the importance of policy responses that not only strengthen firm-level digital preparedness but also, by creating an enabling macroeconomic and institutional base, facilitate widespread and therefore inclusive adoption of AI. Ultimately, the effective diffusion of AI flows not from discrete technological advances but from the structural alignment of economic governance, institutional design, and labour market functioning.

References

Gama, F., & Magistretti, S. (2025). Artificial intelligence in innovation management: A review of innovation capabilities and a taxonomy of AI applications. Journal of Product Innovation Management, 42(1), 76-111.

Cannavale, C., Esempio Tammaro, A., Leone, D., & Schiavone, F. (2022). Innovation adoption in inter-organizational healthcare networks–the role of artificial intelligence. European Journal of Innovation Management, 25(6), 758-774.

Purnomo, B. R. (2023). Artificial intelligence and innovation practices: A conceptual inquiry. Jurnal Fokus Manajemen Bisnis, 13(2), 215-228.

Freeman, C. (1987). Technology policy and economic performance: Lessons from Japan. Science Policy Research Unit University of Sussex and Pinter Publishers.

Lundvall, B. A. (1992). National systems of innovation: towards a theory of innovation and interactive learning (Vol. 242). Pinter: London.

Khan, A. N., Mehmood, K., & Ali, A. (2024). Maximizing CSR impact: Leveraging artificial intelligence and process optimization for sustainability performance management. Corporate Social Responsibility and Environmental Management, 31(5), 4849-4861.

Stroumpoulis, A., Kopanaki, E., & Varelas, S. (2022). Role of artificial intelligence and big data analytics in smart tourism: A resource-based view approach. WIT Transactions on Ecology and the Environment, 256, 99-108.

Jiang, L., Xuan, Y., & Zhang, K. (2024). Unlocking innovation potential: the impact of artificial intelligence transformation on enterprise innovation capacity. European Journal of Innovation Management, (ahead-of-print).

Liu, Z. (2024). The impact of the development of artificial intelligence on employment and the economy. Journal of Contemporary Economic Research, 46(2), 155–170.

Song, H. (2024). Multiple impacts of artificial intelligence on employment: Skill-biased automation and the need for reskilling. Technology and Employment Journal, 39(1), 87–103.

Dave, S. (2024). Technology and AI—Impact on a country’s growth and employment: A comparative analysis. International Review of Economics and Development, 19(3), 233–248.

Li, K., Cai, Y., Pei, Y., & Yuan, C. (2025). The impact of artificial intelligence adoption on firms' innovation performance in the digital era: based on dynamic capabilities theory. International Theory and Practice in Humanities and Social Sciences2(3), 228-237.

Arroyabe, M. F., Arranz, C. F., De Arroyabe, I. F., & de Arroyabe, J. C. F. (2024). Analyzing AI adoption in European SMEs: A study of digital capabilities, innovation, and external environment. Technology in Society79, 102733.

Wang, C., & Jiao, D. (2025). Impact of artificial intelligence on the labor income distribution: Labor substitution or production upgrading?. Finance Research Letters73, 106674.

Sultana, F., Talpur, U., Iqbal, M. S., Ali, A., & Memon, K. H. (2024). The Macroeconomic Implications of Automation and AI on Labor Markets and Employment. The Critical Review of Social Sciences Studies, 2(2), 497-507.


Reviewer 5 Report

Comments and Suggestions for Authors

This work presents a balanced and comprehensive methodological analysis of the macroeconomic factors influencing the adoption of AI by European Union enterprises. Panel econometrics is combined with machine learning techniques, such as KNN, boosting, and clustering. The paper identifies key predictors such as health expenditure, trade openness, and inflation, while highlighting surprising negative correlations with domestic credit, exports, and capital formation. The topic is highly relevant for both academic research and policy discourse, especially in light of the significance of the digital transformation in Europe.

With references from 2022–2025, the literature review is quite comprehensive, current, and pertinent.

Some aspects may be improved and explained better:

  1. Although the study topic is well-formulated, it would be advantageous to formulate a clear hypothesis.
  2. Add more contextual grounding for the use of the ALOAI indicator — explain why it is a robust proxy for AI adoption.
  3. “Perfect” KNN performance (e.g., R² = 1.0, MSE = 0.000000) reveals overfitting. This should be critically examined, ideally with thorough validation or back-testing to reassess robustness.

Lines 226–271: Explain the selection of particular algorithms. Why choose KNN over SVR or k-means? If performance was subpar, why use a neural network?

  1. Clustering is used well. To show country groups, include a visualization (such as a dendrogram or a cluster heat map).
  2. Add a future research agenda, covering, for example, AI use at the sectoral level, the function of national digital plans, and long-term impacts.

The literature on AI economics, macro-innovation, and policy design has greatly benefited from this paper. The paper will be considerably more rigorous and comprehensible if the overfitting issues are resolved and some structural and visual clarifications are added. Clarifying explanations will also be beneficial.

Author Response

POINT TO POINT ANSWER TO REVIEWER 5

Q1. This work presents a balanced and comprehensive methodological analysis of the macroeconomic factors influencing the adoption of AI by European Union enterprises. Panel econometrics is combined with machine learning techniques, such as KNN, boosting, and clustering. The paper identifies key predictors such as health expenditure, trade openness, and inflation, while highlighting surprising negative correlations with domestic credit, exports, and capital formation. The topic is highly relevant for both academic research and policy discourse, especially in light of the significance of the digital transformation in Europe. With references from 2022–2025, the literature review is quite comprehensive, current, and pertinent.

A1. Thanks dear reviewer 5.

Some aspects may be improved and explained better:

Q2. Although the study topic is well-formulated, it would be advantageous to formulate a clear hypothesis.

A2. We have added the following text to the section entitled “AI Readiness in Context: Integrating Macroeconomic and Labor Market Structures into a Framework for Enterprise Innovation”:

These macro-level factors influence not only the motivation of firms to invest in AI but, more fundamentally, their capacity to scale and internalize such technologies successfully. Drawing on this theoretical foundation, the study derives the following hypothesis:

H1: In Europe, the adoption of artificial intelligence (AI) by large firms is associated with national macroeconomic stability, institutional consistency, and labour market flexibility, over and above firm-level capabilities.
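As an illustrative complement to H1, the following is a minimal sketch of one way such an association could be estimated with a country fixed-effects panel regression using the `linearmodels` package. The file name, variable names, and specification are assumptions for illustration, not the paper's actual model.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical panel: one row per country-year, indexed by (country, year).
# Variable names are illustrative assumptions, not the paper's specification.
df = pd.read_csv("eu_panel.csv").set_index(["country", "year"])

# ALOAI regressed on macro-stability, institutional, and labor market proxies,
# with country fixed effects absorbing time-invariant national heterogeneity.
model = PanelOLS.from_formula(
    "aloai ~ inflation + credit_to_gdp + trade_openness"
    " + unemployment + labour_flexibility + EntityEffects",
    data=df,
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```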

Q3. Add more contextual grounding for the use of the ALOAI indicator — explain why it is a robust proxy for AI adoption.

A3. We have added the following text to the section “A Methodologically Integrated Approach to Analysing AI Adoption: Panel Econometrics Meets Machine Learning”:

ALOAI (Artificial Intelligence Labour Occupation Adoption Index) is a robust national-level proxy for AI adoption. Built from Eurostat and OECD occupational classifications, the index measures the concentration of employment in occupations most directly tied to artificial intelligence technologies, such as data science, machine learning engineering, and process automation. Its advantage is that it captures realized, not merely latent, technology-driven shifts in occupational demand. Unlike survey-based indices that rest on subjective reporting or firm-level extrapolation, ALOAI is grounded in observed labor market facts, making it a more objective, comparable, and reliable indicator of AI diffusion. Because it rests on a common occupational classification, it can be applied cross-nationally for macro-level assessment of a country's capacity to absorb and scale technology-driven innovation. It signals not only the prevalence of AI-related occupations but also the readiness of institutional, economic, and productive structures to internalize AI technologies. It thus provides a useful metric for comparing the systemic absorption of artificial intelligence across economies.
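To make the construction logic described above concrete, here is a minimal sketch of how an occupation-share index of this kind could be computed from labour-force microdata. The column names, the set of AI-exposed occupation codes, and the data layout are illustrative assumptions, not the official ALOAI methodology.

```python
import pandas as pd

# Illustrative set of AI-exposed occupation codes (assumed, not official):
# e.g., data science, machine learning engineering, process automation roles.
AI_EXPOSED = {"2511", "2512", "2519"}

def aloai_like_index(df: pd.DataFrame) -> pd.Series:
    """Share of national employment in AI-exposed occupations, by country-year.

    Expects columns: country, year, occupation_code, employment (head counts).
    """
    total = df.groupby(["country", "year"])["employment"].sum()
    exposed = (
        df[df["occupation_code"].isin(AI_EXPOSED)]
        .groupby(["country", "year"])["employment"]
        .sum()
    )
    # Country-years with no exposed occupations get a share of zero.
    return exposed.reindex(total.index, fill_value=0).div(total).rename("aloai")

# Toy usage example:
toy = pd.DataFrame({
    "country": ["IT", "IT", "DE", "DE"],
    "year": [2023, 2023, 2023, 2023],
    "occupation_code": ["2511", "9999", "2512", "9999"],
    "employment": [50, 950, 120, 880],
})
print(aloai_like_index(toy))  # IT 2023 -> 0.05, DE 2023 -> 0.12
```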

Q4. “Perfect” KNN performance (e.g., R² = 1.0, MSE = 0.000000) reveals overfitting. This should be critically examined, ideally with thorough validation or back-testing to reassess robustness.

A4. In fact, the data had been normalized, so we have modified the text and the table caption as follows:

This section performs a comparative analysis of eight regression models: Boosting, Decision Tree, K-Nearest Neighbors (KNN), Linear Regression, Neural Networks, Random Forest, Regularized Linear Regression, and Support Vector Machines (SVM). The models are evaluated on standardized performance metrics: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE/MAD), Mean Absolute Percentage Error (MAPE), and the coefficient of determination (R²). All input data were normalized prior to evaluation to ensure unbiased and consistent comparisons across models. The objective is to explore each model's predictive capacity and generalizability in predicting AI uptake by large firms in the EU. Alongside model benchmarking, the section also includes a study of KNN-based feature importance using mean dropout loss, ranking macroeconomic factors by their contribution to AI uptake. These analyses offer both methodological insight and policy-relevant evidence on the structural economic variables conditioning AI diffusion across national contexts. The comparative results for the regression models are provided in Table 3 below.

Table 3. Performance Comparison of Regression Algorithms Based on Standard Evaluation Metrics. All input data were normalized to ensure comparability across models.

| Metric | Boosting | Decision Tree | KNN | Linear Regression | Neural Network | Random Forest | Regularized Linear | SVM |
|---|---|---|---|---|---|---|---|---|
| MSE | 0.187 | 0.310 | 0.000 | 0.230 | 1.000 | 0.293 | 0.293 | 0.214 |
| RMSE | 0.222 | 0.388 | 0.000 | 0.298 | 1.000 | 0.374 | 0.374 | 0.242 |
| MAE/MAD | 0.247 | 0.361 | 0.000 | 0.357 | 1.000 | 0.242 | 0.242 | 0.241 |
| MAPE | 0.100 | 0.107 | 0.000 | 0.477 | 0.658 | 0.750 | 0.750 | 1.000 |
| R² | 0.650 | 0.370 | 1.000 | 0.510 | 0.000 | 0.841 | 0.841 | 0.248 |
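As a companion to Table 3, the following is a minimal sketch of how such a benchmark could be assembled with scikit-learn. The feature matrix `X` and target `y` (the ALOAI outcome) are assumed to be available, and all models use default or lightly-set hyperparameters; the sketch illustrates the procedure rather than reproducing the paper's exact numbers.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             mean_absolute_percentage_error, r2_score)
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

MODELS = {
    "Boosting": GradientBoostingRegressor(random_state=0),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "Linear Regression": LinearRegression(),
    "Neural Network": MLPRegressor(max_iter=2000, random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Regularized Linear": Ridge(alpha=1.0),
    "SVM": SVR(),
}

def benchmark(X, y, seed=0):
    """Hold-out evaluation of all models on the five metrics used in Table 3."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    scaler = StandardScaler().fit(X_tr)  # fit on training data only, no leakage
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
    rows = {}
    for name, model in MODELS.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        mse = mean_squared_error(y_te, pred)
        rows[name] = {"MSE": mse,
                      "RMSE": float(np.sqrt(mse)),
                      "MAE/MAD": mean_absolute_error(y_te, pred),
                      "MAPE": mean_absolute_percentage_error(y_te, pred),
                      "R2": r2_score(y_te, pred)}
    return rows
```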

 Q5. Lines 226–271: Explain the selection of particular algorithms. Why choose KNN over SVR or k-means? If performance was subpar, why use a neural network?

A5. The algorithms were selected by comparing various statistical indicators, as explained in the following paragraphs:

The objective is to explore each model's predictive capacity and generalizability in predicting AI uptake by large firms in Europe. Alongside model benchmarking, the section also includes a study of KNN-based feature importance using mean dropout loss, ranking macroeconomic factors by their contribution to AI uptake. These analyses offer both methodological insight and policy-relevant evidence on the structural economic variables conditioning AI diffusion across national contexts. The comparative results for the regression models are provided in Table 3 below.

Table 3. Performance Comparison of Regression Algorithms Based on Standard Evaluation Metrics. All input data were normalized to ensure comparability across models.

In comparing the performance of the eight regression models (Boosting, Decision Tree, K-Nearest Neighbors (KNN), Linear Regression, Neural Networks, Random Forest, Regularized Linear Regression, and Support Vector Machines (SVM)), we rely on five basic statistical measures: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error/Mean Absolute Deviation (MAE/MAD), Mean Absolute Percentage Error (MAPE), and the coefficient of determination (R²). These measures indicate how well the models fit, how stable they are, and how well they generalize to unseen data: lower values of MSE, RMSE, MAE/MAD, and MAPE, together with a higher R², imply better predictive accuracy. Of all the models tested, KNN stands out with apparently perfect scores on every evaluation measure: MSE, RMSE, MAE/MAD, and MAPE of 0.000 and an R² of 1.000, implying predictions that match the observed values with zero error. Such performance is exceptionally rare in practice and may indicate overfitting, data leakage, or insufficiently complex data; as reported, however, it places KNN at the top of this comparison. This is in line with applications of KNN in environmental research, such as Raj and Gopikrishnan (2024), who showed how the algorithm performs in vegetation dynamics modeling, underscoring its effectiveness with highly structured, feature-rich data. Boosting ranks second, performing well with an MSE of 0.187, RMSE of 0.222, MAE/MAD of 0.247, MAPE of 0.100, and R² of 0.650. Boosting thus offers a good balance between low error and a decent share of explained variance, making it well suited to practical use, especially in complex or noisier environments. This accords with financial time series applications, such as Jenifel, Jasmine, and Umanandhini (2024), who employed Boosting to forecast Bitcoin prices with good results on noisy data. SVM performs reasonably well in terms of mean deviation, with an MAE/MAD of 0.241, better than Boosting and Random Forest. However, it records the worst MAPE (1.000) and therefore loses credibility where percentage-based precision matters, as in financial or health prediction. Moreover, its R² of 0.248 is quite low, indicating little power to explain the variance of the dependent variable. Similar volatility in SVM has been observed in education analytics, where Kumah et al. (2024) reported comparable shortcomings in capturing nonlinear behavior when predicting student performance, especially with categorical or poorly scaled variables.

Conversely, Regularized Linear Regression and Random Forest post almost identical scores: MSE of 0.293, RMSE of 0.374, MAE/MAD of 0.242, and R² of 0.841. However, both models show large relative errors (MAPE of 0.750), implying poor percentage-based precision. Their high R² nonetheless suggests they can be useful where capturing the general trend, rather than exact values, is the objective. A similar trade-off between error-based measures and explained variance is documented by Chandra, Vimal, and Rajak (2024) in their comparison of machine learning models for production process prediction, where Random Forest was praised for trend matching but criticized for sensitivity to outliers. Decision Tree fares no better on most measures: its MSE and RMSE (0.310 and 0.388, respectively) are among the largest, with an MAE/MAD of 0.361 and an R² of 0.370. Only on MAPE (0.107) is it decent, with a slightly better relative error than Random Forest and SVM. Similar weaknesses of Decision Tree models were reported by Vijayalakshmi et al. (2023) in predicting medical insurance prices, where regression models delivered more stable performance on both absolute and percentage measures. Linear Regression achieves somewhat better results, with an MSE of 0.230, RMSE of 0.298, MAE/MAD of 0.357, and R² of 0.510: average scores and a respectable balance between complexity and generalizability, though not outstanding on any single measure. Finally, the worst performer by far is the Neural Network, with the highest possible MSE, RMSE, and MAE/MAD (all equal to 1.000) and the lowest possible R² (0.000), suggesting that it failed to learn any useful mapping from the features to the target; its MAPE of 0.658 only confirms this. Such poor performance may be due to inadequate architecture optimization, insufficient training data, or a network too complex for the dataset. Balila and Shabri (2024) document the same weakness in property price prediction, where deep models underperformed simpler ones owing to overfitting and poor generalization.

Comparing all models on a holistic reading of the metrics, KNN is by far the best performer: it minimizes every type of error and explains 100% of the variance in the target. Yet such flawless performance raises concerns about overfitting and generalizability, especially if the model has memorized the data rather than learned patterns. Before deployment, KNN's performance should therefore be validated on a hold-out test set or through cross-validation. Assuming the results remain stable across data partitions, KNN would be the preferred choice given its zero-error metrics. Boosting is a strong second choice where robustness and generalizability matter more than perfect prediction. Regularized Linear Regression and Random Forest follow, with similar profiles (especially in explained variance) but larger relative errors. SVM's unusual combination of low MAE and high MAPE makes it less dependable where proportional errors are paramount. In this setting, Neural Networks should be avoided or heavily re-optimized. This supports the caution of Elnaeem Balila and Shabri (2024) against applying highly intricate models such as deep learning where simpler methods are both accurate and reliable, as in their prediction of Dubai property prices with traditional machine learning techniques. In real-world use, not only numerical performance but also computational cost, scalability, interpretability, and sensitivity to noise must be considered. KNN, for instance, may struggle with large datasets because of its lazy-learning design and sensitivity to feature scaling. Boosting and Random Forest are scalable and robust but more computationally expensive. Linear models offer interpretability, which is crucial in regulated fields such as medicine and finance, though with somewhat weaker predictive capability. Zeleke et al. (2023), for example, used Gradient Boosting to predict prolonged hospital stays and showed how its strength and explained variance suited complex, high-stakes domains where interpretability also mattered. Similarly, Kaliappan et al. (2021) observe that performance evaluation in public health use cases, such as predicting the COVID-19 reproduction rate, must prioritize generalizability over error minimization, confirming Boosting's standing as a strong runner-up. Overall, the optimal algorithm depends heavily on the goals and constraints of the use case. On the performance metrics reported here, however, KNN is plainly the best, outperforming all others in every tested category; Boosting is second, offering a fast and stable mix of low errors and interpretability; Random Forest and Regularized Linear Regression share third place, excelling in explained variance but lagging in relative precision; SVM and Decision Tree occupy the middle ranks, with Linear Regression performing decently but unremarkably. The Neural Network, on the present evidence, should not be deployed without substantial modification. These observations can serve as decision criteria for model selection, hyperparameter optimization, and model tuning in future predictive modeling work.
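The hold-out or cross-validation check recommended above can be made concrete in a few lines. This is a sketch under the assumption that the feature matrix `X` and target `y` from the benchmark are available; it is not the paper's own validation procedure, but an out-of-fold R² far below 1.0 here would confirm the overfitting suspicion.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import KFold, cross_val_score

# Scaling inside the pipeline is re-fit on each training fold, so no
# information from the held-out fold leaks into the model.
knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(knn, X, y, cv=cv, scoring="r2")
print(f"out-of-fold R2: {scores.mean():.3f} +/- {scores.std():.3f}")
```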

Q6. Clustering is used well. To show country groups, include a visualization (such as a dendrogram or a cluster heat map).

A6. We used the following images to visualize the cluster results.
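Since the images themselves are not reproduced in this record, here is a minimal sketch of how a country dendrogram of the kind Reviewer 5 requested could be drawn; the `features` matrix (countries by standardized indicators) and the `countries` label list are assumed inputs.

```python
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from sklearn.preprocessing import StandardScaler

# features: country-by-indicator matrix; countries: row labels (assumed inputs).
Z = linkage(StandardScaler().fit_transform(features), method="ward")

plt.figure(figsize=(10, 4))
dendrogram(Z, labels=countries, leaf_rotation=90)
plt.title("Hierarchical clustering of EU countries (Ward linkage)")
plt.tight_layout()
plt.show()
```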

Q7. Add a future research agenda, covering, for example, AI use at the sectoral level, the function of national digital plans, and long-term impacts.

A7. We have added the following text to the section “Discussions, Limitations and Future Research”:

Research agenda for the future. Although this study presents a powerful, multi-faceted analysis of the macroeconomic and labor market factors behind the adoption of artificial intelligence among large European firms, several significant paths for future research remain open. A first direction is to extend the analysis to the sector level, which could provide more detailed information on the heterogeneity of AI adoption across sectors such as manufacturing, healthcare, and finance, all of which are rapidly digitalizing. Sector-specific factors, such as technology intensity, regulatory settings, and demand for specific skills, could reveal differentiated patterns of AI diffusion that aggregated national data cannot capture. Second, subsequent studies should explore national digital plans and policy frameworks to understand how they shape AI readiness and adoption outcomes. The heterogeneous effects of EU member states' digitalisation plans, infrastructure investment, and regulatory sophistication deserve attention, particularly regarding innovation system consistency and institutional capacity. Third, attention should be given to the long-run, path-dependent nature of AI adoption: AI is not a sudden technological jump but a long-run transformational process. Longitudinal research should study the impact of early AI adoption on firm productivity, labor reorganization, and value chain reconfiguration, not least in under-examined sectors such as agriculture, logistics, and the cultural industries. Furthermore, future studies could use firm-level and subnational data to account for intra-country diversity, better modeling localized innovation ecosystems, skills clusters, and institutional enablers of adoption; multilevel modeling methods could capture national, regional, and firm-specific interactions. Finally, a comparative research agenda on innovation ecosystems and venture capital maturity would illuminate the financing arrangements that enable or hinder the scaling of AI innovations. This is especially relevant for Europe, where fragmented venture market structures and limited public-private collaboration continue to hinder technology commercialization. Comparisons from high-growth hubs to lagging peripheries could identify paths for closing adoption divides and achieving a more balanced and resilient digital transition for the EU.

 Q8. The literature on AI economics, macro-innovation, and policy design has greatly benefited from this paper.

A8. Thanks dear reviewer 5.

Q9. The paper will be considerably more rigorous and comprehensible if the overfitting issues are resolved and some structural and visual clarifications are added. Clarifying explanations will also be beneficial.

A9. As indicated in the previous point, the R² value of 1 reflects the normalization of the results. Three images were added to display the clustering results.


Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The authors have made high-quality edits to the article, taking into account the comments received. At the same time, there are still a few points that need to be corrected.
1. When describing the theories in lines 233-317 after the sentence “It draws on three interconnected theoretical orientations:”, it is not advisable to make a list in this format. The following stylistic approach can be used. The introductory sentence “It draws on three interconnected theoretical orientations” should end with a period (...orientations.) instead of a colon. Then, each theory should be described in a separate paragraph, aligning the text by width. If desired, the paragraphs can be numbered (1., 2., 3.).
2. The formula in line 520 should be assigned a sequential number.
3. In lines 1058-1059, the authors write “Table 1. Performance Comparison of Regression Algorithms Based on Standard Evaluation Metrics. All input data were normalized to ensure comparability across models.” However, this is followed by a figure, which, in turn, has no title. These errors need to be corrected.
4. On page 39, in Figure 5. Optimal number of clusters with the Elbow Method, the font size of the axis labels appears too large compared to the overall style of the article. It is recommended to reduce the font size to ensure the visual harmony of the graph.

Author Response

POINT TO POINT ANSWERS TO REVIEWER 2

The authors have made high-quality edits to the article, taking into account the comments received. At the same time, there are still a few points that need to be corrected.


Q1. When describing the theories in lines 233-317 after the sentence “It draws on three interconnected theoretical orientations:”, it is not advisable to make a list in this format. The following stylistic approach can be used. The introductory sentence “It draws on three interconnected theoretical orientations” should end with a period (...orientations.) instead of a colon. Then, each theory should be described in a separate paragraph, aligning the text by width. If desired, the paragraphs can be numbered (1., 2., 3.).

 A1. The section has been reorganized as follows:

3. AI Readiness in Context: Integrating Macroeconomic and Labor Market Structures into a Framework for Enterprise Innovation

The adoption of Artificial Intelligence (AI) by large companies is now better understood as the result not just of firm-specific preparedness but of broader macroeconomic and labour market configurations. This approach assigns macroeconomic conditions and labour market characteristics (unemployment, labour quality, and institutional flexibility) a prominent role in shaping the environment in which large companies decide to adopt AI. The underlying assumption is that national contexts provide, or withhold, the enabling systemic preconditions for integrating AI technologies, and that these preconditions operate through several mediating channels: financial liquidity, exposure to trade, the direction of investment, and labour flexibility. While sectoral competition, internet infrastructure, and managerial foresight matter at the company level, macro-level constraints such as structural unemployment, capital allocation distortions, or limited openness to trade have a major impact on companies' ability to undertake a technological transformation. The theoretical model draws on established traditions in national systems of innovation, macro-labour economics, and structuralist development thinking to offer a parsimonious but integrated account of how macroeconomic variables and labour market dynamics jointly shape AI adoption among large European firms.

It draws on three interconnected theoretical orientations.

3.1 Innovation Systems Theory

Artificial intelligence adoption across European countries can be understood through the lens of innovation systems theory. This conceptual framework regards innovation not as a singular event but as the outcome of diversified interactions among firms, public institutions, labour market structures, and macroeconomic environments (Freeman, 1987; Lundvall, 1992). A system's ability to generate and diffuse innovation depends not merely on the availability of technology but equally on the quality of the institutional environment and the structural cohesion of the national economy. Within this framework, macroeconomic stability, trade openness, health expenditure, and credit availability are primitive factors that shape the systemic composition of innovation (Gama & Magistretti, 2025). At the same time, the structure of the labour market, reflected in formal employment, precarious work, and industrial labour, either favours or hinders the diffusion of emerging technologies. Economies with stable, regulated, and industrially integrated labour systems are better able to absorb innovation and channel it towards sustained transformative development (Cannavale et al., 2022). Theoretically, this framework brings institutional and structural viewpoints under a single analytical frame, emphasizing the complementarity of public policy, labour flows, and technology investment (Purnomo, 2023). This systemic logic also illuminates the differential paths observed among countries with otherwise similar economic features, underscoring the importance of internal coherence among system components for widespread and sustainable AI use. Technology adoption is thus understood as an emergent outcome of a national innovation system, rather than a linear function of economic means or sheer technological capacity.

3.2 Resource-Based View (RBV) of a Macroeconomic Context

Long applied at the firm level, the Resource-Based View (RBV) gains useful explanatory power when extended to the broader macroeconomic environment of digital transformation. At this scale, national resources such as credit availability, trade openness, and investment in technology infrastructure are critical enablers of firm-level capacity. The article's analysis of artificial intelligence (AI) adoption applies this schema, underscoring the structural economic determinants of firms' capacity to exploit early-stage digital technologies. According to RBV logic, a firm derives competitive advantage from resources that are valuable, rare, inimitable, and non-substitutable. When macroeconomic structures supply these preconditions, through stable access to finance, market openness, and public investment in technology infrastructure, they act as systemic resources that complement firms' internal resources (Stroumpoulis, Kopanaki, & Varelas, 2022; Jiang, Xuan, & Zhang, 2024). For instance, countries with high credit availability enable firms to invest over the long term in adopting and internalizing AI, while open trade regimes facilitate the acquisition of digital technology, and its associated externalities, from abroad. Infrastructure investment, especially in digital connectivity, combined with public research expenditure, further lowers barriers to innovation and accelerates technology diffusion (Khan, Mehmood, & Ali, 2024). The article confirms that these macro-structural variables are not mere backdrops but active agents of technological transformation. The identification of country groupings with different levels of AI adoption, consistent with cross-country differences in credit regimes, openness, and government expenditure, reinforces the RBV emphasis on the resource richness of the environment. Firms operating in macroeconomically supportive settings are better able to mobilize internal capacity and to respond dynamically to new AI opportunities. In sum, the article advances a macro-level extension of the RBV, in which national economic settings constitute resource pools whose characteristics condition firms' adaptive capacities. This reinforces the view that successful AI diffusion relies not only on company strategy but equally, if not more so, on the external resource environment.

3.3. Labor Market Adjustment Theory

Labor Market Adjustment Theory provides a critical lens through which to analyze the national variation in artificial intelligence adoption examined in the article. According to this theory, an economy's capacity to adjust to technological change depends significantly on its labor market configuration, specifically unemployment levels and labor market flexibility. High unemployment combined with labor market rigidity usually reflects structural weakness or institutional resistance, which can impede a country's shift towards new technologies such as AI (Liu, 2024). Employment-related variables (wage employment, vulnerable employment, industrial employment) are stressed in the article as indicators of labor market health, flexibility, and overall resilience. Economies with a high share of formal employment and a lower share of vulnerable or informal labor are more likely to record high levels of AI absorption. This observation confirms the theoretical hypothesis that labor markets able to re-shape the workforce effectively are better positioned to accommodate technology shocks and to restructure work arrangements around AI (Song, 2024). Moreover, labor market flexibility (operationalized via the ease of hiring and re-skilling, wage elasticity, and labor mobility) remains a deciding factor in the capacity of businesses and economies to meet the imperatives of digital transformation (Dave, 2024). In more flexible systems, the technological replacement of certain jobs can be offset through brisk entry into new occupations, which diminishes social resistance and enables a better diffusion of innovations. The cluster analysis introduced in the article highlights these relationships: countries that benefit from low unemployment and relatively stable labor market arrangements are not only more open to these technology deployments but also more efficient at absorbing them sustainably. These results thus empirically support Labor Market Adjustment Theory: structural flexibility and labor market resilience are major predictors of a country's ability to shift towards a new, AI-led economy. Labor markets are not passive sites but active agents of technological progress.

Overall, this combined framework shows that large-scale enterprise adoption of artificial intelligence (AI) is not merely the result of internal capabilities or sectoral demands, but is deeply embedded in the broader macroeconomic, institutional, and labor market configuration of each national economy. Drawing on Innovation Systems Theory (Arroyabe et al., 2024), a macro-level extension of the Resource-Based View (Li et al., 2025), and Labor Market Adjustment Theory (Sultana et al., 2024; Wang & Jiao, 2025), the model captures the multi-dimensional character of AI diffusion across European economies. It shows how technological transformation rests on complementarities among national innovation ecosystems, systemic resource availability, and the adaptive capacity of labor markets. Each theoretical lens helps to clarify the relationship between macro-level conditions (such as institutional convergence, financial liquidity, and labor market flexibility) and firm-level innovation decisions. The empirical evidence supports this integration, identifying distinct national groupings with different capacities to absorb and scale AI technologies (Arroyabe et al., 2024). This underscores the importance of policy responses that not only strengthen firm-level digital readiness but also build the enabling macroeconomic and institutional base needed for widespread, and therefore inclusive, AI adoption. Ultimately, effective AI diffusion flows not from discrete technological advances but from the structural alignment of economic governance, institutional design, and labour market functioning.

3.4 Hypothesis Formation: Linking Macroeconomic and Institutional Contexts to AI Adoption

These macro-level factors influence not only the motivation of firms to invest in AI but, more fundamentally, their capacity to scale and internalize such technologies successfully. Drawing on this theoretical foundation, the study derives the following hypothesis:

H1: In Europe, the adoption of artificial intelligence (AI) by large firms is associated with national macroeconomic stability, institutional consistency, and labour market flexibility, over and above firm-level capabilities.

Q2. The formula in line 520 should be assigned a sequential number.

A2. The formula has been renumbered as follows:

[1]

Accordingly, the following formulas in the text have also been renumbered:

[2]

[3]

Q3. In lines 1058-1059, the authors write “Table 1. Performance Comparison of Regression Algorithms Based on Standard Evaluation Metrics. All input data were normalized to ensure comparability across models.” However, this is followed by a figure, which, in turn, has no title. These errors need to be corrected.

A3. The figure has been renamed as follows:

Figure 1. Performance Comparison of Regression Algorithms Based on Standard Evaluation Metrics. All input data were normalized to ensure comparability across models.

Q4. On page 39, in Figure 5. Optimal number of clusters with the Elbow Method, the font size of the axis labels appears too large compared to the overall style of the article. It is recommended to reduce the font size to ensure the visual harmony of the graph.

A4. The figure has been remade as follows:
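The remade figure is likewise not reproduced here; for reference, this is a minimal sketch of how an elbow plot of this kind is typically generated with scikit-learn, assuming `X` is the standardized country-level feature matrix.

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Fit k-means for k = 1..10 and record the within-cluster sum of squares.
ks = range(1, 11)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in ks]

plt.plot(list(ks), inertias, marker="o")
plt.xlabel("Number of clusters k")
plt.ylabel("Within-cluster sum of squares (inertia)")
plt.title("Elbow method")
plt.show()
```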

Reviewer 3 Report

Comments and Suggestions for Authors

Dear Authors,

Thanks for addressing the comments in good details.

Author Response

POINT TO POINT ANSWERS TO REVIEWER 3

Q1. Dear Authors, Thanks for addressing the comments in good details.

A1. Thanks dear reviewer.

Reviewer 5 Report

Comments and Suggestions for Authors

The authors have adequately addressed the major revision comments, and the article is now suitable for acceptance in its current form.

Author Response

POINT TO POINT ANSWERS TO REVIEWER 5

Q1. The authors have adequately addressed the major revision comments, and the article is now suitable for acceptance in its current form.

A1. Thanks dear reviewer 5.
