The Innovation Lifecycle of AI-Driven Agriculture: Causal Dynamics in University–Industry–Research Collaboration
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The paper presents a study on the innovation lifecycle of AI-driven agriculture. It focuses on causal dynamics in university-industry research collaboration.
My general question to the authors concerns a clear definition of university-industry-research collaboration. I deliberately wrote "university-industry research collaboration", as I suppose the study focuses on research collaboration between universities (and other research institutions) and industry.
Research question: In my opinion, RQ2 is a general question that encompasses all the others and can be seen as the overall question of the study.
Abstract: In my opinion this section needs modification. I do not think that an abstract has room for such a general statement as "The implications of this study are significant, providing insights that could guide strategic planning and investment in AI applications in agriculture". The abstract also seems too long, and it does not mention the data source, time span, or geographical focus of the study, which makes it incomplete.
Introduction: In this section the authors presented the study’s background, research questions and methods applied as well as the paper structure.
Literature review: This section presents a more detailed study background focusing on the previous research studies.
Proposed methodology: This section presents a more detailed description of the methodology applied in the study. In Table 2 the authors present the data sources but do not give the time span covered by the study. In Figure 1 the authors clearly present the study's approach.
In this section the authors also presented the results of the applied procedure.
Results and discussions: In Figure 3 the authors finally show the time span for research articles. Yet it is not clear how this relates to the other data used in the study, or whether there is any rationale for including articles from 1985. The results show that the time span for the other data is the same. Is there any rationale for this? Is there no delay in projects/patents following the initial publications?
Figure 7 clearly shows a significant change starting in 2015 (or 2010 for the last element of this figure). Why not focus the analysis on this period?
In my opinion the "discussions" part of this section is too limited, as it includes only an analysis of the study's results. There should be a comparison with similar studies, such as those on other fields of university-industry research collaboration.
In sub-section 4.5 the authors present the study's limitations. Here, too, they do not mention the time span of the data used.
Conclusions: In this section the authors summarize the study. In my opinion, they should add a paragraph on the practical (policy) implications of their findings.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The study maps the innovation lifecycle at the AI×Agriculture intersection across four “pillars”—academic articles (Web of Science), EU projects (CORDIS), patents (WIPO), and start-ups (Crunchbase)—and builds annual time series (1985–2023). It narrows queries via a keyword method, classifies records into six agri sub-domains using zero-shot NLP (BART vs DeBERTa), applies several smoothing techniques, detects knee points (second derivative vs maximum curvature), and tests temporal relations with Granger causality, VAR-based impulse responses, and cross-correlations. Main claims include a “sequential progression” (start-ups → projects → articles → patents) and mixed causal/response patterns between pillars. Here are some comments in order to improve the overall quality of the work.
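For illustration, a minimal Python sketch of the zero-shot classification step summarized above; the model checkpoint and the sub-domain labels are assumptions for illustration only, not taken from the manuscript:

# Minimal sketch of zero-shot routing of records into agri sub-domains.
# Checkpoint and label set are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")  # assumed BART checkpoint

# Hypothetical sub-domain labels; the manuscript's six labels may differ.
labels = ["precision farming", "crop monitoring", "livestock management",
          "irrigation and water management", "supply chain and logistics",
          "pest and disease detection"]

record = "Deep learning for early detection of leaf blight from UAV imagery."
result = classifier(record, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))  # top-scoring sub-domain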
Align the storyline across RQs, methods, and conclusions
Issue. The RQs and lifecycle diagram suggest an upstream flow from academia → projects → patents → start-ups, whereas the results/conclusions emphasize a sequence with start-ups emerging first and also claim that articles predict patents/start-ups; the IRFs further report a negative effect from articles to patents. This reads as internally inconsistent.
Make research questions testable hypotheses and map them to analyses
Issue. RQ2–RQ6 are clear but not operationalized with directional/lagged hypotheses or acceptance criteria.
Data transparency & reproducibility: fully specify queries and construction rules
Issue. Table 2 lists sources but omits exact queries, retrieval dates, filters, family de-duplication for patents, and Crunchbase coverage caveats; projects are counted as “active per year”, but this rule is not formalized.
Action. In a new “Data Appendix,” provide the exact query strings (per pillar), pull dates, APIs/endpoints, inclusion/exclusion rules, and how yearly counts are formed (e.g., projects by active year). Clarify WIPO scope (applications vs grants; family handling). Publish CSVs and notebooks (Git repo) referenced in Data Availability.
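As an illustration of the kind of formalization requested, a small Python sketch of one possible “active per year” counting rule (the column names and dates are assumptions about the CORDIS export, not the authors' actual rule):

# Sketch: a project contributes 1 to every calendar year between its start
# and end dates. Column names and dates are assumed for illustration.
import pandas as pd

projects = pd.DataFrame({
    "start": pd.to_datetime(["2016-03-01", "2019-01-15"]),
    "end":   pd.to_datetime(["2018-08-31", "2021-06-30"]),
})

active = pd.Series(0, index=range(1985, 2024), name="active_projects")
for _, row in projects.iterrows():
    active.loc[row["start"].year:row["end"].year] += 1  # inclusive year range
print(active[active > 0])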
Clarify the “query relevance” objective function and run robustness checks
Issue. You mention an “objective function” to evaluate keyword relevance but do not define it or report thresholds.
Action. Define the metric (e.g., precision@k from a hand-labeled validation set), show sensitivity to keyword variants (± terms like “smart farming”, “agritech”), and report how the results (knee years, Granger links) change under alternative queries.
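For example, precision@k could be computed against a hand-labeled validation sample as in the following sketch (identifiers, labels, and k are hypothetical):

# Sketch of a precision@k objective for query relevance.
def precision_at_k(retrieved_ids, relevant_ids, k):
    """Fraction of the top-k retrieved records hand-labeled as relevant."""
    top_k = retrieved_ids[:k]
    return sum(1 for r in top_k if r in relevant_ids) / len(top_k)

retrieved = ["WOS:001", "WOS:002", "WOS:003", "WOS:004"]  # ranked query hits
relevant = {"WOS:001", "WOS:003"}                         # hand-labeled as on-topic
print(precision_at_k(retrieved, relevant, k=4))           # 0.5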
Preprocessing & smoothing: justify parameters and guard against spurious dynamics
Issue. You apply SMA(3), EMA/SES(12), LOWESS(frac=0.1), and Gaussian(k=5) smoothing and then pick “original for articles, SMA for projects/patents/start-ups” by visual inspection; there is no stationarity or pre-whitening check before the causality tests.
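A minimal sketch of the kind of stationarity check meant here, using an ADF test with differencing until stationarity before any causality test (the series is a random placeholder, not the authors' data):

# Sketch: ADF-based stationarity check with differencing.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
series = pd.Series(np.cumsum(rng.normal(size=39)),
                   index=range(1985, 2024), name="articles")  # placeholder counts

def make_stationary(s, alpha=0.05, max_diff=2):
    d = 0
    while adfuller(s.dropna())[1] > alpha and d < max_diff:
        s = s.diff()   # difference once more
        d += 1
    return s.dropna(), d

stationary, n_diffs = make_stationary(series)
print(f"differenced {n_diffs} time(s); ADF p-value = {adfuller(stationary)[1]:.3f}")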
Causality reporting: tabulate statistics and add VAR diagnostics/IRF CIs
Issue. Granger results are summarized as a network; IRFs and CCFs are described qualitatively; diagnostics (AIC/BIC values, stability, residual tests) and statistical bands are missing.
Action. Add: (i) a table of Granger p-values by pair × lag (1–5), with multiplicity control; (ii) the VAR lag selected by AIC/BIC (with numerical values); (iii) LM tests for residual autocorrelation and the roots of the companion matrix; (iv) IRFs with bootstrap confidence bands; (v) CCF plots with significance envelopes. Explicitly discuss the apparent conflict “articles → patents (Granger) vs negative article shock on patents (IRF)” as answers to different questions (predictability vs dynamic impulse response).
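To make the request concrete, a Python sketch of this reporting with statsmodels (the two series are random placeholders for any pillar pair, e.g. articles and patents; the plotted bands are the default asymptotic ones, with bootstrap bands as the analogous option):

# Sketch: Granger p-values by lag, VAR lag selection, stability, IRFs.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(39, 2)),
                    columns=["patents", "articles"],
                    index=range(1985, 2024))

# (i) Granger test "articles -> patents" (second column predicting the first), lags 1-5.
gc = grangercausalitytests(data[["patents", "articles"]], maxlag=5)
pvals = {lag: res[0]["ssr_ftest"][1] for lag, res in gc.items()}
print("articles -> patents, p-value by lag:", pvals)

# (ii)-(iv) VAR lag choice by AIC, stability check, impulse responses with bands.
model = VAR(data)
order = model.select_order(maxlags=5)
fitted = model.fit(order.aic)        # lag order selected by AIC
print("stable VAR:", fitted.is_stable())
irf = fitted.irf(10)                 # 10-period impulse responses
irf.plot(orth=True)                  # plotted with default error bands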
Editorial and references cleanup (MDPI style & correctness)
Issue. Minor language issues (e.g., “increasement,” “pilled”) and a mis-referenced WIPO source (“WIPO PatentScope Simple Search” currently points to an APA style page).
Moreover, sentences like “[...] in education and curiosity-led research, as [1] investigated. [...]” must in all cases include the author’s name; the correct form here is “[...] in education and curiosity-led research, as Gallagher et al. [1] investigated. [...]”. This occurs throughout the text, so the phrasing needs to be revised accordingly.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
The study addresses a current topic with a significant knowledge gap, interconnecting Artificial Intelligence, agriculture, and innovation, areas with a strong potential impact on the SDGs and innovation policy.
The work is well structured, but the text has some deficiencies in the English language. There is some redundancy, and some sentences are very long, so a linguistic review is recommended.
In terms of the literature review, a more critical analysis is recommended, as the current one is considered highly descriptive. It is necessary to highlight the objectives and the gaps this work aims to fill.
The methodology is robust. The adoption of Granger causality, impulse response functions, and time series analysis reflects innovation in the context of digital agriculture.
However, we consider the description too detailed and technical, especially in the research query and smoothing techniques sections, which may distract the reader. I recommend moving these details to appendices. Regarding the discussion of results, it is necessary to connect them with the findings of the literature review and the theories supporting the study, as this would strengthen the work.
Regarding the study's limitations, the constraints that affect the generalization of the results, particularly those arising from the techniques used, should be stated.
Recommendation
In short, the study has academic value, but it requires fundamental reformulation in terms of the critical literature review, methodological clarity, greater theoretical articulation in the discussion of results, and the presentation of limitations.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 4 Report
Comments and Suggestions for Authors
The research paper “The Innovation Lifecycle of AI-driven Agriculture: Causal Dynamics in University-Industry-Research Collaboration” explores the innovation lifecycle of the AI-driven agriculture field through a causal inference analysis within the University-Industry-Research ecosystem, extracting four key pillars: scientific research papers, research projects, patents, and start-ups.
This study presents an ambitious framework for analyzing innovation dynamics in AI-driven agriculture through a causal inference approach. It uses four data sources: research papers, research projects, patents, and start-ups. The paper draws on multiple databases (Web of Science, CORDIS, WIPO, Crunchbase) and employs time-series smoothing, knee-point detection, and Granger causality analysis to reveal sequential relationships among these sources. Although merging these databases into one framework is a novel approach, the data sources differ radically in scope and bias; for example, Crunchbase is not curated according to scientific standards. The paper does not adequately justify normalization across them.
The term “innovation lifecycle” remains ambiguous and requires a definition.
The paper is quite long (27 pages) and in some parts purely descriptive, with statements like “In the present work we have investigated the intersection between two domains of maximum impact in nowadays evolution, namely artificial intelligence and agriculture” (line 897). The claim that artificial intelligence and agriculture are the two domains of maximum impact in today's evolution is very ambitious and needs substantiation.
Some references are not correct, e.g., “Web of Science, 2024, Retrieved from https://www.webofscience.com/wos/woscc/basic-search”.
Thus, I would suggest major revisions.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
Comments and Suggestions for Authors
I would like to thank you for your careful and comprehensive revision of the manuscript. It is evident that you have made a significant effort to address all the comments provided in the previous review round. The paper now presents greater clarity, structure, and theoretical articulation.
The literature review has been considerably strengthened, moving beyond a merely descriptive perspective to a more critical and analytical discussion. The methodological exposition is now clearer and more accessible, and the integration of appendices for the technical details was an appropriate choice. Moreover, the enhanced discussion linking the empirical findings with the theoretical frameworks—particularly the Triple Helix and innovation system perspectives—adds substantial value.
Overall, the manuscript now shows a more mature and coherent approach, demonstrating scientific rigor and a solid contribution to understanding AI-driven innovation in agriculture. I recognize its merit and potential for publication.
Reviewer 4 Report
Comments and Suggestions for Authors
The authors have adequately addressed the reviewer's comments.