Review

The Validation–Deployment Gap in Agricultural Information Systems: A Systematic Technology Readiness Assessment

by Mary Elsy Arzuaga-Ochoa *, Melisa Acosta-Coll and Mauricio Barrios Barrios
Department of Computer Science and Electronics, Universidad de la Costa CUC, Barranquilla 080001, Colombia
* Author to whom correspondence should be addressed.
Informatics 2026, 13(1), 14; https://doi.org/10.3390/informatics13010014
Submission received: 5 October 2025 / Revised: 9 November 2025 / Accepted: 21 November 2025 / Published: 19 January 2026

Abstract

Agricultural marketing increasingly integrates Agriculture 4.0 technologies—Blockchain, AI/ML, IoT, and recommendation systems—yet systematic evaluations of computational maturity and deployment readiness remain limited. This Systematic Literature Review (SLR) examined 99 peer-reviewed studies (2019–2025) from Scopus, Web of Science, and IEEE Xplore following PRISMA protocols to assess algorithmic performance, evaluation methods, and Technology Readiness Levels (TRLs) for agricultural marketing applications. Hybrid recommendation systems dominate current research (28.3%), achieving accuracies of 80–92%, while blockchain implementations (15.2%) show fast transaction times (<2 s) but limited real-world adoption. Machine learning models using Random Forest, Gradient Boosting, and CNNs reach 85–95% predictive accuracy, and IoT systems report >95% data transmission reliability. However, 77.8% of technologies remain at validation stages (TRL ≤ 5), and only 3% demonstrate operational deployment beyond one year. The findings reveal an “efficiency paradox”: strong technical performance (75–97/100) contrasts with weak economic validation (≤20% include cost–benefit analysis). Most studies overlook temporal, geographic, and economic generalization, prioritizing computational metrics over implementation viability. This review highlights the persistent validation–deployment gap in digital agriculture, urging a shift toward multi-tier evaluation frameworks that include contextual, adoption, and impact validation under real deployment conditions.

1. Introduction

Global food systems face an unprecedented paradox: while agricultural production has increased substantially over recent decades, distribution inefficiencies persist—particularly affecting smallholder farmers in developing economies. Post-harvest losses reach 30–40% of total production, translating into approximately USD 310 billion in annual waste [1,2]. Beyond physical losses, structural asymmetries within agricultural value chains leave smallholder farmers capturing only 15–30% of the final product value, while intermediaries appropriate 70–85% [3,4]. This inequitable distribution perpetuates rural poverty, discourages agricultural investment, and undermines food security, especially where agriculture represents the main livelihood for over 60% of the population.
Traditional agricultural marketing systems remain constrained by excessive dependence on intermediaries, limited bargaining power among fragmented producers, opaque pricing mechanisms, and inadequate traceability that hampers food safety verification. Infrastructural deficits in storage, logistics, and digital connectivity further exacerbate these inefficiencies [5,6,7].
The emergence of Agriculture 4.0—encompassing Blockchain for transparent transactions, artificial intelligence (AI) and machine learning (ML) for predictive analytics, Internet of Things (IoT) for real-time monitoring, and intelligent recommendation systems for decision support—has inspired optimism about transforming these entrenched dynamics [8,9,10]. Advocates contend that these technologies can reduce information asymmetries, eliminate intermediary dependence, enhance traceability and food safety, and empower farmers through data-driven marketing strategies [11,12].
Existing evidence suggests promising applications. Blockchain platforms such as IBM Food Trust and Hyperledger Fabric have demonstrated end-to-end supply chain traceability capabilities [13,14]. IoT sensor networks combined with cloud computing enable real-time monitoring of product quality, storage conditions, and logistics parameters [15]. Hybrid recommendation systems powered by AI algorithms provide personalized guidance on crop selection, input optimization, harvest timing, and market strategies based on soil conditions, climate patterns, and demand forecasts [16]. These technologies theoretically address critical marketing bottlenecks while potentially democratizing access to market information and competitive opportunities.
Nevertheless, prior studies have concentrated primarily on production-oriented applications such as precision agriculture [17,18], crop management [19,20,21], and digital farming platforms. These works overlook the commercialization phase—the critical juncture where production meets the market and value is distributed among actors. Agricultural marketing optimization thus represents a distinct research domain requiring tailored digital frameworks and evaluation strategies.
Addressing this gap is crucial because marketing inefficiencies, rather than production limits, now constitute the binding constraint on smallholder prosperity. Despite promising technical achievements, much of the literature demonstrates technological optimism without sufficient consideration of implementation feasibility, scalability, or socioeconomic equity. High reported accuracies and efficiencies often lack validation under real-world, resource-constrained conditions.
Accordingly, this Systematic Literature Review (SLR) examines 99 peer-reviewed studies (2019–2025) from Scopus, Web of Science, and IEEE Xplore to assess how Agriculture 4.0 technologies—particularly Blockchain, AI/ML, IoT, and recommendation systems—optimize agricultural marketing. The review adheres to PRISMA protocols and evaluates both computational performance and implementation readiness, with special emphasis on contexts involving smallholder farmers.
The study is guided by the following four unified research questions:
RQ1: What digital technologies and methodological approaches have been employed to optimize agricultural marketing between 2019 and 2025, and what is their relative prevalence in the academic literature?
RQ2: What performance metrics and impact indicators have been reported for these technologies, and how robust is the supporting evidence?
RQ3: What Technology Readiness Levels (TRLs) characterize existing implementations, and what explains the gap between laboratory validation and real-world deployment?
RQ4: What barriers hinder successful technology adoption in agricultural marketing, and how do these vary across regions and production scales?
By addressing these questions, this review contributes in three major ways. First, it provides the first focused synthesis on agricultural marketing optimization, bridging a critical gap in technology-driven agricultural research. Second, it applies a multi-dimensional assessment encompassing technical, economic, and equity dimensions to move beyond purely computational validation. Third, it offers evidence-based insights for researchers, policymakers, and practitioners to align digital innovation with inclusive agricultural development objectives.

2. Methodology

This study employs a Systematic Literature Review (SLR) following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) protocol. Unlike traditional narrative reviews, the SLR approach ensures a structured, transparent, and reproducible process for identifying, evaluating, and synthesizing existing research, thereby minimizing bias and enhancing reliability.
This section details the research design, search strategy, selection criteria, quality assessment, and analytical framework used to ensure methodological rigor and reproducibility.

2.1. Research Design and Guiding Framework

The SLR was structured around four unified research questions (RQ1–RQ4) formulated in the Introduction, which guided database selection, search string development, inclusion/exclusion criteria, and analytical categorization.
A mixed-methods design was adopted, integrating quantitative bibliometric analysis (publication trends, technology frequency, and performance metrics) with qualitative thematic synthesis (barrier identification, adoption analysis, and equity implications).
To ensure systematic and reproducible analysis, a predefined review protocol was established specifying the following:
  • Selected databases and justification for inclusion.
  • Boolean search strings using controlled vocabulary.
  • Temporal and language boundaries.
  • Inclusion/exclusion criteria with operational definitions.
  • Data extraction variables aligned with research questions; quality assessment criteria adapted from CASP (Critical Appraisal Skills Programme) [22].
Additionally, three multidimensional analytical matrices were constructed:
  • Technology–Performance Matrix, mapping technologies to evaluation metrics (accuracy, latency, scalability, cost).
  • Implementation–Maturity Matrix, linking Technology Readiness Levels (TRLs) with validation evidence and field deployment duration.
  • Barrier–Opportunity Matrix, quantifying adoption constraints and enabling factors extracted from the discussion sections of each paper.
Each matrix was piloted on ten sample studies and refined iteratively to ensure category consistency. Inter-rater reliability for the pilot coding achieved κ = 0.81 (substantial agreement).

2.2. Database Selection and Justification

Three specialized databases were selected for coverage, quality, and relevance:
  • Scopus (Elsevier): Broad peer-reviewed coverage in agriculture, computer science, and economics (~27,000 journals) [23].
  • Web of Science (WoS): Emphasizes high-impact journals with rigorous peer review and citation tracking [24].
  • IEEE Xplore: Provides technical depth on IoT, blockchain architectures, machine learning models, and recommendation systems [25].
These databases were prioritized over CAB Abstracts, AGRIS, or Google Scholar to ensure quality-controlled, peer-reviewed records with structured metadata and export functionality for bibliometric analysis.

2.3. Search Strategy and Query Development

Search queries were developed iteratively through pilot searches and refined based on relevance assessment of the initial results. The final Boolean search strings combined controlled vocabulary terms with natural language keywords, structured as follows:
  • (“blockchain” OR “IoT” OR “artificial intelligence”) AND (“agriculture” OR “agricultural marketing”);
  • (“e-commerce” OR “digital agricultural markets”) AND (“sustainability” OR “supply chain efficiency”);
  • (“agricultural marketing strategies” OR “business models” OR “local markets” OR “collaborative economy”) AND (“trends” OR “digital transformation” OR “agri-food innovation”);
  • (“emerging agricultural technologies” OR “digital transformation in farming”) AND (“market access” OR “efficient commercialization” OR “agricultural logistics”).
Searches were conducted in the title, abstract, and keyword fields to balance precision and recall. Pilot testing revealed that full-text searches generated excessive false positives (documents mentioning technologies tangentially without substantive analysis). Search strings were executed independently in each database during March–April 2025.

2.4. Inclusion and Exclusion Criteria

Articles were evaluated against predefined criteria applied sequentially during screening stages:

2.4.1. Inclusion Criteria

  • Publication type: Peer-reviewed journal articles, conference proceedings (IEEE), and systematic reviews.
  • Language: English (due to resource constraints and research team linguistic capabilities).
  • Temporal scope: Published between January 2019 and April 2025 (capturing recent developments while maintaining currency).
  • Thematic relevance: Primary focus on digital technologies (Blockchain, AI/ML, IoT, recommendation systems, Big Data) applied to agricultural marketing, commercialization, distribution, or market access.
  • Methodological standards: Empirical studies, case studies, systematic reviews, or conceptual frameworks with clear methodology.
  • Accessibility: Full-text available through institutional subscriptions or open access repositories.

2.4.2. Exclusion Criteria

  • Publication type: Books, dissertations, editorials, preprints without peer review.
  • Thematic scope: Studies lacking computational/algorithmic contribution or implementation details.
  • Methodological clarity: In this review, the criterion “lack of clear methodology” was applied when an article did not specify its data sources, analytical procedures, or validation approach. During full-text screening, studies that described technological proposals without explaining how results were obtained or evaluated were excluded. This decision process was aligned with the quality assessment described in Section 2.6 to ensure consistency.
  • Duplication: Multiple publications of identical research (retained most comprehensive version).

2.5. Selection Process and PRISMA Flow

The selection process followed PRISMA guidelines through four sequential stages (Figure 1):
Stage 1: Identification—initial database searches (March–April 2025) retrieved 364 records: Scopus (150), Web of Science (120), IEEE Xplore (94).
Stage 2: Screening—duplicate removal identified 120 duplicates, leaving 244 articles for screening.
Stage 3: Eligibility assessment—title and abstract screening of the 244 records resulted in 128 complete articles selected for full-text evaluation and 116 articles excluded based on title/abstract review. Two independent reviewers conducted screening, with discrepancies resolved through discussion.
Stage 4: Full-text evaluation and final inclusion—the 128 complete articles were assessed against detailed criteria. After full-text review, 99 articles met all inclusion criteria and were included for data extraction and synthesis.

2.6. Quality Assessment

To distinguish robust evidence from preliminary findings, the included articles underwent quality assessment using adapted CASP criteria [26] tailored for technology and agricultural economics research. Quality assessment was conducted by two independent reviewers for 30% of the articles (n = 30), with inter-rater reliability calculated using Cohen’s Kappa (κ = 0.76, indicating substantial agreement) [27]. The remaining articles were assessed by single reviewers with periodic cross-checking.
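The review reports Cohen’s Kappa but does not publish the underlying coding data or scripts. As a minimal sketch, assuming the two reviewers’ quality-band ratings are available as parallel lists and that scikit-learn is installed (neither is stated in the paper), the agreement statistic could be computed as follows:

```python
# Minimal sketch: inter-rater agreement (Cohen's kappa) for the dual-coded subset.
# The quality bands and ratings below are illustrative; the actual coding data
# for the n = 30 dual-reviewed articles are not published with the review.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["high", "medium", "medium", "low", "high", "medium"]
reviewer_b = ["high", "medium", "low",    "low", "high", "medium"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # the review reports kappa = 0.76 (substantial agreement)
```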
Quality Dimensions (each scored 1–5):
  • Research design clarity: Explicit methodology, clear research questions, appropriate methods for objectives.
  • Data quality: Sample size adequacy, data source transparency, measurement instrument validity.
  • Analytical rigor: Statistical techniques appropriateness, result reproducibility, explicit limitation acknowledgment.
  • Implementation maturity: Technology Readiness Level (TRL) reporting, field validation evidence, deployment assessment beyond prototype.
  • Economic consideration: Cost analysis presence, economic viability discussion, farmer perspective inclusion.
  • Overall quality scores (5–25 points) enabled categorization:
    • High quality (20–25 points): 15 articles (15.2%)—robust methodology, field validation, economic analysis.
    • Medium quality (15–19 points): 62 articles (62.6%)—clear methodology, limited field validation, minimal economic assessment.
    • Low quality (10–14 points): 22 articles (22.2%)—preliminary findings, prototype-only, absent economic analysis.
Quality scores were used descriptively to contextualize findings rather than exclude articles, as even lower-quality studies contribute to understanding research trends and gaps.

2.7. Data Extraction and Synthesis

A structured data extraction and coding matrix was developed in Microsoft Excel 365, designed to ensure consistency and traceability in the transformation of full-text articles into analyzable variables. The matrix aligned each extracted variable directly with the corresponding research question (RQ1–RQ4). Variable definitions and decision rules were documented in a data dictionary to ensure replicability. The construction of composite performance and viability scores is described in detail in Appendix A.
Bibliometric Variables (RQ1)
These variables were used to examine research volume, disciplinary orientation, and publication patterns:
Publication year, journal name, and journal quartile (Scimago Journal Rank, SJR).
Author institutional affiliation and geographic origin (classified by region and economic development level).
Citation count (Google Scholar, accessed December 2024).
Study type (empirical, conceptual, review, case study, field trial, bibliometric).
Technology Variables (RQ1, RQ2)
These variables captured the nature and purpose of technological implementation:
Technology category: Blockchain, AI/ML, IoT, Recommendation Systems, Big Data/Cloud, AR/3D, or Hybrid configurations.
Primary application domain: Crop recommendation, yield prediction, price forecasting, traceability, quality grading, logistics optimization, or market access enhancement.
Research design: Experimental, simulation, case study, survey-based evaluation, field deployment, or mixed-methods design.
Reported performance metrics: Accuracy, Precision, Recall, F1-score, R2, RMSE, MAE, latency, throughput, or equivalent indicators. Latency and throughput were included because several reviewed systems operate in real-time environments (e.g., IoT monitoring and blockchain transactions). In these cases, performance is not determined solely by predictive accuracy but also by the system’s ability to respond efficiently under operational conditions.
Implementation and Maturity Variables (RQ3)
These variables supported evaluation of deployment readiness:
Technology Readiness Level (TRL) assessed using the NASA scale (1–9) adapted for agricultural systems, where adaptation accounted for production seasonality, supply chain integration, and actor coordination requirements.
Implementation stage: Prototype, Pilot (<50 users, <6 months), Operational (>50 users, 6–12 months), or Sustained Implementation (>12 months).
Deployment environment: Laboratory, simulation sandbox, controlled field pilot, or real-world operational deployment.
Supply chain coverage: Pre-production, production, post-harvest, distribution, commercialization, or end-to-end integration.
Geographic and Scale Variables (RQ1)
Region of design and/or deployment (Asia, Europe, Africa, North America, Latin America, Oceania).
Target deployment context (high-, middle-, or low-income country).
Farm scale orientation: Smallholder-specific (<2 ha), small-to-medium (2–10 ha), large-scale (>10 ha), or scale-agnostic.
Barrier and Opportunity Variables (RQ4)
Reported adoption barriers, coded into categories: infrastructure, cost, digital literacy, interoperability, regulatory, data governance/security, market integration, and stakeholder readiness.
Documented opportunities: efficiency gains, traceability enhancement, market access, income stabilization, environmental monitoring, and sustainability benefits.
Economic evaluation: presence of implementation cost reporting, cost–benefit analysis, or return-on-investment (ROI) assessment (coded yes/no).
Extraction Reliability and Reviewer Cross-Checking
Data extraction was conducted by two independent reviewers for 20% of the sample (n = 20) to assess consistency and reduce subjective interpretation. Inter-rater reliability reached 93.2% concordance across the coded variables. Discrepancies were discussed and resolved through consensus, with the coding rules refined accordingly. The remaining articles were coded by a primary reviewer under periodic secondary verification to maintain consistency and prevent drift.
Synthesis of extracted data followed a sequential explanatory integration approach, where descriptive statistics (Phase 1) informed maturity and performance comparisons (Phase 2), which were then interpreted through thematic synthesis of barrier and socioeconomic discussions (Phase 3).

2.8. Analytical Framework

Data analysis followed a structured, multi-phase analytical framework designed to integrate bibliometric trends, performance evaluation, and contextual interpretation.
Phase 1: Descriptive Bibliometric and Trend Analysis (RQ1)
Descriptive statistics were computed to identify publication patterns, research concentrations, and geographic distributions. Data preprocessing and tabulation were carried out in Microsoft Excel 365, while visualizations were generated in Python 3.10 using pandas, matplotlib, and seaborn libraries.
The analytical steps included the following:
  • Frequency distributions and annual publication trends (2019–2024).
  • Regional research mapping using author institutional affiliations.
  • Technology prevalence analysis by category and application domain.
  • Citation impact evaluation using Google Scholar citation data.
No inferential statistical tests were applied in this phase, as the purpose was descriptive characterization rather than hypothesis testing.
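To make the Phase 1 workflow concrete, the following is a minimal sketch of the descriptive tabulation and trend visualization using the pandas and matplotlib libraries named above; the records and column labels are assumptions, since the actual extraction matrix resides in Excel and is not published.

```python
# Minimal sketch of the Phase 1 descriptive analysis (pandas + matplotlib).
# Column names and records are illustrative; the real extraction matrix is an
# Excel workbook (Section 2.7) that is not reproduced here.
import pandas as pd
import matplotlib.pyplot as plt

records = pd.DataFrame({
    "year":       [2019, 2021, 2023, 2023, 2024],
    "region":     ["Asia", "Europe", "Asia", "Africa", "Asia"],
    "technology": ["IoT", "Blockchain", "AI/ML", "Recommendation", "AI/ML"],
})

# Frequency distributions feeding the descriptive figures (e.g., Figure 2).
annual_counts = records["year"].value_counts().sort_index()
region_counts = records["region"].value_counts()
tech_counts   = records["technology"].value_counts()

fig, ax = plt.subplots(figsize=(6, 3))
annual_counts.plot(kind="bar", ax=ax, color="steelblue")
ax.set_xlabel("Publication year")
ax.set_ylabel("Number of studies")
ax.set_title("Annual publication frequency")
fig.tight_layout()
fig.savefig("annual_trend.png", dpi=300)
```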
Phase 2: Performance and Maturity Assessment (RQ2, RQ3)
Quantitative performance results were synthesized using structured comparison rather than meta-analysis due to dataset heterogeneity, varied evaluation metrics, and inconsistent reporting standards across studies.
  • Performance metrics (e.g., accuracy, precision, F1-score, R2, RMSE, latency) were aggregated using range, mean when available, and frequency of metric reporting.
  • To contextualize the NASA Technology Readiness Level (TRL) scale for agricultural market systems, we adopted a domain-specific adaptation reflecting the unique characteristics of agricultural deployment environments. While TRL 1–3 (conceptual and laboratory validation) remains unchanged, TRL 4–6 was operationalized to incorporate field pilot feasibility under environmental variability, supply chain infrastructure, and farmer adoption conditions. Specifically, TRL 4 corresponds to prototype validation in controlled agricultural settings (e.g., research farms); TRL 5 represents pilot implementation with real users under limited scale (<50 farmers or <6 months of use); and TRL 6 requires operational deployment in heterogeneous farming contexts with demonstrable continuity (>50 users and >6 months operation). TRL 7–9 was defined by increasing degrees of scalability, integration with market actors, and sustained multi-season adoption. This adaptation ensures that maturity assessments account not only for technical performance but also for real-world agricultural constraints such as infrastructure availability, climate variability, digital literacy, and value chain actor coordination.
  • Implementation maturity was categorized into Prototype, Pilot, Operational Deployment, and Sustained Deployment based on deployment duration, user scale, and real-world usage evidence.
  • Cross-tabulations were performed to evaluate relationships between technology type, performance reporting rigor, and maturity stage, highlighting the research-to-practice gap.
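As an illustration of the cross-tabulation step, the sketch below (assumed column names, toy records) shows how technology category and implementation stage could be cross-tabulated with pandas to obtain the per-technology stage shares reported in Section 3.4.2.

```python
# Sketch of the Phase 2 cross-tabulation (technology category vs. maturity stage).
# The categorical codes mirror Section 2.7; the DataFrame contents are illustrative.
import pandas as pd

df = pd.DataFrame({
    "technology": ["Blockchain", "AI/ML", "IoT", "AI/ML", "Recommendation"],
    "stage":      ["Prototype", "Pilot", "Sustained", "Prototype", "Pilot"],
})

# Row-normalised percentages, analogous to the per-technology shares in Figure 11.
stage_share = pd.crosstab(df["technology"], df["stage"], normalize="index") * 100
print(stage_share.round(1))
```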
Phase 3: Thematic and Critical Qualitative Synthesis (RQ4)
Qualitative analysis focused on interpreting contextual constraints, adoption barriers, and enabling conditions.
  • Thematic coding was conducted using a hybrid inductive–deductive approach, where deductive codes were derived from the research questions and inductive codes emerged during iterative reading.
  • Coding was performed manually using tag-matrix structures in Excel, rather than automated qualitative analysis software, due to the structured nature of reported statements.
  • Barrier and opportunity statements were categorized into frequency groups (infrastructure, cost, literacy, interoperability, regulatory, data security, organizational readiness).
  • Equity implications were assessed through content analysis of discussions referencing smallholder farmers, accessibility, affordability, and digital divide challenges.
  • A comparative matrix plotted TRL level vs. presence/absence of economic validation, operationalizing the “efficiency paradox”: high technical performance coupled with limited deployment viability.
  • Limitations and discussion sections were thematically coded to identify barriers and opportunities, barrier frequencies, and equity and accessibility considerations. Relationships among TRL, validation type, and economic evidence were explored with Pearson correlations and ANOVA. All plots were generated in Python (matplotlib); no formal meta-analysis was attempted due to metric heterogeneity.
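The exploratory association tests mentioned in the last bullet could be run as in the following sketch; scipy is an assumed dependency, and the coded values are illustrative only, not the review’s data.

```python
# Exploratory association tests described in Phase 3 (Pearson r and one-way ANOVA).
# scipy is an assumed dependency; all values below are invented for illustration.
from scipy.stats import pearsonr, f_oneway

trl             = [4, 5, 5, 7, 6, 3, 4, 5, 7, 6]   # assessed TRL per study
econ_validation = [0, 0, 1, 1, 0, 0, 0, 0, 1, 1]   # 1 = cost-benefit/ROI reported

r, p_r = pearsonr(trl, econ_validation)
print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")

# One-way ANOVA: does mean TRL differ across validation-type groups?
trl_lab   = [3, 4, 4, 5]   # laboratory-only validation
trl_pilot = [5, 5, 6]      # controlled field pilot
trl_oper  = [6, 7, 7]      # operational deployment
f_stat, p_f = f_oneway(trl_lab, trl_pilot, trl_oper)
print(f"ANOVA F = {f_stat:.2f} (p = {p_f:.3f})")
```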
To ensure comparability across heterogeneous studies and technology categories, composite scores for Technical Performance and Economic Viability were calculated using normalized multi-indicator indices, following the procedure detailed in Appendix A. Technical Performance Scores (0–100) were derived from reported accuracy, reliability, latency, throughput, and functional completeness metrics. Each indicator was first rescaled to a 0–1 interval using min–max normalization and then averaged with equal weighting due to the absence of standardized weighting criteria in the literature. Economic Viability Scores (0–100) were calculated using implementation cost, infrastructure accessibility, presence of documented cost–benefit analysis and reported return-on-investment (ROI) evidence. Indicators were similarly normalized and averaged to compute a composite viability index. The scores were then plotted as median values per technology category, with bubble size representing the number of studies contributing to each estimate. This scoring procedure does not imply economic profitability or deployment readiness but provides a comparative framework to visualize maturity gaps and the performance–viability misalignment across technologies.
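As a hedged illustration of the composite scoring procedure (min–max normalization, equal weighting, 0–100 rescaling, and per-category medians), the sketch below uses invented indicator values; the authoritative variable definitions are those in Appendix A.

```python
# Sketch of the composite Technical Performance scoring: min-max normalisation of
# each indicator, equal-weight averaging, and a 0-100 rescaling. Indicator columns
# and values are illustrative; Appendix A documents the actual definitions.
import pandas as pd

indicators = pd.DataFrame({
    "technology":    ["AI/ML", "Blockchain", "IoT", "Big Data/Cloud"],
    "accuracy":      [0.92, 0.90, 0.96, 0.88],
    "reliability":   [0.85, 0.60, 0.97, 0.90],
    "latency_score": [0.70, 0.40, 0.95, 0.80],  # higher = better (inverted latency)
})

numeric = indicators.drop(columns="technology")
normalised = (numeric - numeric.min()) / (numeric.max() - numeric.min())
indicators["technical_score"] = normalised.mean(axis=1) * 100  # equal weighting

# Median composite score per technology category, as plotted in Figure 9.
print(indicators.groupby("technology")["technical_score"].median().round(1))
```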

2.9. Limitations

  • Publication bias: Systematic reviews reflect the published literature disproportionately reporting positive results. Failed implementations receive minimal documentation, potentially generating optimistic effectiveness assessments.
  • Database restrictions: Limiting searches to Scopus, WoS, IEEE Xplore excludes gray literature and non-English publications, introducing potential geographic bias.
  • Temporal window: The 2019–2025 timeframe prioritized recent developments but excludes earlier foundational work.
  • Metric heterogeneity: Diverse performance metrics, evaluation contexts, and reporting standards precluded formal meta-analysis. Synthesis remained qualitative.
  • Validation data scarcity: Only 31% of studies reported temporal/geographic generalization testing, limiting robust assessment of algorithm transferability.

3. Results

3.1. Descriptive Overview of Selected Studies

This section presents temporal patterns, geographic distribution, and publication quality of the 99 included articles to contextualize subsequent technical findings.

3.1.1. Temporal Distribution and Publication Trends

Academic interest in digital agricultural marketing technologies intensified exponentially from 2019 to 2024. Publications surged from two articles in 2019 to a peak of 40 in 2023, a twenty-fold increase over five years (Figure 2).
Figure 2 presents the annual publication frequency: 2019 (n = 2, 2.0%), 2020 (n = 3, 3.0%), 2021 (n = 9, 9.1%), 2022 (n = 12, 12.1%), 2023 (n = 40, 40.4%), 2024 (n = 27, 27.3% through September), and 2025 (n = 6, 6.1% early access). The polynomial trend line indicates exponential growth in 2019–2023, with a slight decline in 2024. The COVID-19 pandemic period (2020–2021) annotation shows research tripling. The peak output in 2023 represents an inflection point accounting for 40.4% of the total corpus.
Year-by-year distribution: 2019 (n = 2) foundational blockchain/IoT studies; 2020 (n = 3), pandemic influence emerging; 2021 (n = 9), hybrid recommendation systems and AI applications accelerating; 2022 (n = 12), continued expansion; 2023 (n = 40), peak output representing 40.4% of the corpus; 2024 (n = 27, through September), slight decline potentially indicating a consolidation phase or publication lag.
The pandemic period (2020–2021) catalyzed research tripling (3→9 publications), reflecting urgent responses to disrupted traditional marketing channels. The most dramatic growth occurred in 2022–2023 (with publications more than tripling), suggesting technological maturation enabling widespread application studies.

3.1.2. Geographic Distribution and Institutional Contributions

Geographic analysis by first author affiliation reveals pronounced regional concentration. Figure 3 and Table 1 present distribution patterns.
As shown in Figure 3, publications by author country/region reveal pronounced concentration patterns. Asia: 57 studies (57.6%); Europe: 17 (17.2%); Africa: 14 (14.1%); North America: 5 (5.1%); Latin America: 5 (5.1%); Oceania: 1 (1.0%). Leading contributors: India (28 studies, 28.3%), China (12, 12.1%), UK (5, 5.1%), USA (5, 5.1%). n = 99 studies across 35 countries.
Regional distribution:
Regional distribution shows Asia dominance (57.6%, n = 57), led by India (28 studies, 28.3%) and China (12, 12.1%). Europe contributes 17.2% (n = 17), with UK (five), Spain (three), and Italy (three) leading. Africa represents 14.1% (n = 14) across nine countries: South Africa (three), Morocco (two), Egypt (two), Somalia (two), and five countries with single publications. North America (5.1%, n = 5, USA only) and Latin America (5.1%, n = 5 across Peru, Brazil, Chile, Colombia) show equivalent contributions. Oceania contributes 1.0% (n = 1, Australia).
India’s 28.3% share constitutes the largest single-country concentration. Three countries (India, China, USA) account for 45.5% of total output. Africa’s 14.1% contribution across nine nations indicates distributed research activity. Europe’s 17.2% proportion is lower than typical in agricultural technology reviews.

3.1.3. Journal Quality and Citation Impact

The publications span 67 distinct journals and proceedings. Table 2 presents the 15 most frequent venues.
Among 50 articles with detailed journal metrics: Q1 journals (n = 27, 54%), Q2 (n = 11, 22%), Q3 (n = 7, 14%), Q4 (n = 3, 6%), conference proceedings (n = 2, 4%). Combined Q1-Q2 representation: 76% of categorized publications.
MDPI journals account for 29 articles (29.3%). The most frequent venues include Sustainability (12), IEEE Access (8), Sensors (6), Agricultural Systems (5), and Computers and Electronics in Agriculture (5). The top five journals contain 32 articles (32.3%). Open-access venues represent >34% of the corpus. Impact factors range from 1.06 to 4.7 (median: 3.1).
Geographic concentration in publication venues: Swiss-based journals (predominantly MDPI) account for 29 articles (29.3%); UK-based journals contribute seven articles (7.1%); and USA-based, five articles (5.1%).

3.2. Research Focus and Thematic Analysis (RQ1)

Analysis of 99 article titles and abstracts identified seven thematic clusters addressing different dimensions of agricultural marketing optimization (Figure 4):
Figure 4A shows seven primary research themes: Recommendation Systems and Decision Support (n = 28, 28.3%), Blockchain and Supply Chain Traceability (n = 24, 24.2%), Machine Learning and Predictive Analytics (n = 15, 15.2%), IoT and Smart Agriculture (n = 12, 12.1%), Adoption Barriers and Implementation (n = 9, 9.1%), Systematic Reviews and Bibliometric (n = 6, 6.1%), E-commerce and Digital Marketing (n = 5, 5.1%). n = 99 studies.
Thematic Distribution:
Figure 4B shows that recommendation systems and intelligent decision support constitute the largest cluster (28 studies, 28.3%), encompassing crop selection recommendations, fertilizer/pesticide advisory systems, context-aware platforms, and weather-based prediction models. Blockchain and supply chain traceability represent the second largest focus (24 studies, 24.2%), addressing food safety authentication, IoT integration for real-time monitoring, smart contracts, and fraud prevention across grain, oil, and multi-product supply chains.
Machine learning and predictive analytics account for 15 studies (15.2%), focusing on yield prediction, weather-based forecasting, deep learning applications, and explainable AI. IoT and smart agriculture infrastructure represent 12 studies (12.1%), addressing sensor networks, cloud computing integration, LoRa connectivity, and perishable commodity tracking.
Smaller clusters include systematic reviews and bibliometric studies (six studies, 6.1%), adoption barriers and implementation challenges (nine studies, 9.1%), and e-commerce/digital marketing applications (five studies, 5.1%).
Cross-cutting themes: Sustainability focus appears in 12 articles (12.1%), food safety/quality in 8 articles (8.1%), smallholder/developing country contexts in 7 articles (7.1%), and deep learning integration in 11 articles (11.1%). Technology convergence, integrating multiple technologies (Blockchain + IoT; AI + Blockchain), appears in 18 studies (18.2%).

3.2.1. Methodological Approaches

Methodological analysis identified seven primary approaches with varying frequencies and quality characteristics (Figure 5).
Figure 5A shows the methodology distribution: Bibliometric analysis (n = 16, 16.2%, quality score 20.8/25), blockchain prototype development (n = 15, 15.2%, quality 15.4/25), machine learning/data processing (n = 12, 12.1%, quality 17.6/25), IoT/cloud computing (n = 12, 12.1%, quality 17.2/25), prediction/simulation models (n = 10, 10.1%, quality 18.9/25), qualitative case studies (n = 8, 8.1%, quality 21.3/25), and optimization algorithms (n = 6, 6.1%, quality 19.1/25). Quality scores derived from journal quartile rankings and impact factors on 25-point scale. n = 99 studies.
Figure 5B shows the methodological distribution: bibliometric analysis (n = 16, 16.2%) and blockchain prototype development (n = 15, 15.2%) represent the most frequent approaches, followed by machine learning/data processing and IoT/cloud computing (each n = 12, 12.1%). Prediction/simulation models (n = 10, 10.1%), qualitative case studies (n = 8, 8.1%), and optimization algorithms (n = 6, 6.1%) constitute smaller proportions. Computational and technology-focused methodologies account for 73 studies (73.7%).
Figure 5C shows the quality assessment: quality scores range from 15.4 to 21.3 (mean: 18.3/25). Qualitative case studies achieve the highest scores (21.3), followed by bibliometric analyses (20.8). Blockchain prototype development exhibits the lowest scores (15.4), while machine learning and IoT methodologies score moderately (17.6 and 17.2, respectively). The correlation between methodology frequency and quality is r = −0.06, indicating no meaningful association.

3.2.2. Supply Chain Stages Addressed

Analysis of supply chain coverage reveals concentration patterns across the agricultural value chain stages (Figure 6).
Figure 6 presents the distribution of studies across five value chain stages: end-to-end coverage (n = 36, 36.4%), commercialization/distribution (n = 35, 35.4%), pre-production planning (n = 18, 18.2%), post-harvest storage (n = 5, 5.1%), and production (n = 3, 3.0%). Multiple stages are possible per study. n = 99 studies.
Coverage patterns: end-to-end supply chain coverage was claimed by 36 studies (36.4%), while commercialization/distribution-specific focus appears in 35 studies (35.4%). Combined commercialization emphasis (end-to-end + distribution-only) represents 71.8% of the corpus. Pre-production planning was addressed by 18 studies (18.2%), contrasting with a limited production (n = 3, 3.0%) and post-harvest storage (n = 5, 5.1%) focus. Production and storage stages combined represent only 8.1% of research.
The supply chain stage distribution exposes a fundamental research imbalance: sophisticated commercialization optimization tools developed while neglecting production realities and infrastructural prerequisites determining deployment feasibility. This represents a conceptual misalignment—treating agricultural marketing primarily as an information/coordination problem solvable through digital platforms, while underweighting material constraints (production capacity, storage infrastructure, transportation networks) that digital solutions cannot address. Until the research bridges the production–marketing divide, the practical impact of agricultural marketing technologies will remain limited to contexts where upstream constraints are already resolved—precisely where optimization interventions are least needed.

3.3. Technology Landscape and Applications (RQ1, RQ2)

This section provides an in-depth examination of the specific technologies employed in agricultural marketing optimization, analyzing their applications, performance metrics, and implementation contexts. The analysis addresses RQ1 (technology prevalence and applications) and RQ2 (performance indicators and documented impacts).
Figure 7 shows the technology category distribution across 99 studies: Others/Hybrid (35%), AI/ML (17%), Recommendation Systems (16%), Data Visualization (11%), Blockchain (7%), Theoretical Modeling (9%), IoT (5%). AI/ML and Recommendation Systems combined represent 33% of technological focus. n = 99 studies; categories not mutually exclusive as studies may employ multiple technologies.
Technology distribution: Others/Hybrid approaches account for 35%, indicating diverse technological combinations not fitting standard classifications. AI/ML represents 17% of studies, Recommendation Systems, 16%, together constituting 33% of the corpus. Data Visualization (11%), Theoretical Modeling (9%), Blockchain (7%), and IoT (5%) represent smaller proportions.

3.3.1. Technology Readiness Level and Research Volume

Technology maturity assessment using the NASA TRL scale (1 = basic principles, 9 = operational deployment) reveals varied implementation readiness across categories.
Figure 8A is a scatter plot positioning technologies by research volume (bubble size, n = 5–32 studies) and Technology Readiness Level (y-axis, 1–9 scale). Big Data/Cloud: TRL 7.0, n = 10; IoT Sensors: TRL 6.0, n = 12; AI/ML: TRL 5.0, n = 32; Recommendation Systems: TRL 5.0, n = 28; Blockchain: TRL 4.0, n = 15; Augmented Reality: TRL 3.0, n = 5; Digital Twins: TRL 3.5, n = 2. Figure 8B is a bar chart showing the percentage of studies per technology that include economic validation (cost–benefit analysis, ROI calculation, or farmer affordability assessment). Big Data/Cloud: 20%; IoT: 17%; Blockchain: 7%; Recommendation Systems: 7%; AI/ML: 6%; AR/Digital Twins: 0%. Corpus mean: 9.1% economic validation coverage. n = 99 studies.
Maturity distribution: Big Data/Cloud platforms achieve highest maturity (TRL 7.0, n = 10 studies), followed by IoT Sensors (TRL 6.0, n = 12). AI/ML (TRL 5.0, n = 32) and Recommendation Systems (TRL 5.0, n = 28) represent mid-range maturity with the highest research volume. Blockchain (TRL 4.0, n = 15), Augmented Reality (TRL 3.0, n = 5), and Digital Twins (TRL 3.5, n = 2) exhibit lower maturity levels.
Economic validation: The percentage of studies including cost–benefit analysis, ROI calculation, or farmer affordability assessment varies by technology: Big Data/Cloud (20%), IoT (17%), Blockchain (7%), Recommendation Systems (7%), AI/ML (6%), AR/Digital Twins (0%). Overall corpus: nine studies (9.1%) include economic validation.

3.3.2. Technology Viability Assessment

Figure 9 positions technologies according to Technical Performance Scores (based on reported metrics, TRL, and deployment evidence) and Economic Feasibility Scores (based on validation presence and cost data).
The label “High Technical Performance but Limited Viability” in the upper-left quadrant refers to technologies that demonstrate strong technical performance (e.g., high accuracy, reliability, or functional completeness) but show low economic feasibility due to high implementation costs, limited infrastructure availability, or insufficient evidence of cost-effectiveness. In practical terms, these systems perform well in controlled or pilot environments; however, their scalability and sustained adoption in real agricultural markets remain constrained.
The scatter plot positions seven technology categories according to technical performance (x-axis: 0–100, derived from reported accuracy, reliability, Technology Readiness Level [TRL], and deployment readiness) and economic feasibility (y-axis: 0–100, based on the presence of economic validation, documented implementation costs, and evidence of return on investment). Bubble size represents research volume. The four quadrants are defined as follows: Ideal Zone (high technical performance, high economic feasibility), Efficiency Gap (high technical performance, low economic feasibility), Limited Viability (low technical performance, high economic feasibility), and Problematic (low technical performance, low economic feasibility).
Technology positioning is as follows: Big Data/Cloud (90, 70; n = 10), IoT Sensors (95, 55; n = 12), Recommendation Systems (90, 40; n = 28), AI/ML (85, 35; n = 32), Blockchain (30, 25; n = 15), Augmented Reality (AR) (80, 10; n = 5), and Digital Twins (85, 15; n = 2). Most technologies cluster in the Efficiency Gap quadrant, characterized by high technical performance (80–95) but low economic feasibility (10–55). In total, n = 99 studies were analyzed.
Technology-Specific Profiles
Big Data/Cloud (TRL 7.0, n = 10): Technical performance scores 90/100, with economic feasibility at 70/100. Reported processing time reductions range from 70% to 85%, while cost reductions average approximately 15% compared to traditional approaches. Economic validation is present in 20% of studies, documenting scalability and the effectiveness of pay-as-you-go pricing models.
IoT Sensors (TRL 6.0, n = 12): Technical performance reaches 95/100, with economic feasibility at 55/100. Reported transmission success rates exceed 95%, battery life ranges from 2 to 5 years, and latency remains below 1 second. Economic validation is reported in 17% of studies, primarily documenting spoilage reductions of 10–20% in cold-chain monitoring. Implementation costs range from USD 5,000 to USD 15,000 per vehicle.
Recommendation Systems (TRL 5.0, n = 28): Technical performance is rated at 90/100, while economic feasibility is 40/100. Accuracy ranges from 80% to 92%, with F1-scores between 0.78 and 0.89. Economic validation is limited to 7% of studies. Field trials report farmer acceptance rates between 64% and 72%.
AI/ML (TRL 5.0, n = 32): Technical performance is assessed at 85/100, with economic feasibility at 35/100. Crop prediction accuracy ranges from 85% to 95%, while yield prediction performance shows R2 values between 0.82 and 0.94 and RMSE values between 0.12 and 0.35 tons per hectare. Economic validation appears in only 6% of studies. Validation approaches include k-fold cross-validation (69%) and temporal or geographic testing (31%).
Blockchain (TRL 4.0, n = 15): Blockchain technologies score 30/100 in technical performance and 25/100 in economic feasibility. Fraud detection accuracy ranges from 87% to 96%, transaction latency remains below 2 s, and throughput varies between 180 and 850 transactions per second. All implementations remain at the prototype or pilot stage, with no sustained operational deployments exceeding 12 months. Initial implementation costs range from USD 50,000 to USD 200,000, with ongoing monthly costs between USD 800 and USD 3000. Economic validation is reported in 7% of studies.
Emerging Technologies (AR and Digital Twins): AR (TRL 3.0, n = 5) and Digital Twins (TRL 3.5, n = 2) demonstrate technical performance levels between 80 and 85/100 but very low economic feasibility (10–15/100). AR-based training systems report training time reductions of approximately 20%. No studies provide formal economic validation. Set-up costs range from USD 25,000 to USD 100,000.
Quadrant Distribution
No technologies fall within the Ideal Zone (high technical performance and high economic feasibility). The Efficiency Gap accounts for 57% of studies, including AI/ML, Recommendation Systems, AR, and Digital Twins. The Limited Viability quadrant represents 14% of studies, primarily Big Data/Cloud technologies approaching the ideal threshold. The Problematic quadrant comprises 29% of studies, dominated by Blockchain solutions and, to a lesser extent, IoT implementations.

3.4. Implementation Maturity and Technology Readiness (RQ3)

This section assesses implementation maturity using Technology Readiness Level (TRL) frameworks, evaluating progression from conceptual validation toward operational deployment.

3.4.1. Technology Readiness Level Distribution: Implementation Maturity Assessment

TRL assessment employed NASA’s nine-level framework where 1–3 indicates basic research, 4–5 represents technology validation, 6–7 denotes system demonstration, and 8–9 signifies operational deployment.
Figure 10 shows Technology Readiness Levels (TRLs) assessed across seven digital technology categories, revealing substantial maturity heterogeneity. Error bars represent TRL range variability across studies within each technology category.
Bar chart with error bars showing mean TRL scores across seven technology categories. Big Data/Cloud: TRL 7.0 ± 0.8 (n = 10); IoT Sensors: TRL 6.0 ± 1.1 (n = 12); AI/ML: TRL 5.0 ± 0.9 (n = 32); Recommendation Systems: TRL 5.0 ± 0.7 (n = 28); Blockchain: TRL 4.0 ± 0.6 (n = 15); Augmented Reality: TRL 3.0 ± 0.5 (n = 5); Digital Twins: TRL 3.5 ± 0.4 (n = 2). Error bars represent TRL range variability within each category. n = 99 studies. Only Big Data/Cloud and IoT achieve deployment-phase maturity (TRL ≥ 6); the remaining 77 studies (77.8%) report TRL ≤ 5.
TRL distribution: Big Data/Cloud platforms achieve highest maturity (TRL 7.0, n = 10), followed by IoT Sensors (TRL 6.0, n = 12), together representing 22.2% of studies at deployment-phase readiness. AI/ML (TRL 5.0, n = 32) and Recommendation Systems (TRL 5.0, n = 28) occupy mid-range validation levels. Blockchain (TRL 4.0, n = 15), Augmented Reality (TRL 3.0, n = 5), and Digital Twins (TRL 3.5, n = 2) exhibit lower maturity. Technologies at TRL ≤ 5 account for 77.8% of the corpus.

3.4.2. Implementation Stage Distribution

Implementation stages are classified as follows: Prototype (laboratory/controlled testing), Pilot (limited deployment, <50 users, <6 months), Operational (sustained deployment, >50 users, 6–12 months), and Sustained (continuous operation >12 months with farmer self-maintenance).
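To make the stage-coding rule explicit, the following sketch operationalizes these thresholds as a simple classification function; boundary handling (e.g., exactly 50 users or exactly 6 months) is an assumption, since the review does not specify it.

```python
# Illustrative coding rule for the implementation stages defined above.
# Thresholds follow Section 2.7 and Figure 11: Pilot (<50 users, <6 months),
# Operational (>50 users, 6-12 months), Sustained (>12 months).
def implementation_stage(users: int, months: float, field_deployed: bool) -> str:
    if not field_deployed:
        return "Prototype"      # laboratory or controlled-environment testing only
    if months > 12:
        return "Sustained"      # continuous operation with farmer self-maintenance
    if users > 50 and months >= 6:
        return "Operational"
    return "Pilot"              # limited deployment: <50 users or <6 months

print(implementation_stage(users=30, months=4, field_deployed=True))    # Pilot
print(implementation_stage(users=120, months=9, field_deployed=True))   # Operational
print(implementation_stage(users=200, months=18, field_deployed=True))  # Sustained
```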
Figure 11 presents the distribution of implementation stages based on the study methodology classification, revealing a strong concentration in the prototype phase and very limited progression toward operational deployment. The implementation stages are defined as follows: Prototype (laboratory or controlled-environment testing), Pilot (limited real-world deployment involving fewer than 50 users and lasting less than six months), Operational (sustained deployment with more than 50 users for six to twelve months), and Sustained (continuous operation exceeding 12 months with farmer self-maintenance).
The stacked bar chart illustrates the percentage distribution of implementation stages across seven technology categories (n = 99 studies). Blockchain, Augmented Reality, and Digital Twins exhibit a 100% concentration at the prototype stage, with no progression to pilot, operational, or sustained phases. AI/ML technologies show 75% prototype and 25% pilot implementation, with no sustained deployments. Recommendation Systems present a similar pattern, with 68% prototype and 32% pilot, and no sustained operation. IoT Sensors demonstrate slightly higher maturity, with 67% prototype, 25% pilot, and 8% sustained deployment. Big Data/Cloud technologies show the most advanced progression, with 60% prototype, 30% pilot, and 10% sustained deployment.
Overall, only three studies (3.0% of the corpus) achieve sustained operational deployment, highlighting the persistent gap between experimental validation and long-term real-world adoption across the analyzed technologies.

3.4.3. Implementation Barrier Severity

Barrier severity was assessed through systematic analysis of reported limitations and challenges, scored 0–5 (0 = minimal, 5 = deployment-preventing). Six barrier categories were identified: economic validation absence, cost barriers, infrastructure requirements, technical capacity gaps, data quality at origin, and adoption coordination problems.
Figure 12 shows a heatmap of barrier severity scores on a 0–5 scale across six barrier types and seven technologies (n = 99). The absence of economic validation is the most severe barrier, with the highest mean score (4.57/5.0). It reaches critical levels (5.0) for Augmented Reality, Digital Twins, and Blockchain, followed by AI/ML (4.5), Recommendation Systems (4.0), IoT Sensors (3.5), and Big Data/Cloud (3.0).
Cost barriers are the second most critical constraint (mean 4.14), with maximum severity for Blockchain and AR (5.0), high severity for IoT Sensors (4.5), and moderate-to-high severity for Digital Twins (4.0). Infrastructure requirements also represent a significant limitation (mean 3.86), particularly for Blockchain and AR (5.0) and for Digital Twins and IoT Sensors (4.0).
Other barriers show moderate but persistent effects. Technical capacity gaps and data quality issues both present mean severity scores of 3.71, while adoption and coordination challenges show a slightly lower mean (3.43), remaining critical for Blockchain (5.0). The heatmap color scale ranges from green (<2.5) to red (4.5–5.0).
Overall, Figure 12 highlights that insufficient economic validation and high implementation costs are the dominant barriers limiting the transition of digital agricultural technologies from pilots to sustained deployment.

3.4.4. Technology Maturity Evolution Timeline (2019–2025): TRL Progression Trajectories

Temporal TRL progression was reconstructed from publication dates and reported implementation stages, showing field-level maturation trajectories.
Figure 13 shows the TRL progression timelines reconstructed from the publication dates and implementation stages reported in the 99 studies. The TRL for each technology is estimated annually using the weighted average of the studies published that year, providing a longitudinal perspective on maturation dynamics. The trajectories represent a field-level consensus on each technology’s readiness rather than the progression of individual systems.
The timeline exposes maturation stagnation contradicting expected innovation trajectories. Three technologies—Blockchain, Augmented Reality, and Digital Twins—plateau at TRL 3–4 since 2022–2023, indicating no substantive maturation advancement despite continued research investment. Blockchain particularly illustrates this pattern: TRL progression from 2.5 (2019) to 4.0 (2022) followed by 3-year stagnation at TRL 4.0 (2022–2025) despite 15 studies and intensive European research funding. This suggests fundamental barriers beyond incremental technical refinement.
AI/ML and Recommendation Systems demonstrate modest progression to TRL 5.0 by 2023, then plateau, indicating that validation in relevant environments is achieved but operational demonstration remains elusive. The two-year stagnation (2023–2025), despite these being the most-studied technologies (32 and 28 studies, respectively), reveals maturation limits. These technologies face different constraints than early-stage systems: technical capability exists, but adoption barriers (farmer capital constraints, risk aversion, explainability requirements) prevent operational deployment. Continued algorithm optimization research yields diminishing returns without addressing these non-technical constraints.
Only Big Data/Cloud demonstrates sustained progression from TRL 5.5 (2019) to 7.0 (2023), achieving deployment-phase maturity. However, even this trajectory plateaus post-2023, suggesting that, while infrastructure-as-a-service models enable operational deployment, limitations remain (subscription costs, connectivity requirements, data sovereignty concerns) preventing progression to widespread adoption. IoT sensors follow similar patterns: steady advancement to TRL 6.0 (2023), then stagnation, indicating demonstration in operational environments is achieved but sustained, self-maintained deployment is uncommon.
No technology exhibits acceleration in maturation rates over time—all show linear or decelerating progression. This contradicts technology lifecycle theory, which predicts that maturation accelerates as knowledge accumulates and best practices emerge. The universal deceleration or stagnation post-2022 suggests systematic field-level barriers rather than technology-specific technical challenges. These may include (1) research funding cycles incentivizing novel prototypes over sustained deployment validation, (2) publication venue preferences for technical innovation over implementation science, and (3) fundamental misalignment between technology capabilities and farmer contexts.
The timeline reveals a field producing technically advancing prototypes that fail to progress to operational viability. The absence of any technology trajectory showing TRL progression >1 level in the past two years (2023–2025) despite 40 publications in 2023 alone indicates that research volume does not translate to implementation readiness. This suggests the field requires strategic reorientation from prototype proliferation toward implementation science—studying adoption processes, economic viability, and sustained operation conditions rather than algorithmic refinement.

3.4.5. Multi-Dimensional Deployment Readiness

Readiness assessment across six dimensions (0–10 scales): Technical Maturity, Economic Viability, Infrastructure Availability, User Readiness, Regulatory Compliance, Scalability Potential.
Figure 14 (seven subplots) shows six-dimensional readiness profiles. Big Data/Cloud (mean 7.17): Technical, 8.0; Infrastructure, 8.0; Scalability, 8.0; Regulatory, 7.0; Economic, 7.0; User, 6.0. IoT (mean 6.67): Technical, 7.0; Regulatory, 7.0; Infrastructure/Economic/User, 6.0; Scalability, 6.5. AI/ML (mean 5.83): Technical, 7.0; Scalability, 7.0; Regulatory, 6.0; User, 5.0; Economic, 4.0; Infrastructure, 6.0. Recommendations (mean 5.67): Technical, 7.0; User/Regulatory, 6.0; Economic, 4.0; Infrastructure, 6.0; Scalability, 5.0. Blockchain (mean 4.00): all dimensions 3.0–5.0. AR/Digital Twins (mean 3.33–3.50): most dimensions 2.0–4.0. n = 99 studies.
Readiness profiles: Big Data/Cloud achieves the highest mean score (7.17/10) with strengths in technical maturity (8.0), infrastructure (8.0), and scalability (8.0). IoT scores 6.67 mean with technical maturity (7.0) and regulatory compliance (7.0) strengths. AI/ML (5.83 mean) and Recommendation Systems (5.67 mean) show strong technical maturity (7.0) but weak economic viability (4.0). Blockchain averages 4.00 across dimensions. AR and Digital Twins exhibit lowest readiness (3.33–3.50 mean) with Economic Viability Scores of 2.0.
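As an illustration of how one such readiness profile could be rendered, the sketch below plots the Big Data/Cloud dimension scores reported above as a polar (radar) chart in matplotlib; the plotting style is an assumption, not a reproduction of Figure 14.

```python
# Sketch of one readiness radar profile using the six dimensions above.
# Scores are the Big Data/Cloud values reported in the text; styling is assumed.
import numpy as np
import matplotlib.pyplot as plt

dims   = ["Technical", "Economic", "Infrastructure", "User", "Regulatory", "Scalability"]
scores = [8.0, 7.0, 8.0, 6.0, 7.0, 8.0]

angles = np.linspace(0, 2 * np.pi, len(dims), endpoint=False).tolist()
angles += angles[:1]            # close the polygon
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"}, figsize=(4, 4))
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dims, fontsize=8)
ax.set_ylim(0, 10)
ax.set_title("Big Data/Cloud readiness profile (0-10)")
fig.tight_layout()
fig.savefig("readiness_radar.png", dpi=300)
```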
Cross-figure analysis reveals three implementation patterns. First, most technologies (77.8%) achieve mid-range TRL (3–5) but do not progress to operational deployment (TRL 6–9). Only three studies (3.0%) document sustained operation beyond 12 months. Second, technologies face multiple concurrent barriers: economic validation absence (mean severity 4.57/5.0), cost constraints (4.14/5.0), and infrastructure requirements (3.86/5.0) occur simultaneously rather than in isolation. Third, TRL trajectories show maturation plateaus post-2022/2023 across all technologies, with no advancement > 1 TRL level during 2023–2024 despite continued research volume (67 publications in this period).
The maturity distribution indicates concentration in prototype/validation stages (TRL 3–5, 77.8% of studies) with limited progression to deployment phases (TRL 6–7, 22.2%) and minimal sustained operational evidence (3.0%). Barrier severity patterns show technology-specific vulnerabilities: Blockchain encounters critical economic and infrastructure constraints (5.0/5.0 both), AI/ML faces economic validation gaps (4.5/5.0), while even mature technologies (Cloud, IoT) exhibit economic feasibility limitations (3.0–3.5/5.0).

3.5. Evaluation of Methodology and Performance Metrics (RQ2)

This section examines evaluation approaches employed across reviewed studies, analyzing metrics used to assess technology performance and implementation effectiveness.

3.5.1. Evaluation Metric Distribution

Of the 99 reviewed studies, 41 (41.4%) reported explicit quantitative or qualitative evaluation metrics. Table 3 categorizes metrics across five methodological dimensions.
Accuracy and performance metrics dominate the evaluation landscape, employing standard machine learning indicators: classification accuracy (reported range 85–99.53%), precision, recall, F1-scores, and Area Under Curve (AUC). Twelve studies report these metrics, predominantly for crop recommendation systems (n = 14), yield prediction models (n = 8), and quality classification applications (n = 3). The reported accuracies cluster in the 85–95% range for agricultural prediction tasks, with exceptional cases achieving 99%+ accuracy on specific classification problems [34,42,44].
Time and efficiency metrics address computational performance and operational improvements: transaction latency (0.8–4.2 s for blockchain systems), throughput (180–850 transactions/s), search time reductions (7.5× faster than traditional methods), and carbon emission reductions in logistics optimization [33,45,46,47]. These metrics demonstrate technical capability improvements but do not establish whether efficiency gains translate into economic benefits for farmers or adoption advantages.
Predictive modeling evaluations employ regression-oriented statistical metrics: coefficient of determination (R2: 0.82–0.94), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and correlation coefficients [48,49,50,51,52,53,54,55,56]. These metrics assess model fitness and prediction accuracy on test datasets, typically evaluating yield forecasting, price prediction, and input optimization models [57,58,59,60,61,62,63,64,65,66].
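To make these metric families concrete, the following sketch shows how the classification and regression indicators reported in the corpus are typically computed with scikit-learn; the labels, probabilities, and yield values are illustrative placeholders rather than data from any reviewed study.

```python
# Illustrative computation of the metric families in Section 3.5.1
# (placeholder data; not values from any reviewed study).
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, r2_score,
                             mean_absolute_error, mean_squared_error)

# Classification metrics (e.g., crop recommendation, quality classification)
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])    # hypothetical labels
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])    # hypothetical predictions
y_prob = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.3, 0.6, 0.85, 0.95])
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_prob))

# Regression metrics (e.g., yield or price prediction)
y_obs = np.array([2.1, 2.8, 3.0, 3.6, 4.2])          # hypothetical yields (t/ha)
y_hat = np.array([2.0, 2.9, 3.2, 3.5, 4.0])
print("R2  :", r2_score(y_obs, y_hat))
print("MAE :", mean_absolute_error(y_obs, y_hat))
print("RMSE:", np.sqrt(mean_squared_error(y_obs, y_hat)))
```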
Figure 15A,B display the distribution of evaluation metrics across the 41 studies (41.4% of the corpus) reporting explicit performance assessment. Categories: Accuracy and Performance (n = 12, 29.3%), Time and Efficiency (n = 10, 24.4%), Predictive Modelling and Statistical Analysis (n = 9, 22.0%), Security and Access Control (n = 5, 12.2%), Qualitative and Bibliometric Analysis (n = 5, 12.2%). Figure 15A shows absolute frequencies; Figure 15B displays the proportional distribution. n = 41 studies with explicit metrics.

3.5.2. Comparative Technology Performance

Table 4 synthesizes reported performance metrics across six technology categories, documenting study counts, primary indicators, performance ranges, and applications.
Performance by technology: Recommendation Systems (n = 28, 28.3%) report accuracy 80–92%, F1-scores 0.78–0.89, and precision/recall > 80% for hybrid agricultural advisory systems combining content-based and collaborative filtering.
AI/ML approaches (n = 20, 20.2%) achieve accuracies 85–95% using ensemble methods (Random Forest, Gradient Boosting, XGBoost), deep learning (CNN, LSTM, RNN), and traditional algorithms (SVM, Decision Trees). Regression models report R2 0.82–0.94, RMSE 0.12–0.35 tons/hectare for yield prediction.
Blockchain implementations (n = 15, 15.2%) demonstrate fraud reductions of 20–30% in digitalized supply chains and transaction latencies < 2 s for IBM Food Trust, Hyperledger Fabric, and Ethereum-based platforms. Verification times fall from 2–3 days to minutes. All implementations remain at the prototype/pilot stage.
IoT sensor networks (n = 12, 12.1%) achieve transmission success > 95%, latency < 1 s using LoRaWAN, NB-IoT, WiFi protocols. Battery life: 2–5 years for low-power protocols. Applications: field monitoring, cold chain tracking, UAV surveillance.
Big Data/Cloud platforms (n = 10, 10.1%) report processing time improvements and 15% cost reductions versus traditional approaches using Hadoop, MapReduce, NoSQL databases.
AR/3D applications (n = 5, 5.1%) report 20% operator training time reductions for equipment visualization using AR.js, Three.js, and OpenCV frameworks.
Figure 16A shows article counts by technology (AR/3D: n = 5; Big Data/Cloud: n = 10; IoT: n = 12; Blockchain: n = 15; AI/ML: n = 20; Recommendation Systems: n = 28). Figure 16B visualizes performance ranges for primary metrics. Reported ranges (AI/ML accuracy, 85–95%; Blockchain fraud reduction, 20–30%; IoT transmission > 95%) reflect heterogeneity across studies using different datasets, algorithms, and validation approaches. n = 90 articles.

3.5.3. Key Performance Indicators

Four performance indicators appear frequently across the studies:
Crop prediction accuracy: 85–95% using Random Forest, Gradient Boosting, and CNN architectures. Validation approaches: 69% employ k-fold cross-validation, 31% temporal/geographic testing.
Fraud reduction via Blockchain: up to 30% in digitalized supply chains. All cases represent simulation/prototype testing rather than sustained operational systems.
Traceability time reduction: 2–3 days to minutes for batch verification in IBM Food Trust, Hyperledger platforms.
Recommendation precision/recall: >80% in hybrid agricultural advisory systems.
Figure 17 visualizes four frequently cited metrics. Panel 1: Crop prediction accuracy: 85–95% using deep/ensemble learning. Panel 2: Potential to reduce fraud by up to 30% using blockchain. Panel 3: Traceability time reduction >99% (from 2–3 days to minutes). Panel 4: Recommendation system metrics (accuracy, precision, recall, F1 score) >80%. All metrics represent technical performance under controlled/pilot conditions. n = 99 studies.
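For context on the >99% figure in Panel 3: assuming batch verification drops from 2 days (2880 min) to roughly 5 min (an illustrative value within the "minutes" range reported above, not a figure from a specific study), the relative reduction is $1 - 5/2880 \approx 0.998$, i.e., about 99.8%, consistent with the >99% claim.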

3.5.4. Evaluation Gaps

Three methodological limitations are consistently identified. First, economic validation is largely absent: only nine studies (9.1%) include cost–benefit, ROI, or affordability analyses, while 90.9% assess technical performance without evaluating economic feasibility. This limitation is particularly evident in high-cost technologies such as Blockchain (USD 50,000–200,000) and IoT cold-chain systems (USD 5000–15,000 per vehicle).
Second, user acceptance is rarely assessed. No studies apply formal adoption models (e.g., TAM or UTAUT), and only five studies (5.1%) report farmer participation, with acceptance rates ranging from 64% to 72%.
Third, temporal and geographic validation is limited, as most studies rely on single, cross-sectional datasets without testing performance stability over time, across regions, or under conditions of missing data or sensor failures.
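To illustrate how this third limitation manifests in practice, the sketch below contrasts a random k-fold estimate with a simple temporal hold-out on synthetic data containing seasonal drift; the features, seasons, and model are hypothetical and are not drawn from the reviewed studies.

```python
# Random k-fold vs. temporal hold-out on synthetic data with seasonal drift
# (hypothetical example illustrating the evaluation gap discussed above).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 5))                    # hypothetical agro-climatic features
season = np.arange(n) // 100                   # three consecutive seasons: 0, 1, 2
y = X[:, 0] + 0.5 * season + rng.normal(scale=0.3, size=n)  # target drifts by season

model = RandomForestRegressor(n_estimators=100, random_state=0)

# Random k-fold mixes seasons, so the drift is hidden (optimistic estimate).
cv = KFold(n_splits=5, shuffle=True, random_state=0)
print("random k-fold R2    :", cross_val_score(model, X, y, cv=cv, scoring="r2").mean().round(3))

# Temporal hold-out: train on the first two seasons, test on the last one.
train, test = season < 2, season == 2
model.fit(X[train], y[train])
print("temporal hold-out R2:", round(model.score(X[test], y[test]), 3))
```

The random k-fold estimate looks strong because drifted and non-drifted observations appear in both training and test folds, whereas the temporal hold-out exposes the degradation that a deployed model would face in a later season.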
Figure 18 positions technologies by Technical Performance Score (x-axis: 0–100, derived from reported accuracy, reliability, and efficiency) and economic validation coverage (y-axis: percentage of studies with cost–benefit analysis, ROI assessment, or farmer affordability evaluation). Bubble size is proportional to research volume (n = 5–28). Technologies: Big Data/Cloud (90, 20), IoT (95, 17), Recommendation Systems (90, 7), AI/ML (85, 6), Blockchain (30, 7), and AR/3D (80–85, 0). All technologies show high technical performance (75–97) but low economic validation (0–20%). n = 99 studies.
The corpus demonstrates substantial technical capabilities: classification accuracies of 85–95%, efficiency improvements of >99% time reduction, and operational reliability of >95% transmission success. These metrics establish that the technologies function as designed under test conditions.
Economic validation (9.1% of studies), user acceptance assessment (5.1%), and deployment sustainability tracking (minimal) remain systematically limited. Performance metric distribution shows concentration in accuracy/performance (29.3%) and efficiency measures (24.4%), with minimal economic impact assessment. The technical performance-economic validation gap appears across all technology categories, with even the highest-performing technologies (Big Data/Cloud, IoT) showing economic validation in only 17–20% of studies.

3.6. Implementation Barriers and Adoption Opportunities (RQ4)

Systematic content analysis of limitation sections, discussion paragraphs, and explicit barrier assessments across 99 studies identified recurring implementation constraints and frequently cited adoption benefits. This section presents the barriers and opportunity distributions, technology-specific patterns, and relationships between constraints and potential benefits.

3.6.1. Barrier Frequency and Technology-Specific Severity

The analysis identified six primary barriers, each cited in at least 12 of the 99 studies. Lack of technological infrastructure (n = 24, 24.2%) was most frequently cited, followed by limited awareness of technology benefits (n = 20, 20.2%), system interoperability gaps (n = 16, 16.2%), data security/privacy concerns (n = 16, 16.2%), blockchain latency issues (n = 12, 12.1%), and low supply chain partner readiness (n = 12, 12.1%). Additional barriers included high implementation costs (n = 11, 11.1%), lack of technical expertise (n = 9, 9.1%), and cultural resistance (n = 5, 5.1%).
Barrier severity varies by technology category. Infrastructure constraints affect all technologies but exhibit highest severity for Blockchain (requires continuous connectivity for distributed consensus), IoT sensors (need reliable power despite battery capabilities), and cloud platforms (demand stable high-bandwidth connections). Awareness barriers show technology-dependent patterns, with farmer-facing systems (AR/3D, Recommendation Systems) encountering adoption resistance attributed to limited benefit understanding, while backend technologies (AI/ML, Big Data/Cloud) face moderate awareness challenges primarily among system integrators.
Figure 19 presents a heatmap matrix displaying barrier severity across six technology categories (Blockchain, AI/ML, IoT, Recommendation Systems, Big Data/Cloud, AR/3D) and six constraint dimensions (infrastructure, awareness, interoperability, security, latency, partner readiness). Severity is scored 1–5 based on reported implementation challenges (1 = minimal; 5 = deployment-preventing). Infrastructure shows universally high severity (mean = 4.0). Awareness barriers exhibit technology-dependent patterns (AR/3D: 5; Recommendation Systems: 5 vs. Big Data/Cloud: 3). Interoperability and security concentrate in distributed technologies (Blockchain: 5; IoT: 4). n = 99 studies.
Figure 19 reveals differential severity patterns. Infrastructure barriers scored 4–5 across all categories, consistent with geographic distribution (57.6% research from Asia, 14.1% from Africa—regions with limited rural connectivity). Interoperability limitations (severity 2–5) disproportionately affect networked systems requiring multi-stakeholder coordination, particularly blockchain traceability platforms generating value only with supply chain-wide participation.
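A minimal sketch of how the technology-by-barrier severity matrix behind Figure 19 can be tabulated is given below; only the cells quoted in this subsection are filled, the remaining entries are left empty, and the reconstruction is illustrative rather than the authors' actual scoring dataset.

```python
# Partial reconstruction of the technology x barrier severity matrix (Figure 19);
# only cells explicitly quoted in Section 3.6.1 are filled, the rest remain NaN.
import numpy as np
import pandas as pd

technologies = ["Blockchain", "AI/ML", "IoT",
                "Recommendation Systems", "Big Data/Cloud", "AR/3D"]
barriers = ["infrastructure", "awareness", "interoperability",
            "security", "latency", "partner readiness"]
severity = pd.DataFrame(np.nan, index=technologies, columns=barriers)

# Severity is scored 1-5 (1 = minimal; 5 = deployment-preventing).
severity["infrastructure"] = 4.0          # reported as universally high (mean = 4.0)
severity.loc["AR/3D", "awareness"] = 5
severity.loc["Recommendation Systems", "awareness"] = 5
severity.loc["Big Data/Cloud", "awareness"] = 3
severity.loc["Blockchain", "interoperability"] = 5   # one reading of "Blockchain: 5; IoT: 4"
severity.loc["IoT", "security"] = 4

print(severity)
print(severity.mean(axis=0).round(2))     # mean severity per barrier over filled cells
```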

3.6.2. Opportunity Citation Frequency

Six opportunities were each cited in at least 11 studies. Improved product traceability led the mentions (n = 22, 22.2%), followed by real-time decision optimization (n = 19, 19.2%), AI-based personalized recommendations (n = 17, 17.2%), increased supply chain transparency (n = 17, 17.2%), post-harvest loss reduction (n = 14, 14.1%), and automation of logistics/farming processes (n = 11, 11.1%).
Opportunity mentions typically appeared in study abstracts, introductions, and discussion sections as motivations for technology development or potential benefits. Empirical validation of claimed opportunities varied substantially: post-harvest loss reduction included quantitative evidence in six studies (43% of mentions) documenting 10–20% spoilage reduction, while traceability improvements included economic validation (consumer willingness to pay, price premiums) in only one study (4.5% of mentions).
Figure 20 presents a network diagram connecting six barriers (left: infrastructure, 24%; awareness, 20%; interoperability, 16%; security, 16%; latency, 12%; partner readiness, 12%) to six opportunities (right: traceability, 22%; real-time optimization, 19%; AI recommendations, 17%; transparency, 17%; loss reduction, 14%; automation, 11%). Lines indicate which barriers prevent specific opportunities: infrastructure blocks traceability, optimization, and transparency; awareness impedes AI recommendations; interoperability prevents traceability and transparency. Percentages denote the proportion of the 99 studies; the categories are not mutually exclusive.
Figure 20 maps relationships between constraints and potential benefits. The most frequently cited opportunities (traceability, 22%; real-time optimization, 19%; transparency, 17%) connect to highest severity barriers (infrastructure, 24%; awareness, 20%), indicating alignment challenges between research priorities and implementation prerequisites.

3.6.3. Barrier Reporting Patterns

Comparison of barrier mention frequency against deployment impact (estimated from field trial outcomes and adoption data where available) reveals reporting variation. Infrastructure deficits (mentioned in 24% of studies; high deployment impact) and economic viability concerns (mentioned in 11% of studies; high deployment impact based on cost data) show different reporting–severity relationships.
Figure 21 shows a scatter plot positioning deployment barriers by reporting frequency (x-axis: % of studies) and estimated deployment severity (y-axis: 0–100, where 100 indicates deployment prevention), based on n = 99 studies. Severity estimates are derived from deployment outcomes and field trial evidence. The quadrants represent underreported critical barriers (upper-left), appropriately emphasized barriers (upper-right), frequently mentioned but moderate barriers (lower-right), and minor barriers (lower-left). Bubble size reflects reporting frequency.
Infrastructure barriers appear in the upper-right quadrant (24% of studies; severity 95/100), while economic viability is positioned in the upper-left quadrant, indicating high severity (90/100) but low reporting frequency (11%). Economic viability issues—such as cost–benefit balance and farmer affordability—are discussed in only 11 studies, despite high implementation costs (Blockchain: USD 50,000–200,000; IoT cold chain: USD 5000–15,000 per vehicle) and the widespread absence of economic validation (90.9% of studies). Technical expertise gaps are similarly underreported (9.1%), despite field evidence showing reliance on external technical support.

3.6.4. Opportunity Feasibility Distribution

Opportunities vary in implementation feasibility based on technical maturity (TRL levels, performance metrics) and economic validation (cost–benefit evidence, ROI documentation). Technical feasibility ranges from 50 to 80/100 (scale based on TRL and deployment readiness); economic feasibility ranges from 20 to 55/100 (scale based on validation presence).
Figure 22 positions opportunities by technical feasibility (x-axis: 0–100, based on TRL and performance) and economic feasibility (y-axis: 0–100, based on validation evidence). Bubble size = citation frequency. Quadrants: upper-right (viable), lower-right (high technical/low economic), upper-left (technical barriers), lower-left (not feasible). Most opportunities cluster in the lower-right: traceability (60, 25), real-time optimization (75, 35), AI recommendations (80, 40), transparency (65, 30), automation (50, 20). Post-harvest loss reduction approaches the upper-right (70, 55), with quantitative validation in six studies. n = 99 studies.
Post-harvest loss reduction (technical, 70; economic, 55) demonstrates relatively balanced feasibility, distinguished by economic validation in 43% of mentions (6/14 studies) documenting spoilage reduction and cost savings. Other opportunities show higher technical than economic feasibility: traceability (60 vs. 25), real-time optimization (75 vs. 35), and AI recommendations (80 vs. 40), reflecting the pattern documented in Section 3.5, where technical performance metrics dominate evaluation while economic validation remains limited.

4. Discussion

Despite reported accuracy of 85–95% for crop predictions, 80–92% for recommendation systems, and 20–30% blockchain fraud reduction, only 3% of the 99 reviewed studies achieved sustained operational deployment exceeding 12 months. This efficiency paradox—technical excellence without deployment viability—reveals a fundamental misalignment between computational research priorities and agricultural implementation requirements.
Previous systematic reviews addressing digital innovation in agriculture have primarily focused on either single-technology perspectives or production-side applications, rather than commercialization processes. For example, several studies examined the adoption of AI and IoT in crop monitoring and smart farming but did not assess downstream market integration or economic viability at the farmer level [67,68]. Similarly, ref. [69] reviewed blockchain applications in agri-food traceability but did not assess technology readiness or sustained deployment factors. Only a limited number of reviews, such as [70], considered agribusiness or market access dimensions, yet these studies focused on conceptual frameworks rather than empirical implementation evidence. In contrast, the present review uniquely synthesizes algorithmic performance, implementation maturity (via TRL analysis), and economic feasibility across Blockchain, AI/ML, IoT, and recommendation systems within agricultural marketing contexts. This comparative positioning highlights a critical research gap: the existing literature often demonstrates technical capability but rarely evaluates operational continuity, adoption behavior, or long-term market impact. The findings of this review therefore extend the scope of prior syntheses by emphasizing the validation–deployment gap as a structural feature across technologies and geographic contexts.
Technology Readiness Level assessment exposes severe maturation gaps: 77.8% of studies report technologies at validation stages (TRL ≤ 5), only 22.2% achieve deployment maturity (TRL ≥ 6), and 90.9% lack economic validation from farmer perspectives. This pattern contradicts innovation diffusion theory [70], which predicts that research investment accelerates maturation. Instead, the most-studied technologies (AI/ML n = 32, Recommendations n = 28) exhibit mid-range maturity (TRL 5.0), while less-studied technologies (Cloud n = 10, IoT n = 12) demonstrate superior deployment readiness, suggesting research gravitates toward algorithmically tractable problems amenable to computational publication rather than implementation-critical challenges.
Three systemic failures prevent technology translation. First, validation–deployment mismatch: research optimizes publication-ready metrics (test set accuracy) rather than deployment-critical dimensions (robustness to missing data, economic returns). While 69% of AI/ML studies employ k-fold cross-validation appropriate for algorithm comparison, only 31% conduct temporal/geographic generalization testing necessary for deployment readiness. Field trials (five studies, 5.1%) reveal farmers accept only 64–72% of recommendations, rejecting 28–36% due to capital constraints—a gap invisible in laboratory validation. Second, economic viability neglect: available cost data reveal prohibitive economics (Blockchain: USD 50,000–200,000; IoT: USD 5000–15,000) exceeding smallholder budgets (median income: USD 500–3000), yet 90.9% of studies omit cost–benefit analysis. Third, the infrastructure-equity paradox: severe infrastructure barriers (mean severity 3.86/5.0) disproportionately affect contexts where technologies could generate greatest impact, with 57.6% of research originating from well-resourced Asian contexts shaping assumptions that fail in smallholder settings.
Quality assessment reveals troubling patterns: blockchain prototype development (15.2% of studies) achieves the lowest quality rating (15.4/25 points), while qualitative case studies (8.1%) score highest (21.3/25)—a negative correlation (r = −0.06) reflecting an accessibility bias whereby researchers develop models using public datasets (47% of AI/ML studies) and publish rapidly, lowering barriers for methodologically weak work. Research conducted predominantly by computer scientists (62% of publications in CS/engineering journals versus 23% in agricultural sciences) optimizes computational metrics while neglecting agricultural economics dimensions, as evidenced by the concentration of evaluation metrics in accuracy/performance (29.3%) and efficiency (24.4%) rather than marketing outcomes.
The field requires reorientation from technology-first logic (develop solutions, seek applications) toward problem-first approaches beginning with farmer-identified constraints. Multi-tier evaluation standards must replace current practice: progressing from technical validation (Tier 1, current focus) through contextual validation (Tier 2, largely absent), economic validation (Tier 3, 9.1% present), adoption validation (Tier 4, 5.1% present), to impact validation (Tier 5, entirely absent). Without this evolution, computational research risks producing technically sophisticated solutions to problems farmers do not prioritize, at costs they cannot afford, requiring infrastructure they lack.

5. Conclusions

This systematic review of 99 studies (2019–2025) reveals an efficiency paradox in Agriculture 4.0 technologies for agricultural marketing: despite 85–95% crop prediction accuracies, 80–92% recommendation system precision, and 20–30% blockchain fraud reduction, only 3% achieved sustained operational deployment exceeding 12 months. This disconnect between computational performance and field viability indicates fundamental misalignment between algorithmic research priorities and implementation requirements.

5.1. Key Contributions

This review provides three distinctive contributions. First, multi-dimensional maturity assessment demonstrates 77.8% of technologies remain at validation stages (TRL ≤ 5), only 22.2% achieve deployment readiness (TRL ≥ 6), and 90.9% lack economic validation—exposing systematic progression barriers beyond technical refinement. Second, methodology analysis reveals frequency–quality inversion: blockchain prototypes (15.2% of studies) achieve the lowest quality scores (15.4/25), while qualitative case studies (8.1%) score highest (21.3/25), with negative correlation (r = −0.06) indicating popular computational approaches do not generate superior evidence. Third, validation pattern analysis shows 69% of AI/ML studies employ k-fold cross-validation appropriate for algorithm comparison but insufficient for deployment readiness, while only 31% conduct temporal/geographic generalization testing, and field trials reveal 28–36% recommendation rejection rates invisible in laboratory validation.

5.2. Implications for Computational Research

The field requires methodological reorientation toward multi-tier evaluation: beyond technical validation (Tier 1, current focus) to contextual validation under deployment conditions (Tier 2), economic validation with cost–benefit analysis (Tier 3, 9.1% present), adoption validation tracking sustained use (Tier 4, 5.1% present), and impact validation measuring welfare changes (Tier 5, absent). Progression through these tiers should become a publication standard for technologies at TRL ≥ 5.
Research conducted predominantly by computer scientists (62% of publications in CS/engineering journals versus 23% in agricultural sciences) optimizes computational metrics while neglecting implementation dimensions, evidenced by evaluation concentration in accuracy/performance (29.3%) and efficiency (24.4%) rather than outcome measures. Innovation logic must shift from the technology-first approaches (develop solutions, seek applications) evident in emerging technologies at TRL 3.0–3.5 with zero economic validation, toward problem-first frameworks that begin with constraint identification and then select contextually appropriate solutions.

5.3. Critical Research Directions

Three computational research priorities emerge: (1) validation methodology development establishing protocols for temporal stability testing, geographic generalization assessment, and robustness evaluation under deployment conditions; (2) implementation science investigating universal post-2022 TRL stagnation across technologies and mechanisms enabling the 3% achieving sustained operation; (3) appropriate algorithm design optimizing for deployment constraints (missing data tolerance, computational efficiency, maintainability) rather than benchmark performance maximization.
Agriculture 4.0 technologies have demonstrated computational capability; the critical challenge is translating this into deployment viability through methodological maturation prioritizing field robustness over laboratory metrics.

Author Contributions

Introduction, M.E.A.-O.; methodology, M.A.-C. and M.B.B.; validation, M.E.A.-O.; formal analysis, M.E.A.-O. and M.A.-C.; investigation, M.E.A.-O., M.B.B. and M.A.-C.; resources, M.E.A.-O.; data curation, M.E.A.-O. and M.B.B.; writing—original draft preparation, M.E.A.-O.; writing—review and editing, M.A.-C. and M.E.A.-O.; visualization, M.E.A.-O.; supervision, M.B.B. and M.A.-C.; project administration, M.B.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research corresponds to the doctoral thesis of a student who received a scholarship from the Ministry of Science and Technology of the Republic of Colombia through the Bicentennial Scholarship Program, third cohort. Beyond this scholarship, the authors received no funds, grants, or other support from any organization for this study or the preparation of this manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Composite Score Construction Method

To enable consistent comparison across heterogeneous technologies and study designs, composite indices were constructed for the Technical Performance Score and the Economic Viability Score. Each index aggregates multiple reported indicators into a standardized 0–100 scale.
Let $x_{ij}$ represent the value of indicator $j$ for technology category $i$. Each indicator was normalized to a 0–1 interval using min–max scaling:

$$x'_{ij} = \frac{x_{ij} - \min(x_j)}{\max(x_j) - \min(x_j)}$$

The Technical Performance Score aggregates normalized values for accuracy, reliability, latency, throughput, and functional completeness:

$$\mathrm{TPS}_i = \left( \frac{1}{m} \sum_{j=1}^{m} x'_{ij} \right) \times 100$$

where $m$ is the number of performance indicators available in each study.

The Economic Viability Score aggregates normalized values for implementation cost (inverted), infrastructure accessibility, presence of cost–benefit analysis, and reported return-on-investment evidence:

$$\mathrm{EVS}_i = \left( \frac{1}{n} \sum_{k=1}^{n} x'_{ik} \right) \times 100$$

where $n$ corresponds to the number of reported economic indicators.
To avoid introducing arbitrary bias, equal weighting was applied across indicators due to the absence of validated weighting schemes in the existing agricultural digitalization literature. The scores represent median composite values per technology category to minimize sensitivity to outlier studies. Bubble sizes in Figure 9 and Figure 14 reflect the number of studies contributing to each category-level estimate.
This procedure does not imply economic profitability or deployment readiness; instead, it provides a comparative analytic framework to visualize variation in maturity, feasibility, and the observed performance–viability gap across technologies.
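As a computational companion to the formulas above, the sketch below implements the min–max normalization and equal-weight aggregation for a composite score; the indicator names and values are illustrative placeholders, not data extracted from the corpus. The Economic Viability Score follows the same procedure, with implementation cost entered as an inverted indicator so that lower cost maps to a higher normalized value.

```python
# Equal-weight composite score construction (Appendix A), illustrated with
# placeholder indicator values; not data extracted from the reviewed studies.
import numpy as np

def min_max_normalize(X):
    """Scale each column (indicator) of X to the [0, 1] interval."""
    X = np.asarray(X, dtype=float)
    col_min, col_max = X.min(axis=0), X.max(axis=0)
    return (X - col_min) / (col_max - col_min)

def composite_score(X):
    """Equal-weight mean of normalized indicators, rescaled to 0-100."""
    return min_max_normalize(X).mean(axis=1) * 100

# Rows = studies (or category-level aggregates); columns = indicators such as
# accuracy (%), reliability (%), inverse latency, throughput, completeness.
indicators = np.array([
    [90, 95, 0.8, 500, 0.9],   # hypothetical study A
    [85, 92, 0.5, 300, 0.7],   # hypothetical study B
    [70, 80, 0.2, 100, 0.5],   # hypothetical study C
])
print(np.round(composite_score(indicators), 1))   # Technical Performance Scores (0-100)
```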

References

  1. FAO. The State of Food and Agriculture 2019: Moving Forward on Food Loss and Waste Reduction; FAO: Rome, Italy, 2019. [Google Scholar]
  2. Gustavsson, J.; Cederberg, C.; Sonesson, U.; Van Otterdijk, R.; Meybeck, A. Global Food Losses and Food Waste: Extent, Causes and Prevention; FAO: Rome, Italy, 2011. [Google Scholar]
  3. Barrett, C.B. Agricultural markets and development. In Annual Review of Resource Economics; Annual Reviews: Palo Alto, CA, USA, 2021; Volume 13, pp. 1–24. [Google Scholar]
  4. Reardon, T.; Echeverria, R.; Berdegué, J.; Minten, B.; Liverpool-Tasie, S.; Tschirley, D.; Zilberman, D. Rapid transformation of food systems in developing regions: Highlighting the role of agricultural research & innovations. Agric. Syst. 2019, 172, 47–59. [Google Scholar] [CrossRef]
  5. Shepherd, A.W. Understanding Agricultural Marketing. Food and Agriculture; Organization of the United Nations (FAO): Rome, Italy, 2019. [Google Scholar]
  6. Vorley, B. Food, Inc. Corporate Concentration from Farm to Consumer; International Institute for Environment and Development (IIED): London, UK, 2018. [Google Scholar]
  7. Poulton, C.; Dorward, A.; Kydd, J. The future of small farms: New directions for services, institutions, and intermediation. World Dev. 2010, 38, 1413–1428. [Google Scholar] [CrossRef]
  8. Rose, D.C.; Wheeler, R.; Winter, M.; Lobley, M.; Chivers, C.A. Agriculture 4.0: Making it work for people, production, and the planet. Land Use Policy 2021, 100, 104933. [Google Scholar] [CrossRef]
  9. Wolfert, S.; Ge, L.; Verdouw, C.; Bogaardt, M.J. Big Data in Smart Farming—A review. Agric. Syst. 2017, 153, 69–80. [Google Scholar] [CrossRef]
  10. Saiz-Rubio, V.; Rovira-Más, F. From Smart Farming towards Agriculture 5.0: A Review on Crop Data Management. Agronomy 2020, 10, 207. [Google Scholar] [CrossRef]
  11. Bacco, M.; Barsocchi, P.; Ferro, E.; Gotta, A.; Ruggeri, M. The digitisation of agriculture: A survey of research activities on smart farming. Array 2019, 3, 100009. [Google Scholar] [CrossRef]
  12. Klerkx, L.; Jakku, E.; Labarthe, P. A review of social science on digital agriculture, smart farming and agriculture 4.0: New contributions and a future research agenda. NJAS Wagening. J. Life Sci. 2019, 90, 100315. [Google Scholar] [CrossRef]
  13. Kamilaris, A.; Fonts, A.; Prenafeta-Boldύ, F.X. The rise of blockchain technology in agriculture and food supply chains. Trends Food Sci. Technol. 2019, 91, 640–652. [Google Scholar] [CrossRef]
  14. Tian, F. A supply chain traceability system for food safety based on HACCP, blockchain & Internet of Things. In Proceedings of the 2017 International Conference on Service Systems and Service Management, Dalian, China, 16–18 June 2017. [Google Scholar]
  15. Verdouw, C.; Tekinerdogan, B.; Beulens, A.; Wolfert, S. Digital twins in smart farming. Agric. Syst. 2021, 189, 103046. [Google Scholar] [CrossRef]
  16. Song, C.; Dong, H. Application of Intelligent Recommendation for Agricultural Information: A Systematic Literature Review. IEEE Access 2021, 9, 153616–153632. [Google Scholar] [CrossRef]
  17. Van Evert, F.K.; Fountas, S.; Jakovetic, D.; Crnojevic, V.; Travlos, I.; Kempenaar, C. Big Data for weed control and crop protection. Weed Res. 2017, 57, 218–233. [Google Scholar] [CrossRef]
  18. Patrício, D.I.; Rieder, R. Computer vision and artificial intelligence in precision agriculture for grain crops: A systematic review. Comput. Electron. Agric. 2018, 153, 69–81. [Google Scholar] [CrossRef]
  19. Fielke, S.; Taylor, B.; Jakku, E. Digitalisation of agricultural knowledge and advice networks: A state-of-the-art review. Agric. Syst. 2020, 180, 102763. [Google Scholar] [CrossRef]
  20. Liakos, K.G.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine learning in agriculture: A review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef]
  21. Caraka, R.E.; Chen, R.C.; Toharudin, T.; Gio, P.U.; Pardamean, B. How big data and machine learning impact agriculture. J. Big Data 2021, 8, 1–28. [Google Scholar]
  22. Critical Appraisal Skills Programme (CASP). CASP Qualitative Checklist, 2018. Available online: https://casp-uk.net/casp-tools-checklists/ (accessed on 8 November 2025).
  23. Baas, J.; Schotten, M.; Plume, A.; Côté, G.; Karimi, R. Scopus as a curated, high-quality bibliometric data source for academic research in quantitative science studies. Quant. Sci. Stud. 2020, 1, 377–386. [Google Scholar] [CrossRef]
  24. Birkle, C.; Pendlebury, D.A.; Schnell, J.; Adams, J. Web of Science as a data source for research on scientific and scholarly activity. Quant. Sci. Stud. 2020, 1, 363–376. [Google Scholar] [CrossRef]
  25. IEEE. IEEE Xplore Digital Library 2024. Available online: https://ieeexplore.ieee.org/ (accessed on 8 November 2025).
  26. McHugh, M.L. Interrater reliability: The kappa statistic. Biochem. Medica 2012, 22, 276–282. [Google Scholar] [CrossRef]
  27. Annane, B.; Alti, A.; Lakehal, A. A Blockchain Semantic-based Approach for Secure and Traceable Agri-Food Supply Chain. Eng. Technol. Appl. Sci. Res. 2024, 14, 18131–18137. [Google Scholar] [CrossRef]
  28. Bhatia, S.; Albarrak, A.S. A Blockchain-Driven Food Supply Chain Management Using QR Code and XAI-Faster RCNN Architecture. Sustainability 2023, 15, 2579. [Google Scholar] [CrossRef]
  29. Thilakarathne, N.N.; Bakar, M.S.A.; Abas, P.E.; Yassin, H. A Cloud Enabled Crop Recommendation Platform for Machine Learning-Driven Precision Farming. Sensors 2022, 22, 6299. [Google Scholar] [CrossRef]
  30. Shinde, A.V.; Patil, D.D.; Tripathi, K.K. A Comprehensive Survey on Recommender Systems Techniques and Challenges in Big Data Analytics with IOT Applications. Rev. Gestão Soc. Ambient. 2024, 18, e05195. [Google Scholar] [CrossRef]
  31. Rslan, E.; Khafagy, M.H.; Ali, M.; Munir, K.; Badry, R.M. AgroSupportAnalytics: Big data recommender system for agricultural farmer complaints in Egypt. Int. J. Electr. Comput. Eng. 2023, 13, 746–755. [Google Scholar] [CrossRef]
  32. Kamatchi, S.B.; Parvathi, R. Improvement of Crop Production Using Recommender System by Weather Forecasts. Procedia Comput. Sci. 2019, 165, 724–732. [Google Scholar] [CrossRef]
  33. Bhola, A.; Kumar, P. ML-CSFR: A Unified Crop Selection and Fertilizer Recommendation Framework based on Machine Learning. Scalable Comput. Pr. Exp. 2024, 25, 4111–4127. [Google Scholar] [CrossRef]
  34. Kiruthika, S.; Karthika, D. IOT-BASED professional crop recommendation system using a weight-based long-term memory approach. Meas. Sens. 2023, 27, 100722. [Google Scholar] [CrossRef]
  35. Bouni, M.; Hssina, B.; Douzi, K.; Douzi, S. Towards an Efficient Recommender Systems in Smart Agriculture: A deep reinforcement learning approach. Procedia Comput. Sci. 2022, 203, 825–830. [Google Scholar] [CrossRef]
  36. Paithane, P.M. Random forest algorithm use for crop recommendation. J. Eng. Technol. Ind. Appl. 2023, 9, 34–41. [Google Scholar] [CrossRef]
  37. Zhou, Y.; Hua, S. Recommendation of Business Models for Agriculture-Related Platforms Based on Deep Learning. Comput. Intell. Neurosci. 2022, 2022, 7330078. [Google Scholar] [CrossRef]
  38. Fayyaz, Z.; Ebrahimian, M.; Nawara, D.; Ibrahim, A.; Kashef, R. Recommendation systems: Algorithms, challenges, metrics, and business opportunities. Appl. Sci. 2020, 10, 7748. [Google Scholar] [CrossRef]
  39. Guixia, X.; Samian, N.; Faizal, M.F.M.; As’ad, M.A.Z.M.; Fadzil, M.F.M.; Abdullah, A.; Seah, W.K.G.; Ishak, M.; Hermadi, I. A Framework for Blockchain and Internet of Things Integration in Improving Food Security in the Food Supply Chain. J. Adv. Res. Appl. Sci. Eng. Technol. 2024, 34, 24–37. [Google Scholar] [CrossRef]
  40. Kechagias, E.P.; Gayialis, S.P.; Papadopoulos, G.A.; Papoutsis, G. An Ethereum-Based Distributed Application for Enhancing Food Supply Chain Traceability. Foods 2023, 12, 1220. [Google Scholar] [CrossRef] [PubMed]
  41. Zou, Y.; Gao, Q.; Wu, H.; Liu, N. Carbon-Efficient Scheduling in Fresh Food Supply Chains with a Time-Window-Constrained Deep Reinforcement Learning Model. Sensors 2024, 24, 7461. [Google Scholar] [CrossRef]
  42. Saban, M.; Bekkour, M.; Amdaouch, I.; El Gueri, J.; Ahmed, B.A.; Chaari, M.Z.; Ruiz-Alzola, J.; Rosado-Muñoz, A.; Aghzout, O. A Smart Agricultural System Based on PLC and a Cloud Computing Web Application Using LoRa and LoRaWan. Sensors 2023, 23, 2725. [Google Scholar] [CrossRef]
  43. Schilhabel, S.; Sankaranarayanan, B.; Basu, C.; Madan, M.; Glennan, C.; McSherry, L. Blockchain technology in the food supply chain: Influences on supplier relationships and outcomes. Issues Inf. Syst. 2023, 24, 321–332. [Google Scholar]
  44. Wang, W.; Cao, Y.; Chen, Y.; Liu, C.; Han, X.; Zhou, B.; Wang, W. Assessing the adoption barriers for AI in food supply chain finance applying a hybrid interval-valued Fermatean fuzzy CRITIC-ARAS model. Sci. Rep. 2024, 14, 27834. [Google Scholar] [CrossRef]
  45. Umami, I.; Rahmawati, L. Comparing Epsilon Greedy and Thompson Sampling model for Multi-Armed Bandit algorithm on Marketing Dataset. J. Appl. Data Sci. 2021, 2. [Google Scholar] [CrossRef]
  46. Ramanathan, R.; Duan, Y.; Ajmal, T.; Pelc, K.; Gillespie, J.; Ahmadzadeh, S.; Condell, J.; Hermens, I.; Ramanathan, U. Motivations and Challenges for Food Companies in Using IoT Sensors for Reducing Food Waste: Some Insights and a Road Map for the Future. Sustainability 2023, 15, 1665. [Google Scholar] [CrossRef]
  47. Zhang, H.; Qin, X.; Zheng, H. Research on Contextual Recommendation System of Agricultural Science and Technology Resource Based on User Portrait. J. Phys. Conf. Ser. 2020, 1693, 012186. [Google Scholar] [CrossRef]
  48. Chandan, A.; John, M.; Potdar, V. Achieving UN SDGs in Food Supply Chain Using Blockchain Technology. Sustainability 2023, 15, 2109. [Google Scholar] [CrossRef]
  49. Kang, Z.; Zhao, Y.; Chen, L.; Guo, Y.; Mu, Q.; Wang, S. Advances in Machine Learning and Hyperspectral Imaging in the Food Supply Chain. Food Eng. Rev. 2022, 14, 596–616. [Google Scholar] [CrossRef]
  50. Szabo, P.; Genge, B. Efficient Behavior Prediction Based on User Events. J. Commun. Softw. Syst. 2021, 17, 134–142. [Google Scholar] [CrossRef]
  51. Osmond, A.B.; Hidayat, F.; Supangkat, S.H. Electronic Commerce Product Recommendation using Enhanced Conjoint Analysis. Int. J. Adv. Comput. Sci. Appl. (IJACSA) 2021, 12. [Google Scholar] [CrossRef]
  52. Toader, D.-C.; Rădulescu, C.M.; Toader, C. Investigating the Adoption of Blockchain Technology in Agri-Food Supply Chains: Analysis of an Extended UTAUT Model. Agriculture 2024, 14, 614. [Google Scholar] [CrossRef]
  53. Prakash, M.C.; Saravanan, P. Crop Insurance Premium Recommendation System Using Artificial Intelligence Techniques. Int. J. Prof. Bus. Rev. 2023, 8, 17. [Google Scholar]
  54. Patidar, S.; Sukhwani, V.K.; Shukla, A.C. Modeling of Critical Food Supply Chain Drivers Using DEMATEL Method and Blockchain Technology. J. Inst. Eng. (India) Ser. C 2023, 104, 541–552. [Google Scholar] [CrossRef]
  55. Mbadlisa, G.; Jokonya, O. Factors Affecting the Adoption of Blockchain Technologies in the Food Supply Chain. Front. Sustain. Food Syst. 2024, 8, 1497599. [Google Scholar] [CrossRef]
  56. Sharma, S.; Kumar, N.; Kaswan, K.S. Machine Learning Approach for Prediction of the Online User Intention for a Product Purchase. Int. J. Recent Innov. Trends Comput. Commun. 2023, 11, 43–51. [Google Scholar]
  57. Zhang, Y.; Wu, X.; Ge, H.; Jiang, Y.; Sun, Z.; Ji, X.; Jia, Z.; Cui, G. A Blockchain-Based Traceability Model for Grain and Oil Food Supply Chain. Foods 2023, 12, 3235. [Google Scholar] [CrossRef] [PubMed]
  58. Xu, J.; Han, J.; Qi, Z.; Jiang, Z.; Xu, K.; Zheng, M.; Zhang, X. A Reliable Traceability Model for Grain and Oil Quality Safety Based on Blockchain and Industrial Internet. Sustainability 2022, 14, 15144. [Google Scholar] [CrossRef]
  59. George, W.; Al-Ansari, T. GM-Ledger: Blockchain-Based Certificate Authentication for International Food Trade. Foods 2023, 12, 3914. [Google Scholar] [CrossRef]
  60. Shao, P.; Kamaruddin, N.S.; Wang, D. Huang, Integration and Analysis of Data in Grain Quality and Safety Traceability Using Blockchain Technology. J. Logist. Inform. Serv. Sci. 2023, 10, 47–61. [Google Scholar]
  61. Astuti, R.; Hidayati, L. How might blockchain technology be used in the food supply chain? A systematic literature review. Cogent Bus. Manag. 2023, 10, 2246739. [Google Scholar] [CrossRef]
  62. Schmidt, D.; Casagranda, L.F.; Butturi, M.A.; Sellitto, M.A. Digital Technologies, Sustainability, and Efficiency in Grain Post-Harvest Activities: A Bibliometric Analysis. Sustainability 2024, 16, 1244. [Google Scholar] [CrossRef]
  63. González-Mendes, S.; Alonso-Muñoz, S.; García-Muiña, F.E.; González-Sánchez, R. Discovering the conceptual building blocks of blockchain technology applications in the agri-food supply chain: A review and research agenda. Br. Food J. 2024, 126, 182–206. [Google Scholar] [CrossRef]
  64. Dastidar, U.G.; Ambekar, S.S.; Hudnurkar, M.; Lidbe, A.D. Experiential Retailing Leveraged by Data Analytics. Int. J. Bus. Intell. Res. 2021, 12, 98–113. [Google Scholar] [CrossRef]
  65. Mohammed, A.; Potdar, V.; Quaddus, M. Exploring Factors and Impact of Blockchain Technology in the Food Supply Chains: An Exploratory Study. Foods 2023, 12, 2052. [Google Scholar] [CrossRef]
  66. Quiroz-Flores, J.C.; Aguado-Rodriguez, R.J.; Zegarra-Aguinaga, E.A.; Collao-Diaz, M.F.; Flores-Perez, A.E. Industry 4.0, Circular Economy and Sustainability in the Food Industry: A Literature Review. Int. J. Ind. Eng. Oper. Manag. 2023, 6, 1–24. [Google Scholar] [CrossRef]
  67. Miller, T.; Mikiciuk, G.; Durlik, I.; Mikiciuk, M.; Łobodzińska, A.; Śnieg, M. The IoT and AI in Agriculture: The Time Is Now—A Systematic Review of Smart Sensing Technologies. Sensors 2025, 25, 3583. [Google Scholar] [CrossRef]
  68. Demestichas, K.; Peppes, N.; Alexakis, T.; Adamopoulou, E. Blockchain in Agriculture Traceability Systems: A Review. Appl. Sci. 2020, 10, 4113. [Google Scholar] [CrossRef]
  69. Sendros, A.; Drosatos, G.; Efraimidis, P.S.; Tsirliganis, N.C. Blockchain Applications in Agriculture: A Scoping Review. Appl. Sci. 2022, 12, 8016. [Google Scholar] [CrossRef]
  70. Rogers, E.M. Diffusion of Innovations, 5th ed.; Free Press: New York, NY, USA, 2003. [Google Scholar]
Figure 1. PRISMA flowchart.
Figure 2. Temporal distribution of publications 2019–2025. Note. 2024 data collected through September 2024; 2025 through February 2025. 2000% growth from 2019 to 2023 (peak). Average annual growth rate: 108.4%.
Figure 3. Geographic distribution. (A) Publications by Region. (B) Publications by Countries.
Figure 4. Research focus and thematic analysis. (A) Primary Research Objectives Distribution. (B) Proportional Distribution. Note: Agricultural Recommendation Systems dominate at 28.3%, while Direct Marketing Platforms remain underrepresented (4%); categories with <5 studies are grouped as “other categories” in the pie chart.
Figure 5. Methodological approaches distribution. (A) Methodological Approaches Frequency. (B) Research Quality by Methodology. (C) Frequency vs. Quality Trade-off. Note: Quality scores were derived from journal quality indicators (SJR quartile and impact metrics) as part of the quality assessment procedure described in Section 2.6. Inverse correlation (r = −0.37) suggests technical sophistication may obscure implementation rigor.
Figure 6. Supply chain stages addressed. (A) Supply Chain Coverage Distribution. (B) Supply Chain Flow Analysis. Note: 35.4% end-to-end coverage, others emphasize production-commercialization stages. Most frequent stage: primarily commercialization (31.3%), despite 36.4% claiming “full-chain”. 31–35 studies in distribution/sales marketing reported “commercialization” as focus stage.
Figure 7. Technology landscape and applications. Note: Based on primary technology focus per study (n = 99). “Others” category includes diverse technologies not directly classifiable into main categories (e.g., general digital platforms, multiple technology combinations).
Figure 8. Perspectives on the maturation of technology. (A) Research Volume vs. Implementation Maturity. (B) Implementation Maturity vs. Economic Validation. Note: Bubble size represents the number of studies (A) or technical performance score (B). Most studied technologies (AI/ML, Recommendations) show lower TRL than less studied (IoT). Economic validation rare across all technologies.
Figure 9. Technology viability matrix: technical performance versus economic feasibility in agricultural marketing optimization technologies (n = 99). Note: Economic Viability Score based on implementation cost (<5 K = high, >50 K = low), documented ROI (yes = +20 pts), economic analysis coverage (+1 pt per %), infrastructure accessibility. Technical Performance Score based on: reported accuracy metrics (0–100), reliability measures, efficiency indicators, functional completeness. Key Finding: Most technologies cluster in “Efficiency Paradox” (high technical, low economic) rather than “Ideal Zone”.
Figure 10. Technology readiness level distribution: implementation maturity assessment (n = 99).
Figure 11. Deployment stage distribution by technology.
Figure 12. Implementation barrier severity matrix.
Figure 13. Technology maturity evolution timeline (2019–2025): TRL progression trajectories.
Figure 14. Deployment readiness scorecard: multi-dimensional maturity assessment.
Figure 15. Evaluation metrics distribution across 41 studies with explicit performance assessment. (A) Distribution of Evaluation Metric Categories. (B) Proportional Distribution of Evaluation Approaches. Note: Total 41 studies with explicit evaluation metrics. Accuracy/Performance dominates (29.3%) while economic validation remains absent.
Figure 16. Technology performance comparison. (A) Research Volume by Technology Category. (B) Reported Performance Metrics by Technology. Note: Performance ranges represent reported results across studies. High technical performance (80–95% accuracy) contrasts with absent economic validation.
Figure 17. Key performance indicators.
Figure 18. Technology performance versus economic validation. # denotes the number of articles included in the analysis for each technology category. Bubble size = # articles indicates that the size of each bubble is proportional to the number of studies (articles) reviewed for that specific technology.
Figure 19. Technology-specific barrier severity assessment.
Figure 20. Barrier–opportunity relationship diagram.
Figure 21. Barrier reporting frequency versus estimated deployment impact.
Figure 22. Opportunity technical and economic feasibility assessment.
Table 1. Regional distribution.

Region | No. of Studies | Percentage | Leading Countries
Asia | 57 | 57.6% | India (28), China (12), Malaysia (4), Indonesia (3), Pakistan (2), Saudi Arabia (2)
Europe | 17 | 17.2% | UK (5), Spain (3), Italy (3), Romania (2), Greece (1), Serbia (1), Ireland (1), Lithuania (1)
Africa | 14 | 14.1% | South Africa (3), Morocco (2), Egypt (2), Somalia (2), Nigeria (1), Algeria (1), Ethiopia (1), Kenya (1), Mali (1)
North America | 5 | 5.1% | USA (5)
Latin America | 5 | 5.1% | Peru (2), Brazil (1), Chile (1), Colombia (1)
Oceania | 1 | 1.0% | Australia (1)
Table 2. Leading publication venues and quality indicators.

Journal | No. of Articles | SJR Quartile | H-Index | Focus Area
Sustainability (MDPI) | 12 | Q2 | 97 | Sustainability science, multidisciplinary
IEEE Access | 8 | Q2 | 127 | Computer science, engineering
Sensors (MDPI) | 6 | Q1 | 145 | Sensor technology, IoT
Agricultural Systems | 5 | Q1 | 115 | Agricultural sciences, systems
Computers and Electronics in Agriculture | 5 | Q1 | 127 | Agricultural technology, informatics
Foods (MDPI) | 4 | Q1 | 76 | Food science, supply chain
Applied Sciences (MDPI) | 4 | Q2 | 73 | Applied sciences, multidisciplinary
Information | 3 | Q2 | 65 | Information science
Agriculture (MDPI) | 3 | Q2 | 59 | Agricultural sciences
Blockchain: Research and Applications | 3 | Q1 | 28 | Blockchain technology (new journal)
Journal of Physics: Conference Series | 3 | Q3 | 88 | Conference proceedings
Frontiers in Sustainable Food Systems | 3 | Q2 | 42 | Food systems, sustainability
Scientific Reports | 2 | Q1 | 200 | Multidisciplinary sciences
International Journal of Advanced Computer Science and Applications | 2 | Q3 | 51 | Computer science
Other journals (1 article each) | 36 | Various | – | –
Table 3. Evaluation metrics.

Categories | Metrics | References
Accuracy and Performance | Fraud detection accuracy | [27]
 | Accuracy (99.53%) | [28]
 | Accuracy, Recall, F1-score and K-Fold cross validation | [29]
 | Accuracy, Recall, MAE, RMSE | [30]
 | Accuracy, Recall, F1-score, accuracy, RMSE, MAE and NMAE | [31]
 | Precision, recall and prediction error | [32]
 | Accuracy, recall, F1-score, AUC | [33]
 | Accuracy (95%), recall and run time | [34]
 | Model accuracy and correct classification rate | [35]
 | Model accuracy, achieving 99.09% accuracy with Random Forest | [36]
 | Accuracy (P), recall (R) and F1-score | [37]
 | Accuracy (99.2% with Decision Trees), recall, F1-score | [38]
Time and Efficiency | Reduced search time (~7.5× faster than traditional methods) | [27]
 | Latency, transaction rate, throughput, scalability, and interoperability | [39]
 | Reduced traceability time, data accuracy, and reliability | [40]
 | Efficiency in reducing carbon emissions, reduced logistics costs | [41]
 | Evaluation of packet loss, transmission latency, LoRa communication quality | [42]
 | Product traceability times in the supply chain, food fraud reduction | [43]
 | Barrier significance assessment based on interdependent criteria | [44]
 | Overall performance and statistical significance | [45]
 | Food waste reduction | [46]
 | Safety, scalability, consensus efficiency | [47]
Predictive Modelling and Statistical Analysis | Regression coefficient | [48]
 | R2, RMSE, SEP, CCR | [49]
 | Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) | [50]
 | Relative Error, Correlation, RMSE | [51]
 | Coefficient of Determination (R2) | [52]
 | Gradient Boosting Regressor with an accuracy of 93.58% | [53]
 | Interrelation analysis using GTMA | [54]
 | Frequency analysis and ANOVA | [55]
 | Mean Absolute Deviation (MAD), Mean Magnitude of Relative Error (MMRE) | [56]
Security and Access Control | Data security, reduction of fraud in the supply chain | [57]
 | Data security, traceability efficiency, reduction of query times | [58]
 | Implementation of data encryption for privacy protection | [59]
 | Quality assessment according to food safety standards | [60]
 | Security, scalability, consensus efficiency | [61]
Qualitative and Bibliometric Analysis | Bibliometric metrics such as publication and citation counts | [62]
 | Co-occurrence analysis, number of publications, citations | [63]
 | Word frequency, word cloud | [64]
 | Qualitative analysis with no specific quantitative metrics | [65]
 | Ranking factors and AHP to rank Industry 4.0 tools | [66]
Table 4. Digital technologies and comparative performance metrics reported in the studies.

Technology | No. of Articles | Main Metrics Reported | Range of Results | Application Examples
Blockchain | 15 | Data security, latency, transaction costs | Fraud reduction 20–30%; latency < 2 s per transaction | IBM Food Trust, Hyperledger, OpenSC
AI/ML | 20 | Accuracy, recall, RMSE, R2 | Accuracy 85–95%; RMSE 0.12–0.35 | Crop prediction, classification models
IoT | 12 | Latency, packet loss, energy efficiency | Latency < 1 s; >95% successful data transmission | IoT sensors, ESP32, UAVs in traceability
Recommendation systems | 28 | Precision, recall, MAE, F1-score | Accuracy 80–92%; F1-score 0.78–0.89 | Crop, fertilizer, and e-commerce recommendations
Big Data/Cloud | 10 | Processing time, scalability | Large datasets processed in seconds; cost reduction 15% | Hadoop, MapReduce, NoSQL
Augmented reality/3D | 5 | Usability, operation times | 20% reduction in training time for operators | AR.js, Three.js, OpenCV
