This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Article

Multi-Agent Coordination Strategies vs Retrieval-Augmented Generation in LLMs: A Comparative Evaluation

1 Intelligent Systems Department, Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
2 Faculty of Informatics and Mathematics, Trakia University, 6000 Stara Zagora, Bulgaria
3 Bulgarian Academy of Sciences, 1040 Sofia, Bulgaria
* Author to whom correspondence should be addressed.
Electronics 2025, 14(24), 4883; https://doi.org/10.3390/electronics14244883
Submission received: 15 November 2025 / Revised: 7 December 2025 / Accepted: 8 December 2025 / Published: 11 December 2025

Abstract

This paper evaluates multi-agent coordination strategies against single-agent retrieval-augmented generation (RAG) for open-source language models. Four coordination strategies (collaborative, sequential, competitive, hierarchical) were tested across Mistral 7B, Llama 3.1 8B, and Granite 3.2 8B using 100 domain-specific question–answer pairs (3100 total evaluations). Performance was assessed using Composite Performance Score (CPS) and Threshold-aware CPS (T-CPS), aggregating nine metrics spanning lexical, semantic, and linguistic dimensions. Under the tested conditions, all 28 multi-agent configurations showed degradation relative to single-agent baselines, ranging from −4.4% to −35.3%. Coordination overhead was identified as a primary contributing factor. Llama 3.1 8B tolerated Sequential and Hierarchical coordination with minimal degradation (−4.9% to −5.3%). Mistral 7B with shared context retrieval achieved comparable results. Granite 3.2 8B showed degradation of 14–35% across all strategies. Collaborative coordination exhibited the largest degradation across all models. Study limitations include evaluation on a single domain (agriculture), use of 7–8B parameter models, and homogeneous agent architectures. These findings suggest that single-agent RAG may be preferable for factual question-answering tasks in local deployment scenarios with computational constraints. Future research should explore larger models, heterogeneous agent teams, role-specific prompting, and advanced consensus mechanisms.
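The abstract describes aggregating nine lexical, semantic, and linguistic metrics into a Composite Performance Score (CPS) and a Threshold-aware variant (T-CPS). The paper's exact formulas are not reproduced on this page, so the sketch below is a hypothetical illustration only: it assumes equal-weight averaging of normalized scores in [0, 1], and reads "threshold-aware" as gating metrics that fall below a per-metric floor before averaging. Metric names and threshold values are invented for illustration.

```python
# Hypothetical sketch of a composite score over normalized metrics.
# Assumptions (not from the paper): equal weights, scores in [0, 1],
# and a simple per-metric threshold gate for the T-CPS variant.

def composite_performance_score(metrics: dict) -> float:
    """Equal-weight mean of normalized metric scores."""
    return sum(metrics.values()) / len(metrics)

def threshold_aware_cps(metrics: dict, thresholds: dict) -> float:
    """Zero out any metric below its threshold, then average.
    This is one plausible reading of 'threshold-aware'; the paper's
    actual T-CPS definition may differ."""
    gated = {name: (score if score >= thresholds.get(name, 0.0) else 0.0)
             for name, score in metrics.items()}
    return sum(gated.values()) / len(gated)

# Illustrative (made-up) per-question scores for a single model answer:
scores = {"bleu": 0.42, "rouge_l": 0.55, "bertscore": 0.88,
          "cosine_sim": 0.81, "readability": 0.70}
floors = {"bleu": 0.30, "bertscore": 0.90}

print(composite_performance_score(scores))
print(threshold_aware_cps(scores, floors))
```

Under this reading, a configuration can score well on the plain CPS yet be penalized by T-CPS whenever individual metrics dip below their floors, which would make T-CPS the stricter of the two aggregates.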
Keywords: retrieval-augmented generation (RAG); multi-agent coordination strategies; large language models (LLMs); comparative evaluation; performance evaluation

Share and Cite

MDPI and ACS Style

Radeva, I.; Popchev, I.; Doukovska, L.; Dimitrova, M. Multi-Agent Coordination Strategies vs Retrieval-Augmented Generation in LLMs: A Comparative Evaluation. Electronics 2025, 14, 4883. https://doi.org/10.3390/electronics14244883

AMA Style

Radeva I, Popchev I, Doukovska L, Dimitrova M. Multi-Agent Coordination Strategies vs Retrieval-Augmented Generation in LLMs: A Comparative Evaluation. Electronics. 2025; 14(24):4883. https://doi.org/10.3390/electronics14244883

Chicago/Turabian Style

Radeva, Irina, Ivan Popchev, Lyubka Doukovska, and Miroslava Dimitrova. 2025. "Multi-Agent Coordination Strategies vs Retrieval-Augmented Generation in LLMs: A Comparative Evaluation" Electronics 14, no. 24: 4883. https://doi.org/10.3390/electronics14244883

APA Style

Radeva, I., Popchev, I., Doukovska, L., & Dimitrova, M. (2025). Multi-Agent Coordination Strategies vs Retrieval-Augmented Generation in LLMs: A Comparative Evaluation. Electronics, 14(24), 4883. https://doi.org/10.3390/electronics14244883

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
