Intelligent Agents and Their Application

A special issue of Future Internet (ISSN 1999-5903).

Deadline for manuscript submissions: 31 August 2026 | Viewed by 15512

Special Issue Editors


Guest Editor
1. Intelligent Systems Department, Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
2. Computer Systems Department, Plovdiv University “Paisii Hilendarski”, 4000 Plovdiv, Bulgaria
Interests: artificial intelligence; multi-agent systems; intelligent agents; virtual–physical spaces; semantic modeling; smart agriculture

Guest Editor
Computer Systems Department, Plovdiv University “Paisii Hilendarski”, 4000 Plovdiv, Bulgaria
Interests: knowledge representation; semantic web; ontologies; software engineering; smart agriculture; e-learning

Guest Editor
1. Computer Systems Department, Plovdiv University “Paisii Hilendarski”, 4000 Plovdiv, Bulgaria
2. Intelligent Systems Department, Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
Interests: knowledge representation; logic programming; intelligent agents; e-learning; smart agriculture

Special Issue Information

Dear Colleagues,

Intelligent agents are a well-suited approach to modeling, designing, and implementing distributed systems with autonomous components. The first ideas for intelligent agents can be found in the classic artificial intelligence works of John McCarthy and Marvin Minsky. One of the most important environments for intelligent agents is the Internet: intelligent technologies, and particularly intelligent agents, underpin many Internet tools, such as search engines, recommendation systems, and website aggregators. In modern artificial intelligence, the agent-oriented paradigm is generally considered to have emerged in the mid-1990s (after the two so-called “AI winters”).

The evolution of agent-oriented technologies continues to this day, with agents finding increasing application in a variety of fields. It is now widely accepted that sensor systems (vision, sound, speech recognition, etc.) cannot provide completely reliable information about the environment; reasoning and planning systems must therefore be able to cope with uncertainty. Another important consequence of the agent-oriented paradigm is that artificial intelligence has been "brought" into much closer contact with other fields that also deal with agents, such as control theory and economics. For example, advances in the control of robotic cars are the result of integrating different approaches, ranging from better sensors and sensor fusion to localization, mapping, and high-level planning.

Unfortunately, there is no universally accepted definition of the term “agent”. However, there is a consensus that autonomy is a central property of agents. One obstacle to a broader understanding of this topic is that individual characteristics of agents carry different weight in different application domains. For example, in some applications, the ability of agents to learn from experience is of paramount importance, while in others, learning is not only unimportant but even undesirable. A widely accepted definition is that an intelligent agent is a computer system that can operate autonomously, reactively, proactively, and socially in some environment to achieve delegated goals. One of the most widely used architectures for intelligent agents is the BDI (Beliefs–Desires–Intentions) model, which is designed to support rational decision-making as practical reasoning and is, at the same time, representative of symbolic AI.
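The BDI deliberation cycle can be sketched in a few lines of Python. This is a deliberately minimal illustration, not an implementation from any particular BDI framework; the agent, its percepts, and the `irrigate`/`idle` goals are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BDIAgent:
    """Toy BDI loop: revise beliefs from percepts, then commit to the first
    desire whose precondition holds in the current beliefs (the intention)."""
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=list)   # (goal, precondition) pairs, in priority order
    intention: Optional[str] = None

    def perceive(self, percept: dict) -> None:
        self.beliefs.update(percept)              # belief revision

    def deliberate(self) -> None:
        for goal, precondition in self.desires:
            if all(self.beliefs.get(k) == v for k, v in precondition.items()):
                self.intention = goal             # commit: this becomes the intention
                return
        self.intention = None

# Hypothetical smart-agriculture agent: irrigate when the soil is dry.
agent = BDIAgent(desires=[("irrigate", {"soil": "dry"}), ("idle", {})])
agent.perceive({"soil": "dry"})
agent.deliberate()
print(agent.intention)  # irrigate
```

Real BDI systems add plan libraries and intention reconsideration on top of this perceive–deliberate skeleton.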

Currently, there is increased practical and scientific interest in intelligent agents built on generative AI and the ReAct architecture, commonly known as AI agents, which are representatives of sub-symbolic AI. The rise of AI agents is reshaping how we build and think about agent-oriented applications: they offer a new way to construct agent-oriented and multi-agent systems, using Large Language Models (LLMs) as reasoning engines. The expectation is that, in the future, every person and every business will have a personal assistant implemented as an intelligent agent. Chain-of-Thought (CoT) reasoning is at the heart of these AI agents.
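The ReAct pattern can be illustrated as a loop that alternates model "thoughts", tool "actions", and environment "observations". In the sketch below the `llm` function is a stub standing in for a real LLM API call, and the `get_temp` tool is invented for the example; a production agent would send the growing transcript to an actual model.

```python
def llm(transcript: str) -> str:
    # Stubbed "reasoning engine": first call a tool, then answer.
    if "Observation:" not in transcript:
        return "Thought: I need the temperature.\nAction: get_temp[Plovdiv]"
    return "Thought: I have the answer.\nFinal Answer: 21 C"

TOOLS = {"get_temp": lambda city: "21 C"}  # toy tool registry

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)                 # reason
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:                  # act, then observe
            name, arg = step.split("Action:", 1)[1].strip().rstrip("]").split("[", 1)
            transcript += f"\nObservation: {TOOLS[name](arg)}"
    return "no answer"

print(react("What is the temperature in Plovdiv?"))  # 21 C
```

The key design point is that the LLM never executes anything itself: it only emits text, and the surrounding loop parses actions, runs tools, and feeds results back.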

There has been a recent trend toward integrating symbolic and sub-symbolic AI, which have historically been in conflict and even considered mutually exclusive. In our opinion, there are domains that demonstrate the synergy of these two approaches.

This Special Issue is dedicated to the development and use of intelligent agents in various applications; therefore, potential topics include, but are not limited to, the following:

  • Creation of intelligent agents;
  • Agent-oriented architectures;
  • Development of multi-agent systems;
  • Agent-oriented technologies for solving tasks in various domains;
  • Agent-oriented cyber–physical systems;
  • Agent-oriented Internet of Things systems;
  • Distributed, parallel, and cloud computing.

Prof. Dr. Stanimir Stoyanov
Dr. Asya Stoyanova-Doycheva
Dr. Veneta Tabakova-Komsalova
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent agents
  • multi-agent systems
  • agent-oriented architectures
  • agent-oriented technologies
  • Large Language Models (LLMs) and Small Language Models (SLMs)
  • Chain of Thought (CoT)
  • big data
  • knowledge-based systems
  • ontologies
  • symbolic, sub-symbolic, and integrated artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Research

43 pages, 1927 KB  
Article
A Large-Scale Empirical Study of LLM Orchestration and Ensemble Strategies for Sentiment Analysis in Recommender Systems
by Konstantinos I. Roumeliotis, Dionisis Margaris, Dimitris Spiliotopoulos and Costas Vassilakis
Future Internet 2026, 18(2), 112; https://doi.org/10.3390/fi18020112 - 20 Feb 2026
Viewed by 917
Abstract
This paper presents a comprehensive empirical evaluation comparing meta-model aggregation strategies with traditional ensemble methods and standalone models for sentiment analysis in recommender systems beyond standalone large language model (LLM) performance. We investigate whether aggregating multiple LLMs through a reasoning-based meta-model provides measurable performance advantages over individual models and standard statistical aggregation approaches in zero-shot sentiment classification. Using a balanced dataset of 5000 verified Amazon purchase reviews (1000 reviews per rating category from 1 to 5 stars, sampled via two-stage stratified sampling across five product categories), we evaluate 12 different leading pre-trained LLMs from four major providers (OpenAI, Anthropic, Google, and DeepSeek) in both standalone and meta-model configurations. Our experimental design systematically compares individual model performance against GPT-based meta-model aggregation and traditional ensemble baselines (majority voting, mean aggregation). Results show statistically significant improvements (McNemar’s test, p < 0.001): the GPT-5 meta-model achieves 71.40% accuracy (10.15 percentage point improvement over the 61.25% individual model average), while the GPT-5 mini meta-model reaches 70.32% (9.07 percentage point improvement). These observed improvements surpass traditional ensemble methods (majority voting: 62.64%; mean aggregation: 62.96%), suggesting potential value in meta-model aggregation for sentiment analysis tasks. Our analysis reveals empirical patterns including neutral sentiment classification challenges (3-star ratings show 64.83% failure rates across models), model influence hierarchies, and cost-accuracy trade-offs ($130.45 aggregation cost vs. $0.24–$43.97 for individual models per 5000 predictions). 
This work provides evidence-based insights into the comparative effectiveness of LLM aggregation strategies in recommender systems, demonstrating that meta-model aggregation with natural language reasoning capabilities achieves measurable performance gains beyond statistical aggregation alone. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
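As a point of reference for the ensemble baselines this abstract compares against, a majority-vote aggregator takes only a few lines. The sketch below is illustrative and not the authors' code; the example labels are invented.

```python
from collections import Counter

def majority_vote(predictions: list) -> str:
    """Majority-vote ensemble baseline: the most frequent label wins.
    Ties break by first-seen order (a simplifying assumption)."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical model outputs for one review:
print(majority_vote(["positive", "neutral", "positive"]))  # positive
```

The paper's meta-model approach differs in that a further LLM reasons over the individual predictions instead of merely counting them.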

20 pages, 682 KB  
Article
Semantic Search for System Dynamics Models Using Vector Embeddings in a Cloud Microservices Environment
by Pavel Kyurkchiev, Anton Iliev and Nikolay Kyurkchiev
Future Internet 2026, 18(2), 86; https://doi.org/10.3390/fi18020086 - 5 Feb 2026
Viewed by 602
Abstract
Efficient retrieval of mathematical and structural similarities in System Dynamics models remains a significant challenge for traditional lexical systems, which often fail to capture the contextual dependencies of simulation processes. This paper presents an architectural approach and implementation of a semantic search module integrated into an existing cloud-based modeling and simulation system. The proposed method employs a strategy for serializing graph structures into textual descriptions, followed by the generation of vector embeddings via local ONNX inference and indexing within a vector database (Qdrant). Experimental validation performed on a diverse corpus of complex dynamic models, compares the proposed approach against traditional information retrieval methods (Full-Text Search, Keyword Search in PostgreSQL, and Apache Lucene with Standard and BM25 scoring). The results demonstrate the distinct advantage of semantic search, achieving high precision (over 90%) within the scope of the evaluated corpus and effectively eliminating information noise. In comparison, keyword search exhibited only 24.8% precision with a significant rate of false positives, while standard full-text analysis failed to identify relevant models for complex conceptual queries (0 results). Despite a recorded increase in latency (~2 s), the study proves that the vector-based approach is a significantly more robust solution for detecting hidden semantic connections in mathematical model databases, providing a foundation for future developments toward multi-vector indexing strategies. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
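The core retrieval step behind such embedding-based search is nearest-neighbor lookup by vector similarity. The sketch below illustrates the idea with cosine similarity over toy vectors and invented model names; the paper itself uses ONNX-generated embeddings indexed in Qdrant rather than this in-memory scan.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" of serialized model descriptions (names are invented).
index = {
    "predator-prey": [0.9, 0.1, 0.2],
    "inventory":     [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]  # embedding of the user's conceptual query
best = max(index, key=lambda name: cosine(index[name], query))
print(best)  # predator-prey
```

A vector database replaces the linear `max` scan with an approximate nearest-neighbor index, which is what keeps latency manageable at scale.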

29 pages, 5294 KB  
Article
Building a Regional Platform for Monitoring Air Quality
by Stanimir Nedyalkov Stoyanov, Boyan Lyubomirov Belichev, Veneta Veselinova Tabakova-Komsalova, Yordan Georgiev Todorov, Angel Atanasov Golev, Georgi Kostadinov Maglizhanov, Ivan Stanimirov Stoyanov and Asya Georgieva Stoyanova-Doycheva
Future Internet 2026, 18(2), 78; https://doi.org/10.3390/fi18020078 - 2 Feb 2026
Viewed by 414
Abstract
This paper presents PLAM (Plovdiv Air Monitoring)—a regional multi-agent platform for air quality monitoring, semantic reasoning, and forecasting. The platform uses a hybrid architecture that combines two types of intelligent agents: classic BDI (Belief-Desire-Intention) agents for complex, goal-oriented behavior and planning, and ReAct agents based on large language models (LLM) for quick response, analysis, and interaction with users. The system integrates data from heterogeneous sources, including local IoT sensor networks and public external services, enriching it with a specialized OWL ontology of environmental norms. Based on this data, the platform performs comparative analysis, detection of anomalies and inconsistencies between measurements, as well as predictions using machine learning models. The results are visualized and presented to users via a web interface and mobile application, including personalized alerts and recommendations. The architecture demonstrates essential properties of an intelligent agent such as autonomy, proactivity, reactivity, and social capabilities. The implementation and testing in the city of Plovdiv demonstrate the system’s ability to provide a more objective and comprehensive assessment of air quality, revealing significant differences between measurements from different institutions. The platform offers a modular and adaptive design, making it applicable to other regions, and outlines future development directions, such as creating a specialized small language model and expanding sensor capabilities. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

32 pages, 2836 KB  
Article
Towards Trustworthy AI Agents in Geriatric Medicine: A Secure and Assistive Architectural Blueprint
by Elena-Anca Paraschiv, Adrian Victor Vevera, Carmen Elena Cîrnu, Lidia Băjenaru, Andreea Dinu and Gabriel Ioan Prada
Future Internet 2026, 18(2), 75; https://doi.org/10.3390/fi18020075 - 1 Feb 2026
Viewed by 1126
Abstract
As artificial intelligence (AI) continues to expand across clinical environments, healthcare is transitioning from static decision-support tools to dynamic, autonomous agents capable of reasoning, coordination, and continuous interaction. In the context of geriatric medicine, a field characterized by multimorbidity, cognitive decline, and the need for long-term personalized care, this evolution opens new frontiers for delivering adaptive, assistive, and trustworthy digital support. However, the autonomy and interconnectivity of these systems introduce heightened cybersecurity and ethical challenges. This paper presents a Secure Agentic AI Architecture (SAAA) tailored to the unique demands of geriatric healthcare. The architecture is designed around seven layers, grouped into five functional domains (cognitive, coordination, security, oversight, governance) to ensure modularity, interoperability, explainability, and robust protection of sensitive health data. A review of current AI agent implementations highlights limitations in security, transparency, and regulatory alignment, especially in multi-agent clinical settings. The proposed framework is illustrated through a practical use case involving home-based care for elderly patients with chronic conditions, where AI agents manage medication adherence, monitor vital signs, and support clinician communication. The architecture’s flexibility is further demonstrated through its application in perioperative care coordination, underscoring its potential across diverse clinical domains. By embedding trust, accountability, and security into the design of agentic systems, this approach aims to advance the safe and ethical integration of AI into aging-focused healthcare environments. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

30 pages, 965 KB  
Article
Guarded Swarms: Building Trusted Autonomy Through Digital Intelligence and Physical Safeguards
by Uwe M. Borghoff, Paolo Bottoni and Remo Pareschi
Future Internet 2026, 18(1), 64; https://doi.org/10.3390/fi18010064 - 21 Jan 2026
Viewed by 635
Abstract
Autonomous UAV/UGV swarms increasingly operate in contested environments where purely digital control architectures are vulnerable to cyber compromise, communication denial, and timing faults. This paper presents Guarded Swarms, a hybrid framework that combines digital coordination with hardware-level analog safety enforcement. The architecture builds on Topic-Based Communication Space Petri Nets (TB-CSPN) for structured multi-agent coordination, extending this digital foundation with independent analog guard channels—thrust clamps, attitude limiters, proximity sensors, and emergency stops—that operate in parallel at the actuator interface. Each channel can unilaterally veto unsafe commands within microseconds, independently of software state. The digital–analog interface is formalized via timing contracts that specify sensor-consistency windows and actuation latency bounds. A two-robot case study demonstrates token-based arbitration at the digital level and OR-style inhibition at the analog level. The framework ensures local safety deterministically while maintaining global coordination as a best-effort property. This paper presents an architectural contribution establishing design principles and interface contracts. Empirical validation remains future work. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

18 pages, 418 KB  
Article
AnonymAI: An Approach with Differential Privacy and Intelligent Agents for the Automated Anonymization of Sensitive Data
by Marcelo Nascimento Oliveira Soares, Leonardo Barbosa Oliveira, Antonio João Gonçalves Azambuja, Jean Phelipe de Oliveira Lima and Anderson Silva Soares
Future Internet 2026, 18(1), 41; https://doi.org/10.3390/fi18010041 - 9 Jan 2026
Viewed by 993
Abstract
Data governance for responsible AI systems remains challenged by the lack of automated tools that can apply robust privacy-preserving techniques without destroying analytical value. We propose AnonymAI, a novel methodological framework that integrates LLM-based intelligent agents, the mathematical guarantees of differential privacy, and an automated workflow to generate anonymized datasets for analytical applications. This framework produces data tables with formally verifiable privacy protection, dramatically reducing the need for manual classification and the risk of human error. Focusing on the protection of tabular data containing sensitive personal information, AnonymAI is designed as a generalized, replicable pipeline adaptable to different regulations (e.g., General Data Protection Regulation) and use-case scenarios. The novelty lies in combining the contextual classification capabilities of LLMs with the mathematical rigor of differential privacy, enabling an end-to-end pipeline from raw data to a protected, analysis-ready dataset. The efficiency and formal guarantees of this approach offer significant advantages over conventional anonymization methods, which are often manual, inconsistent, and lack the verifiable protections of differential privacy. Validation studies, covering both controlled experiments on four types of synthetic datasets and broader tests on 19 real-world public tables from various domains, confirmed the applicability of the framework, with the agent-based classifier achieving high overall accuracy in identifying confidential columns. The results demonstrate that the protected data maintains high value for statistical analysis and machine learning models, highlighting AnonymAI’s potential to advance responsible data sharing. This work paves the way for trustworthy and scalable data governance in AI through a rigorously engineered automated anonymization pipeline. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

33 pages, 2522 KB  
Article
Ontology-Driven Multi-Agent System for Cross-Domain Art Translation
by Viktor Matanski, Anton Iliev, Nikolay Kyurkchiev and Todorka Terzieva
Future Internet 2025, 17(11), 517; https://doi.org/10.3390/fi17110517 - 12 Nov 2025
Viewed by 1769
Abstract
Generative models can generate art within a single modality with high fidelity. However, translating a work of art from one domain to another (e.g., painting to music or poem to painting) in a meaningful way remains a longstanding, interdisciplinary challenge. We propose a novel approach combining a multi-agent system (MAS) architecture with an ontology-guided semantic representation to achieve cross-domain art translation while preserving the original artwork’s meaning and emotional impact. In our concept, specialized agents decompose the task: a Perception Agent extracts symbolic descriptors from the source artwork, a Translation Agent maps these descriptors using shared knowledge base, a Generator Agent creates the target-modality artwork, and a Curator Agent evaluates and refines the output for coherence and style alignment. This modular design, inspired by human creative workflows, allows complex artistic concepts (themes, moods, motifs) to carry over across modalities in a consistent and interpretable way. We implemented a prototype supporting translations between painting and poetry, leveraging state-of-the-art generative models. Preliminary results indicate that our ontology-driven MAS produces cross-domain translations that preserve key semantic elements and affective tone of the input, offering a new path toward explainable and controllable creative AI. Finally, we discuss a case study and potential applications from educational tools to synesthetic VR experiences and outline future research directions for enhancing the realm of intelligent agents. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

14 pages, 1197 KB  
Article
ABMS-Driven Reinforcement Learning for Dynamic Resource Allocation in Mass Casualty Incidents
by Ionuț Murarețu, Alexandra Vultureanu-Albiși, Sorin Ilie and Costin Bădică
Future Internet 2025, 17(11), 502; https://doi.org/10.3390/fi17110502 - 3 Nov 2025
Viewed by 871
Abstract
This paper introduces a novel framework that integrates reinforcement learning with declarative modeling and mathematical optimization for dynamic resource allocation during mass casualty incidents. Our approach leverages Mesa as an agent-based modeling library to develop a flexible and scalable simulation environment as a decision support system for emergency response. This paper addresses the challenge of efficiently allocating casualties to hospitals by combining mixed-integer linear and constraint programming while enabling a central decision-making component to adapt allocation strategies based on experience. The two-layer architecture ensures that casualty-to-hospital assignments satisfy geographical and medical constraints while optimizing resource usage. The reinforcement learning component receives feedback through agent-based simulation outcomes, using survival rates as the reward signal to guide future allocation decisions. Our experimental evaluation, using simulated emergency scenarios, shows a significant improvement in survival rates compared to traditional optimization approaches. The results indicate that the hybrid approach successfully combines the robustness of declarative modeling and the adaptability required for smart decision making in complex and dynamic emergency scenarios. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

19 pages, 9954 KB  
Article
Improved Generation of Drawing Sequences Using Variational and Skip-Connected Deep Networks for a Drawing Support System
by Atomu Nakamura, Homari Matsumoto, Koharu Chiba and Shun Nishide
Future Internet 2025, 17(9), 413; https://doi.org/10.3390/fi17090413 - 10 Sep 2025
Viewed by 878
Abstract
This study presents a deep generative model designed to predict intermediate stages in the drawing process of character illustrations. To enhance generalization and robustness, the model integrates a variational bottleneck based on the Variational Autoencoder (VAE) and employs Gaussian noise augmentation during training. We also investigate the effect of U-Net-style skip connections, which allow for the direct propagation of low-level features, on autoregressive sequence generation. Comparative experiments with baseline models demonstrate that the proposed VAE with noise augmentation outperforms both CNN- and RNN-based baselines in long-term stability and visual fidelity. While skip connections improve local detail retention, they also introduce instability in extended sequences, suggesting a trade-off between spatial precision and temporal coherence. The findings highlight the advantages of probabilistic modeling and data augmentation for sequential image generation and provide practical insights for designing intelligent drawing support systems. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

27 pages, 1120 KB  
Article
Beyond Prompt Chaining: The TB-CSPN Architecture for Agentic AI
by Uwe M. Borghoff, Paolo Bottoni and Remo Pareschi
Future Internet 2025, 17(8), 363; https://doi.org/10.3390/fi17080363 - 8 Aug 2025
Cited by 3 | Viewed by 2732
Abstract
Current agentic AI frameworks such as LangGraph and AutoGen simulate autonomy via sequential prompt chaining but lack true multi-agent coordination architectures. These systems conflate semantic reasoning with orchestration, requiring LLMs at every coordination step and limiting scalability. By contrast, TB-CSPN (Topic-Based Communication Space Petri Net) is a hybrid formal architecture that fundamentally separates semantic processing from coordination logic. Unlike traditional Petri net applications, where the entire system state is encoded within the network structure, TB-CSPN uses Petri nets exclusively for coordination workflow modeling, letting communication and interaction between agents drive semantically rich, topic-based representations. At the same time, unlike first-generation agentic frameworks, here LLMs are confined to topic extraction, with business logic coordination implemented by structured token communication. This hybrid architectural separation preserves human strategic oversight (as supervisors) while delegating consultant and worker roles to LLMs and specialized AI agents, avoiding the state-space explosion typical of monolithic formal systems. Our empirical evaluation shows that TB-CSPN achieves 62.5% faster processing, 66.7% fewer LLM API calls, and 167% higher throughput compared to LangGraph-style orchestration, without sacrificing reliability. Scaling experiments with 10–100 agents reveal sub-linear memory growth (10× efficiency improvement), directly contradicting traditional Petri Net scalability concerns through our semantic-coordination-based architectural separation. These performance gains arise from the hybrid design, where coordination patterns remain constant while semantic spaces scale independently. 
TB-CSPN demonstrates that efficient agentic AI emerges not from over-reliance on modern AI components but from embedding them strategically within a hybrid architecture that combines formal coordination guarantees with semantic flexibility. Our implementation and evaluation methodology are openly available, inviting community validation and extension of these principles. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
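The separation the abstract describes, an LLM used once for topic extraction and LLM-free, guard-based coordination afterwards, can be illustrated with a minimal Python sketch. This is not the paper's TB-CSPN implementation; the `Token`, `Transition`, and `mock_topic_extraction` names, the threshold values, and the single-token workflow are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    """A structured token carrying topics extracted once by an LLM."""
    payload: str
    topics: dict = field(default_factory=dict)  # topic -> relevance score

class Transition:
    """A Petri-net-style transition guarded by a topic threshold; firing it
    requires no LLM call, only a lookup on the token's topic vector."""
    def __init__(self, topic, threshold, action):
        self.topic, self.threshold, self.action = topic, threshold, action

    def enabled(self, token):
        return token.topics.get(self.topic, 0.0) >= self.threshold

    def fire(self, token):
        return self.action(token)

def mock_topic_extraction(text):
    # Stand-in for the one place an LLM appears: tagging text with topics.
    return {"market_risk": 0.9} if "volatile" in text else {"routine": 1.0}

def coordinate(token, transitions):
    """Route the token through whichever transitions its topics enable."""
    return [t.fire(token) for t in transitions if t.enabled(token)]

token = Token("Markets look volatile today")
token.topics = mock_topic_extraction(token.payload)

transitions = [
    Transition("market_risk", 0.8, lambda tk: "escalate_to_supervisor"),
    Transition("routine", 0.5, lambda tk: "archive"),
]
print(coordinate(token, transitions))  # ['escalate_to_supervisor']
```

Because every routing decision after extraction is a threshold check on the token, adding transitions grows the coordination layer without adding LLM calls, which is the cost structure the abstract's API-call reduction figures rely on.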

27 pages, 2572 KB  
Article
Parallel Agent-Based Framework for Analyzing Urban Agricultural Supply Chains
by Manuel Ignacio Manríquez, Veronica Gil-Costa and Mauricio Marin
Future Internet 2025, 17(7), 316; https://doi.org/10.3390/fi17070316 - 19 Jul 2025
Viewed by 826
Abstract
This work presents a parallel agent-based framework designed to analyze the dynamics of vegetable trade within a metropolitan area. The system integrates agent-based and discrete event techniques to capture the complex interactions among farmers, vendors, and consumers in urban agricultural supply chains. Decision-making processes are modeled in detail: farmers select crops based on market trends and environmental risks, while vendors and consumers adapt their purchasing behavior according to seasonality, prices, and availability. To efficiently handle the computational demands of large-scale scenarios, we adopt an optimistic approximate parallel execution strategy. Furthermore, we introduce a credit-based load balancing mechanism that mitigates the effects of heterogeneous communication patterns and improves scalability. This framework enables detailed analysis of food distribution systems in urban contexts, offering insights relevant to smart cities and digital agriculture initiatives. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
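The credit-based load balancing the abstract mentions can be sketched in a few lines: give each worker a credit budget and dispatch every task to the worker with the most remaining credits, charging it the task's communication cost. This is a generic greedy sketch under assumed semantics, not the paper's mechanism; the worker names, credit values, and costs are made up.

```python
import heapq

def credit_balance(tasks, workers, credits):
    """Assign each (task, cost) pair to the worker holding the most credits,
    then charge that worker the cost, so expensive, communication-heavy
    tasks spread out instead of piling onto one process."""
    # Max-heap keyed on remaining credits (negated for heapq's min-heap).
    heap = [(-credits[w], w) for w in workers]
    heapq.heapify(heap)
    assignment = {}
    for task, cost in tasks:
        neg_credits, worker = heapq.heappop(heap)
        assignment[task] = worker
        heapq.heappush(heap, (neg_credits + cost, worker))
    return assignment

workers = ["A", "B"]
credits = {"A": 10, "B": 10}
tasks = [("t1", 6), ("t2", 2), ("t3", 2)]   # heterogeneous communication costs
assignment = credit_balance(tasks, workers, credits)
print(assignment)  # {'t1': 'A', 't2': 'B', 't3': 'B'}
```

After the costly task t1 drains A's credits, the two cheaper tasks both land on B, illustrating how credit accounting counteracts heterogeneous communication patterns.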

28 pages, 8659 KB  
Article
A Regional Multi-Agent Air Monitoring Platform
by Stanimir Stoyanov, Emil Doychev, Asya Stoyanova-Doycheva, Veneta Tabakova-Komsalova, Ivan Stoyanov and Iliya Nedelchev
Future Internet 2025, 17(3), 112; https://doi.org/10.3390/fi17030112 - 3 Mar 2025
Cited by 2 | Viewed by 1978
Abstract
Plovdiv faces significant air pollution challenges due to geographic, climatic, and industrial factors, making accurate air quality assessment critical. This study presents a hybrid multi-agent platform that integrates symbolic and sub-symbolic artificial intelligence to improve the reliability of air quality monitoring. The platform features a BDI agent, developed using JaCaMo, for processing real-time sensor measurements and a ReAct agent, implemented with LangChain, to incorporate external data sources and perform advanced analytics. By combining these AI approaches, the platform enhances data integration, detects anomalies, and resolves discrepancies between conflicting air quality reports. Furthermore, its scalable and adaptable architecture lays the foundation for future advancements in environmental monitoring. This research represents the first stage in developing an AI-powered system that supports more objective and data-driven decision-making for air quality management in Plovdiv. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
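The symbolic/sub-symbolic division of labor described above, a rule layer screening sensor readings and a ReAct-style agent consulted only on anomalies, can be sketched as follows. The limit values, the 20% agreement tolerance, and both function names are illustrative assumptions, not the platform's actual JaCaMo or LangChain code.

```python
LIMITS = {"PM10": (0, 50), "NO2": (0, 40)}  # assumed legal ranges, for illustration

def bdi_check(reading, limits):
    """Symbolic (BDI-style) rule layer: flag pollutants outside legal limits."""
    return {p: v for p, v in reading.items()
            if not (limits[p][0] <= v <= limits[p][1])}

def react_consult(anomalies, external):
    """Stand-in for the ReAct agent: cross-check each flagged pollutant
    against an external data source. It runs only when the rule layer
    fires, keeping expensive LLM-backed analysis off the hot path."""
    verdicts = {}
    for pollutant, value in anomalies.items():
        ext = external.get(pollutant)
        if ext is None:
            verdicts[pollutant] = "unverified"
        elif abs(ext - value) / value <= 0.2:   # sources agree within 20%
            verdicts[pollutant] = "confirmed"
        else:
            verdicts[pollutant] = "discrepancy"  # conflicting reports to resolve
    return verdicts

reading = {"PM10": 80.0, "NO2": 30.0}   # NO2 in range, PM10 not
anomalies = bdi_check(reading, LIMITS)
print(react_consult(anomalies, {"PM10": 76.0}))  # {'PM10': 'confirmed'}
```

The "discrepancy" branch corresponds to the abstract's resolution of conflicting air quality reports: when the local sensor and the external source disagree, the case is surfaced rather than silently averaged.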
