Generative Artificial Intelligence: Systems, Technologies and Applications

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Big Data and Augmented Intelligence".

Deadline for manuscript submissions: 30 April 2026 | Viewed by 4062

Special Issue Editors


Dr. Jim Prentzas
Guest Editor
Department of Education Sciences in Early Childhood, Democritus University of Thrace, 68100 Alexandroupolis, Greece
Interests: artificial intelligence; knowledge representation; artificial intelligence in education; e-learning; generative artificial intelligence

Prof. Dr. Ioannis Hatzilygeroudis
Guest Editor
Professor Emeritus, Department of Computer Engineering and Informatics, University of Patras, 26504 Patras, Greece
Interests: artificial intelligence; knowledge representation; intelligent systems; intelligent e-learning; sentiment analysis

Special Issue Information

Dear Colleagues,

Generative artificial intelligence has become increasingly popular in recent years, enhancing user experiences and advancing many fields. Generative AI systems can produce various forms of data, including text, images, audio, and video. Web-based systems incorporating generative artificial intelligence enable Internet users to take advantage of advanced AI methods. Given the growing popularity of generative AI systems, there are many directions for relevant research in the context of the Internet.

This Special Issue aims to address recent advances in generative AI and how these affect the evolution of the Internet. It welcomes original, unpublished research and review papers concerning all relevant aspects.

Topics of interest include, but are not limited to, the following:

  • New application fields of generative AI;
  • New viewpoints in existing application fields of generative AI;
  • Improving the effectiveness of methods used in generative AI;
  • New AI methods in the context of generative AI systems;
  • Time efficiency and generative AI systems;
  • Improving user interaction with generative AI systems;
  • Security and generative AI systems;
  • Combination of generative AI systems with other AI systems;
  • Explainable AI methods and generative AI;
  • Neural networks and generative AI;
  • Data mining and generative AI;
  • Natural language processing and generative AI.

Dr. Jim Prentzas
Prof. Dr. Ioannis Hatzilygeroudis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • generative artificial intelligence
  • machine learning
  • data mining
  • neural networks
  • natural language processing
  • explainable artificial intelligence
  • combinations of AI methods

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the journal's website.

Published Papers (5 papers)


Research


32 pages, 544 KB  
Article
Explainability, Safety Cues, and Trust in GenAI Advisors: A SEM–ANN Hybrid Study
by Stefanos Balaskas, Ioannis Stamatiou and George Androulakis
Future Internet 2025, 17(12), 566; https://doi.org/10.3390/fi17120566 - 9 Dec 2025
Abstract

GenAI assistants are gradually being integrated into daily tasks and learning, but their uptake is contingent on perceptions of credibility and safety no less than on their capabilities per se. The current study hypothesizes and tests a two-route model in which two interface-level constructs, perceived transparency (PT) and perceived safety/guardrails (PSG), influence behavioral intention (BI) both directly and indirectly via two socio-cognitive mediators, trust in automation (TR) and psychological reactance (RE). We also formulate two evaluative lenses, perceived usefulness (PU) and perceived risk (PR). Employing survey data from 365 respondents and partial least squares structural equation modeling (PLS-SEM) with bootstrap techniques in SmartPLS 4, we found that PT is the most influential factor for BI, supported by TR, with some contribution from PSG and PU but none from PR or RE. Mediation testing revealed significant partial mediations, with PT exhibiting an indirect-only relationship via TR, while reactance-driven paths were nonsignificant. To uncover non-linearity and non-compensatory effects, a Stage 2 multilayer perceptron was implemented, confirming the SEM ranking and complemented by a variable-importance and sensitivity analysis. In practical terms, the study's findings support the primacy of explanatory clarity and of clear, rigorously enforced rules, with usefulness subordinate to credibility once the latter is achieved. The integration of SEM and ANN improves both explanation and prediction, providing valuable insights on the implementation of GenAI for policy, managerial, and educational decision-makers.
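The Stage 2 variable-importance and sensitivity analysis described in this abstract can be illustrated with a permutation-importance check on a small neural network. Everything below is a hypothetical sketch: the construct names (PT, TR, PR), the synthetic effect sizes, and the tiny NumPy MLP are invented stand-ins for the study's survey data and multilayer perceptron.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survey-style data (hypothetical): behavioral intention driven
# strongly by PT and TR, only weakly by PR.
n = 500
X = rng.normal(size=(n, 3))            # columns: PT, TR, PR
names = ["PT", "TR", "PR"]
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * X[:, 2] + 0.1 * rng.normal(size=n)

# Minimal one-hidden-layer MLP trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.1
for _ in range(3000):
    h, pred = forward(X)
    err = pred - y[:, None]                      # (n, 1) residuals
    gW2 = h.T @ err / n; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)               # backprop through tanh
    gW1 = X.T @ dh / n; gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def mse(X):
    return float(np.mean((forward(X)[1].ravel() - y) ** 2))

# Permutation importance: shuffling a predictor column breaks its relation
# to the target; the resulting error increase ranks the predictors.
base = mse(X)
importance = {}
for j, name in enumerate(names):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[name] = mse(Xp) - base

print(sorted(importance, key=importance.get, reverse=True))
```

With this synthetic data the shuffled PT and TR columns degrade the fit far more than PR, reproducing the kind of importance ranking the SEM-ANN comparison relies on.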

19 pages, 4893 KB  
Article
LLMs in Staging: An Orchestrated LLM Workflow for Structured Augmentation with Fact Scoring
by Giuseppe Trimigno, Gianfranco Lombardo, Michele Tomaiuolo, Stefano Cagnoni and Agostino Poggi
Future Internet 2025, 17(12), 535; https://doi.org/10.3390/fi17120535 - 24 Nov 2025
Viewed by 303
Abstract

Retrieval-augmented generation (RAG) enriches prompts with external knowledge, but it often relies on additional infrastructure that may be impractical in resource-constrained or offline settings. In addition, updating the internal knowledge of a language model through retraining is costly and inflexible. To address these limitations, we propose an explainable and structured prompt augmentation pipeline that enhances inputs using pre-trained models and rule-based extractors, without requiring external sources. We describe this approach as an orchestrated LLM workflow: a structured sequence in which lightweight LLM modules assume specialized roles. Specifically, (1) an extractor module identifies factual triples from input prompts by combining dependency parsing with a rule-based extraction algorithm; (2) a scorer module, based on a generic lightweight LLM, evaluates the importance of each triple via its self-attention patterns, leveraging internal beliefs to promote explainability and trustworthy cooperation with the downstream model; and (3) a performer module processes the augmented prompt for downstream tasks in supervised fine-tuning or zero-shot settings. Much like in a theater staging, each module operates transparently behind the scenes to support and elevate the performer's final output. We evaluate this approach across multiple performer architectures (encoder-only, encoder-decoder, and decoder-only) and NLP tasks (multiple-choice QA, open-book QA, and summarization). Our results show that this structured augmentation with scored facts yields consistent improvements over baseline prompting: up to a 28.78% accuracy improvement for multiple-choice QA, up to a 9.42% BLEURT improvement for open-book QA, and up to an 18.14% ROUGE-L improvement for summarization. By decoupling knowledge scoring from task execution, our method provides a practical, interpretable, and low-cost alternative to RAG in static or knowledge-limited environments.
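The extractor-scorer-performer staging described in this abstract can be sketched end to end in a few lines. The sketch below is purely illustrative: a toy regex extractor stands in for the paper's dependency-parsing extractor, and a word-frequency heuristic stands in for the attention-based scorer; the function names and example text are invented.

```python
import re
from collections import Counter

def extract_triples(text):
    """Toy extractor: match 'X is/has ... Y' style clauses as (s, r, o) triples."""
    triples = []
    for sent in re.split(r"[.!?]", text):
        m = re.search(r"(\w[\w ]*?)\s+(is|are|was|were|has|have)\s+(\w[\w ]*)",
                      sent.strip())
        if m:
            triples.append((m.group(1).strip(), m.group(2), m.group(3).strip()))
    return triples

def score_triples(triples, context):
    """Stand-in scorer: rank triples by how often their terms occur in context
    (the paper uses an LLM's self-attention patterns instead)."""
    counts = Counter(re.findall(r"\w+", context.lower()))
    return sorted(triples,
                  key=lambda t: -sum(counts[w.lower()]
                                     for part in t for w in part.split()))

def augment_prompt(question, context, k=2):
    """Prepend the k highest-scored facts as structured context for the performer."""
    facts = score_triples(extract_triples(context), context)[:k]
    fact_lines = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return f"Known facts:\n{fact_lines}\n\nQuestion: {question}"

context = ("Paris is the capital of France. The Seine is a river. "
           "France has many rivers. Paris has museums.")
prompt = augment_prompt("What is the capital of France?", context)
print(prompt)
```

The augmented prompt is then handed to the performer model unchanged, which is what lets the scorer's ranking stay inspectable and the performer remain a drop-in component.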

22 pages, 2537 KB  
Article
GraphRAG-Enhanced Dialogue Engine for Domain-Specific Question Answering: A Case Study on the Civil IoT Taiwan Platform
by Hui-Hung Yu, Wei-Tsun Lin, Chih-Wei Kuan, Chao-Chi Yang and Kuan-Min Liao
Future Internet 2025, 17(9), 414; https://doi.org/10.3390/fi17090414 - 10 Sep 2025
Viewed by 883
Abstract

The proliferation of sensor technology has led to an explosion in data volume, making the retrieval of specific information from large repositories increasingly challenging. While Retrieval-Augmented Generation (RAG) can enhance Large Language Models (LLMs), it often lacks precision in specialized domains. Taking the Civil IoT Taiwan Data Service Platform as a case study, this study addresses this gap by developing a dialogue engine enhanced with a GraphRAG framework, aiming to provide accurate, context-aware responses to user queries. Our method involves constructing a domain-specific knowledge graph by extracting entities (e.g., ‘Dataset’, ‘Agency’) and their relationships from the platform’s documentation. For query processing, the system interprets natural language inputs, identifies corresponding paths within the knowledge graph, and employs a recursive self-reflection mechanism to ensure that the final answer aligns with the user’s intent. The final answer is transformed into natural language using the TAIDE (Trustworthy AI Dialogue Engine) model. The implemented framework successfully translates complex, multi-constraint questions into executable graph queries, moving beyond keyword matching to navigate semantic pathways. This results in highly accurate and verifiable answers grounded in the source data. In conclusion, this research validates that a GraphRAG-enhanced engine is a robust solution for building intelligent dialogue systems for specialized data platforms, significantly improving the precision and usability of information retrieval and offering a replicable model for other knowledge-intensive domains.
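The step this abstract highlights, compiling a multi-constraint question into graph traversals rather than keyword matches, can be miniaturized as follows. The entity names, relation types, and datasets below are invented for illustration and are not the platform's actual schema.

```python
# Hypothetical miniature of the GraphRAG query step: the knowledge graph is a
# list of (subject, relation, object) triples, and a multi-constraint question
# becomes an intersection of one traversal per constraint.
triples = [
    ("AirQualityDataset", "PROVIDED_BY", "EPA_Taiwan"),
    ("AirQualityDataset", "ABOUT", "air quality"),
    ("RainfallDataset", "PROVIDED_BY", "WeatherBureau"),
    ("RainfallDataset", "ABOUT", "rainfall"),
    ("WaterLevelDataset", "PROVIDED_BY", "WaterResourcesAgency"),
    ("WaterLevelDataset", "ABOUT", "water level"),
]

def subjects_with(relation, obj):
    """All subjects connected to obj through the given relation type."""
    return {s for s, r, o in triples if r == relation and o == obj}

def query(constraints):
    """Answer a multi-constraint question by intersecting the subject sets
    reached from each (relation, object) constraint."""
    result = None
    for relation, obj in constraints:
        matches = subjects_with(relation, obj)
        result = matches if result is None else result & matches
    return result or set()

# "Which datasets about air quality are provided by EPA_Taiwan?"
answer = query([("ABOUT", "air quality"), ("PROVIDED_BY", "EPA_Taiwan")])
print(answer)
```

In the full system the matched subjects would then be verbalized by the TAIDE model; the point of the sketch is only that each answer is grounded in explicit graph paths, so it remains verifiable against the source triples.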

42 pages, 1300 KB  
Article
A Hybrid Human-AI Model for Enhanced Automated Vulnerability Scoring in Modern Vehicle Sensor Systems
by Mohamed Sayed Farghaly, Heba Kamal Aslan and Islam Tharwat Abdel Halim
Future Internet 2025, 17(8), 339; https://doi.org/10.3390/fi17080339 - 28 Jul 2025
Viewed by 1354
Abstract

Modern vehicles are rapidly transforming into interconnected cyber–physical systems that rely on advanced sensor technologies and pervasive connectivity to support autonomous functionality. Yet, despite this evolution, standardized methods for quantifying cybersecurity vulnerabilities across critical automotive components remain scarce. This paper introduces a novel hybrid model that integrates expert-driven insights with generative AI tools to adapt and extend the Common Vulnerability Scoring System (CVSS) specifically for autonomous vehicle sensor systems. Following a three-phase methodology, the study conducted a systematic review of 16 peer-reviewed sources (2018–2024), applied CVSS version 4.0 scoring to 15 representative attack types, and evaluated four freely available generative AI models—ChatGPT, DeepSeek, Gemini, and Copilot—on a dataset of 117 annotated automotive-related vulnerabilities. Expert validation by 10 domain professionals reveals that Light Detection and Ranging (LiDAR) sensors are the most vulnerable (9 distinct attack types), followed by Radio Detection and Ranging (radar) sensors (8) and ultrasonic sensors (6). Network-based attacks dominate (104 of 117 cases), with 92.3% of the dataset exhibiting low attack complexity and 82.9% requiring no user interaction. The most severe attack vectors, as scored by experts using CVSS, include eavesdropping (7.19), Sybil attacks (6.76), and replay attacks (6.35). Evaluation of large language models (LLMs) showed that DeepSeek achieved an F1 score of 99.07% on network-based attacks, while all models struggled with minority classes such as high complexity (e.g., ChatGPT F1 = 0%, Gemini F1 = 15.38%). The findings highlight the potential of integrating expert insight with AI efficiency to deliver more scalable and accurate vulnerability assessments for modern vehicular systems. This study offers actionable insights for vehicle manufacturers and cybersecurity practitioners, aiming to inform strategic efforts to fortify sensor integrity, optimize network resilience, and ultimately enhance the cybersecurity posture of next-generation autonomous vehicles.
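The minority-class collapse this abstract reports (e.g., an F1 of 0% on the rare high-complexity class) falls directly out of the per-class F1 definition. The sketch below computes per-class F1 from scratch on invented labels: a model that always predicts the majority class scores zero on the rare class no matter how high its overall accuracy looks. The class names and counts are illustrative, not the paper's data.

```python
def f1_per_class(y_true, y_pred):
    """Per-class F1 = harmonic mean of precision and recall for each class."""
    scores = {}
    for cls in set(y_true):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[cls] = (2 * precision * recall / (precision + recall)
                       if precision + recall else 0.0)
    return scores

# Imbalanced ground truth: mostly low attack complexity, one high-complexity case.
y_true = ["low"] * 9 + ["high"] * 1
y_pred = ["low"] * 10            # the model always predicts the majority class

scores = f1_per_class(y_true, y_pred)
print(scores)                     # high-complexity F1 collapses to 0.0
```

This is why the paper's evaluation reports per-class F1 rather than a single accuracy figure: with 92.3% of the dataset at low complexity, a majority-class predictor would look strong on aggregate metrics while being useless on exactly the rare, severe cases.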

Review


33 pages, 708 KB  
Review
A Literature Review of Personalized Large Language Models for Email Generation and Automation
by Rodrigo Novelo, Rodrigo Rocha Silva and Jorge Bernardino
Future Internet 2025, 17(12), 536; https://doi.org/10.3390/fi17120536 - 24 Nov 2025
Viewed by 732
Abstract

In 2024, a total of 361 billion emails were sent and received by businesses and consumers each day. Email remains the preferred method of communication for work-related matters, with knowledge workers spending two to five hours a day managing their inboxes. The advent of Large Language Models (LLMs) has introduced new possibilities for personalized email automation, offering context-aware and stylistically adaptive responses. However, achieving effective personalization introduces technical, ethical, and security challenges. This survey presents a systematic review of 32 papers published between 2021 and 2025, identified using the PRISMA methodology across Google Scholar, IEEE Xplore, and the ACM Digital Library. Our analysis reveals that state-of-the-art email assistants integrate retrieval-augmented generation (RAG) and parameter-efficient fine-tuning (PEFT) with feedback-driven refinement, supported by user-centric interfaces and privacy-aware architectures. Nevertheless, these advances also expose systems to new risks, such as Trojan plugins and adversarial prompt injections, highlighting the importance of integrated security frameworks. This review provides a structured approach to advancing personalized LLM-based email systems, identifying persistent research gaps in adaptive learning, benchmark development, and ethical design. It is intended to guide researchers and developers looking to create secure, efficient, and human-aligned communication assistants.
