Topic Editors

Prof. Dr. Bolong Zheng
School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan 430070, China
Dr. Qing Xie
School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan 430070, China
Dr. You Li
School of Management, Wuhan University of Technology, Wuhan 430070, China

Advanced Development and Applications of AI-Generated Content (AIGC)

Abstract submission deadline: 31 October 2026
Manuscript submission deadline: 31 December 2026
Viewed by 9674

Topic Information

Dear Colleagues,

The rapid evolution of Artificial Intelligence Generated Content (AIGC), particularly driven by large-scale multimodal models, is transforming the way we create, interact with, and consume digital content. From technical architectures to user-centric applications, AIGC has shown immense potential across a wide range of domains—such as automated media production, smart design tools, Artificial Intelligence (AI)-powered content platforms, and even digital behavioral interactions. This Topic seeks to explore both the technical foundations and societal implications of AIGC. We aim to collect contributions that not only propose novel algorithms, models, and systems, but also analyze how these technologies impact human behavior, communication, and digital culture. We welcome submissions from various perspectives, including engineering, computer science, information systems, behavioral sciences, and interdisciplinary studies.

Topics of interest include, but are not limited to, the following:

  • New architectures and frameworks for AIGC systems;
  • Multimodal content generation: text, image, audio, video, and 3D;
  • Development and deployment of large-scale multimodal foundation models;
  • Human–AI collaboration and user perception in AIGC environments;
  • AIGC applications in engineering, media, education, and intelligent systems;
  • Behavioral and psychological impacts of interacting with AI-generated content;
  • Information dissemination, trust, and ethics in AI-powered content ecosystems;
  • Privacy, security, and copyright challenges in AIGC.

We particularly encourage work that bridges theory and practice, technology and society, and algorithms and applications. Researchers and practitioners from academia, industry, and the public sector are warmly invited to contribute.

Prof. Dr. Bolong Zheng
Dr. Qing Xie
Dr. You Li
Topic Editors

Keywords

  • AI-generated content (AIGC)
  • multimodal large model
  • human–AI interaction
  • digital creativity and content generation
  • ethics and societal impact of AI

Participating Journals

Journal Name                              Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
AI (ai)                                   5.0            6.9        2020           19.2 days                CHF 1800
Applied Sciences (applsci)                2.5            5.5        2011           16 days                  CHF 2400
Big Data and Cognitive Computing (BDCC)   4.4            9.8        2017           23.1 days                CHF 1800
Electronics (electronics)                 2.6            6.1        2012           16.4 days                CHF 2400
Information (information)                 2.9            6.5        2010           20.9 days                CHF 1800

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (5 papers)

21 pages, 1536 KB  
Article
Detection of LLM-Generated Text vs. Human Text via DeepSeek-R1 Multi-Feature Fusion
by Xuan Bu, Minghu Tang, Junjie Wang, Jiayi Zhang and Peng Luo
Information 2026, 17(4), 320; https://doi.org/10.3390/info17040320 - 25 Mar 2026
Viewed by 383
Abstract
The development of generative artificial intelligence technology has brought convenience to many industries, but it has also created confusion. Now that content generated by large language models closely resembles genuine human writing, many fields (for example, the vetting of graduation theses in universities) face the challenge of quickly determining whether a text was written by a human or generated by a large language model. Building on the DeepSeek-R1 language model, this paper combines natural-language features with a judgment mechanism to detect text generated by large language models. Experimental results show improved accuracy over conventional methods on the Reuters, WP, and HC3 datasets.
21 pages, 4290 KB  
Article
Information Modeling of Asymmetric Aesthetics Using DCGAN: A Data-Driven Approach to the Generation of Marbling Art
by Muhammed Fahri Unlersen and Hatice Unlersen
Information 2026, 17(1), 94; https://doi.org/10.3390/info17010094 - 15 Jan 2026
Viewed by 670
Abstract
Traditional Turkish marbling (Ebru) art is an intangible cultural heritage characterized by highly asymmetric, fluid, and non-reproducible patterns, making its long-term preservation and large-scale dissemination challenging. It is highly sensitive to environmental conditions, making it enormously difficult to mass produce while maintaining its original aesthetic qualities. A data-driven generative model is therefore required to create unlimited, high-fidelity digital surrogates that safeguard this UNESCO heritage against physical loss and enable large-scale cultural applications. This study introduces a deep generative modeling framework for the digital reconstruction of traditional Turkish marbling (Ebru) art using a Deep Convolutional Generative Adversarial Network (DCGAN). A dataset of 20,400 image patches, systematically derived from 17 original marbling works, was used to train the proposed model. The framework aims to mathematically capture the asymmetric, fluid, and stochastic nature of Ebru patterns, enabling the reproduction of their aesthetic structure in a digital medium. The generated images were evaluated using multiple quantitative and perceptual metrics, including Fréchet Inception Distance (FID), Kernel Inception Distance (KID), Learned Perceptual Image Patch Similarity (LPIPS), and PRDC-based indicators (Precision, Recall, Density, Coverage). For experimental validation, the proposed DCGAN framework is additionally compared against a vanilla GAN baseline trained under identical conditions, highlighting the advantages of convolutional architectures for modeling marbling textures. The results show that the DCGAN model achieved a high level of realism and diversity without mode collapse or overfitting, producing images that were perceptually close to authentic marbling works. In addition to the quantitative evaluation, expert qualitative assessment by a traditional Ebru artist confirmed that the model reproduced the organic textures, color dynamics, and asymmetric compositional characteristics of real marbling art. The proposed approach demonstrates the potential of deep generative models for the digital preservation, dissemination, and reinterpretation of intangible cultural heritage recognized by UNESCO.
32 pages, 63015 KB  
Article
Can AI See the Unseen? Measuring the Perception Gap for Tibetan Cultural Symbols in AI-Generated Art
by Yuhan Liu, Yiran Qiao, Anshu Hu, Yongjian Liu and Lihua Bai
Electronics 2026, 15(1), 15; https://doi.org/10.3390/electronics15010015 - 19 Dec 2025
Viewed by 974
Abstract
Bias and hallucinations involving low-resource cultural artefacts significantly impede text-to-image generation models' ability to understand and disseminate them. Focusing on Tibetan culture as a Chinese minority culture, we produced a children's picture book through two methods: AI generation and human illustration. Eye-tracking experiments were employed to investigate participants' implicit attitudes, aesthetic biases, and cultural perceptions towards these two sources. The results revealed that (1) the hand-drawn group demonstrated higher fidelity to Tibetan culture, exhibiting a positive aesthetic calibration effect in cultural adaptability, reflected in viewers' longer attention to cultural-symbol details; and (2) the AI-generated group elicited greater viewer interest and emotional engagement through its asymmetric color palettes, especially in color richness and stylistic rendering, and achieved professional-level compositional maturity in multi-character scene generation. This study provides empirical evidence to inform the division of labor between humans and AI in children's book illustration and explores potential models for future human–AI collaboration.

18 pages, 3027 KB  
Article
Domain-Specialized Large Language Model for Corrosion Analysis: Construction and Evaluation of Corr-Lora-RAG
by Weitong Wu, Di Xu, Liangan Liu, Bingqin Wang, Yadi Zhao, Xuequn Cheng and Xiaogang Li
Appl. Sci. 2025, 15(16), 9226; https://doi.org/10.3390/app15169226 - 21 Aug 2025
Viewed by 1468
Abstract
This study proposes a large language model, Corr-Lora-RAG, designed to address the complexity and uncertainty inherent in corrosion data. A dedicated corrosion knowledge database (CKD) was constructed, and dataset generation code was provided to enhance the model's reproducibility and adaptability. Based on the Qwen2.5-7B model, the Corr-Lora model was developed by integrating prompt engineering and low-rank adaptation (LoRA) supervised fine-tuning (SFT) techniques, thereby improving the understanding and expression of domain-specific knowledge in the field of corrosion. Furthermore, the Corr-Lora-RAG model was built using retrieval-augmented generation (RAG) technology, enabling dynamic access to external knowledge. Experimental results demonstrate that the proposed model outperforms baseline models in terms of accuracy, completeness, and domain relevance, and exhibits knowledge generation capabilities comparable to those of large-scale language models under limited computational resources. This approach provides an intelligent solution for corrosion risk assessment, standards compliance analysis, and protective strategy formulation, and offers a valuable reference for the development of specialized language models in other engineering fields.
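The retrieval half of a pipeline like Corr-Lora-RAG can be illustrated without the LoRA-tuned model itself: rank knowledge-base passages against a query and prepend the top hits to the prompt. The bag-of-words cosine scoring and the toy corrosion snippets below are stand-ins for the paper's actual CKD and retriever:

```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    """Tokenize to lowercase terms and count them (bag of words)."""
    return Counter(re.findall(r"[a-z0-9.]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank knowledge-base entries by similarity to the query; in a full
    RAG pipeline the top-k passages would be injected into the LLM prompt."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]
```

Production RAG systems typically swap the bag-of-words scorer for dense embeddings and a vector index, but the retrieve-then-prompt control flow is the same.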

26 pages, 1501 KB  
Article
A Comparative Performance Analysis of Locally Deployed Large Language Models Through a Retrieval-Augmented Generation Educational Assistant Application for Textual Data Extraction
by Amitabh Mishra and Nagaraju Brahmanapally
AI 2025, 6(6), 119; https://doi.org/10.3390/ai6060119 - 6 Jun 2025
Cited by 1 | Viewed by 4468
Abstract
Background: Rapid advancements in large language models (LLMs) have significantly enhanced Retrieval-Augmented Generation (RAG) techniques, leading to more accurate and context-aware information retrieval systems. Methods: This article presents the creation of a RAG-based chatbot tailored for university course catalogs, aimed at answering queries related to course details and other essential academic information, and investigates its performance by testing it on several locally deployed large language models. By leveraging multiple LLM architectures, we evaluate the performance of the models under test in terms of context length, embedding size, computational efficiency, and relevance of responses. Results: The experimental analysis obtained by this research, which builds on recent comparative studies, reveals that while larger models achieve higher relevance scores, they incur greater response times than smaller, more efficient models. Conclusions: The findings underscore the importance of balancing accuracy and efficiency for real-time educational applications. Overall, this work contributes to the field by offering insights into optimal RAG configurations and practical guidelines for deploying AI-powered educational assistants.
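The latency-versus-relevance comparison this study performs across locally deployed models can be sketched as a small benchmarking harness. The model callables and the relevance scorer below are hypothetical placeholders; the paper's actual models and scoring method are not reproduced here:

```python
import time

def benchmark(models: dict, queries: list) -> dict:
    """Time each model callable over a query set and pair mean latency
    with a caller-supplied relevance score for each answer.

    models maps a name to (answer_fn, score_fn), where answer_fn(query)
    returns a response and score_fn(query, answer) returns a relevance
    value in [0, 1]."""
    results = {}
    for name, (answer_fn, score_fn) in models.items():
        latencies, scores = [], []
        for q in queries:
            t0 = time.perf_counter()
            answer = answer_fn(q)
            latencies.append(time.perf_counter() - t0)
            scores.append(score_fn(q, answer))
        results[name] = {
            "mean_latency_s": sum(latencies) / len(latencies),
            "mean_relevance": sum(scores) / len(scores),
        }
    return results
```

Plugging several local LLM endpoints into `models` yields exactly the accuracy-versus-response-time trade-off curve the article's conclusions turn on.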
