Artificial Intelligence in Graphics and Images

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 October 2025 | Viewed by 3214

Special Issue Editors


Dr. Tingting Dan
Guest Editor
Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
Interests: geometric deep learning; graph neural network; image processing; medical image analysis

Dr. Yuhang Yao
Guest Editor
Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Interests: graph neural network; federated learning; generative AI

Special Issue Information

Dear Colleagues,

We cordially invite you to submit original research or review articles to Electronics for the Special Issue entitled “Artificial Intelligence in Graphics and Images”.

Artificial intelligence technologies, particularly generative AI, have greatly advanced graphics and imaging and are now widely used in real-world applications. Insights from research can guide and further optimize their deployment, driving innovation and business opportunities in these fields. For example, graph neural networks (GNNs) are a family of neural network models designed specifically to harness the structure and dependencies inherent in graph-structured data, revolutionizing the way we analyze, model, and make predictions in complex networked structures.
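For readers less familiar with graph learning, the following minimal sketch shows a single, generic graph-convolution layer in PyTorch: neighbour features are averaged through a normalized adjacency matrix and then passed through a learnable projection. It is an illustration of the general idea only, not code from any specific work.

```python
# Minimal, generic graph-convolution layer (illustrative sketch only).
import torch
import torch.nn as nn


class SimpleGraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: dense (N, N) adjacency matrix; add self-loops and row-normalize
        # so each node also retains part of its own representation.
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        adj_norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        # Aggregate neighbour features, then apply the learnable projection.
        return torch.relu(self.linear(adj_norm @ x))


# Toy usage: 4 nodes with 8-dimensional features on a small ring graph.
x = torch.randn(4, 8)
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
out = SimpleGraphConv(8, 16)(x, adj)  # shape: (4, 16)
```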

This Special Issue aims to gather the latest research on AI advances for graphics and images, including generative AI for image generation, the application of AI to traditional graphics research, and compound AI systems for handling complex graphics scenarios in practice.

Potential topics include, but are not limited to, the following:

  • Deep learning on graphs (graph convolutions, graph attention networks, graph autoencoders, and graph spatial–temporal networks);
  • Graph datasets and benchmarks;
  • Compound AI systems for graphics;
  • Computational imaging;
  • Geometric modeling and analysis;
  • Graphics applications and systems;
  • Image generation;
  • Learning-based vision;
  • Medical and biological vision, cell microscopy;
  • Multimodal learning;
  • Optimization methods;
  • Physics-based vision;
  • Robustness, explainability, and fairness;
  • Video generation;
  • Vision + graphics;
  • Visual learning and recognition.

Dr. Tingting Dan
Dr. Yuhang Yao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in clear English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • generative AI
  • multimodal learning
  • computational imaging
  • computational graphics
  • artificial intelligence
  • deep learning
  • machine learning system
  • graph neural network

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (4 papers)


Research

22 pages, 5441 KiB  
Article
High-Dimensional Attention Generative Adversarial Network Framework for Underwater Image Enhancement
by Shasha Tian, Adisorn Sirikham, Jessada Konpang and Chuyang Wang
Electronics 2025, 14(6), 1203; https://doi.org/10.3390/electronics14061203 - 19 Mar 2025
Viewed by 247
Abstract
In recent years, underwater image enhancement (UIE) processing technology has developed rapidly, and underwater optical imaging technology has shown great advantages in the intelligent operation of underwater robots. In underwater environments, light absorption and scattering often cause seabed images to be blurry and distorted in color. Therefore, acquiring high-definition underwater imagery with superior quality holds essential significance for advancing the exploration and development of marine resources. In order to resolve the problems associated with chromatic aberration, insufficient exposure, and blurring in underwater images, a high-dimensional attention generative adversarial network framework for underwater image enhancement (HDAGAN) is proposed. The introduced method is composed of a generator and a discriminator. The generator comprises an encoder and a decoder. In the encoder, a channel attention residual module (CARM) is designed to capture both semantic features and contextual details from visual data, incorporating multi-scale feature extraction layers and multi-scale feature fusion layers. Furthermore, in the decoder, to refine the feature representation of latent vectors for detail recovery, a strengthen–operate–subtract module (SOSM) is introduced to strengthen the model’s capability to comprehend the picture’s geometric structure and semantic information. Additionally, in the discriminator, a multi-scale feature discrimination module (MFDM) is proposed, which aids in achieving more precise discrimination. Experimental findings demonstrate that the novel approach significantly outperforms state-of-the-art UIE techniques, delivering enhanced outcomes with higher visual appeal. Full article
(This article belongs to the Special Issue Artificial Intelligence in Graphics and Images)
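The CARM, SOSM, and MFDM above are specific to this paper; purely as an illustration of the kind of building block a channel attention residual module resembles, the hedged sketch below implements a generic squeeze-and-excitation-style channel-attention residual block in PyTorch. All design choices (reduction ratio, kernel sizes) are assumptions made for illustration, not the authors' implementation.

```python
# Generic channel-attention residual block (squeeze-and-excitation style).
# Illustrative sketch only; not the paper's CARM.
import torch
import torch.nn as nn


class ChannelAttentionResidualBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Squeeze: global average pooling; excite: per-channel weights in (0, 1).
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.conv(x)
        weights = self.attention(features)  # (B, C, 1, 1) channel weights
        return x + features * weights       # residual connection


# Toy usage on a 64-channel feature map.
block = ChannelAttentionResidualBlock(64)
y = block(torch.randn(1, 64, 32, 32))       # shape unchanged: (1, 64, 32, 32)
```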

16 pages, 716 KiB  
Article
Efficient Graph Representation Learning by Non-Local Information Exchange
by Ziquan Wei, Tingting Dan, Jiaqi Ding and Guorong Wu
Electronics 2025, 14(5), 1047; https://doi.org/10.3390/electronics14051047 - 6 Mar 2025
Viewed by 469
Abstract
Graphs are an effective data structure for characterizing ubiquitous connections as well as evolving behaviors that emerge in inter-wined systems. Limited by the stereotype of node-to-node connections, learning node representations is often confined in a graph diffusion process where local information has been excessively aggregated, as the random walk of graph neural networks (GNN) explores far-reaching neighborhoods layer-by-layer. In this regard, tremendous efforts have been made to alleviate feature over-smoothing issues such that current backbones can lend themselves to be used in a deep network architecture. However, compared to designing a new GNN, less attention has been paid to underlying topology by graph re-wiring, which mitigates not only flaws of the random walk but also the over-smoothing risk incurred by reducing unnecessary diffusion in deep layers. Inspired by the notion of non-local mean techniques in the area of image processing, we propose a non-local information exchange mechanism by establishing an express connection to the distant node, instead of propagating information along the (possibly very long) original pathway node-after-node. Since the process of seeking express connections throughout a graph can be computationally expensive in real-world applications, we propose a re-wiring framework (coined the express messenger wrapper) to progressively incorporate express links in a non-local manner, which allows us to capture multi-scale features without using a very deep model; our approach is thus free of the over-smoothing challenge. We integrate our express messenger wrapper with existing GNN backbones (either using graph convolution or tokenized transformer) and achieve a new record on the Roman-empire dataset as well as in terms of SOTA performance on both homophilous and heterophilous datasets. Full article
(This article belongs to the Special Issue Artificial Intelligence in Graphics and Images)
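The express messenger wrapper itself is defined in the paper; the hedged sketch below only illustrates the underlying re-wiring idea in a minimal form: adding "express" edges between non-adjacent nodes whose features are similar, so that information can bypass long node-to-node paths. The cosine-similarity criterion and top-k selection are illustrative assumptions, not the authors' procedure.

```python
# Illustrative graph re-wiring: connect each node to its most feature-similar
# non-neighbours with "express" edges. Sketch only; not the paper's method.
import torch


def add_express_edges(adj: torch.Tensor, x: torch.Tensor, k: int = 1) -> torch.Tensor:
    """Return a copy of adj with k extra undirected edges per node, linking
    each node to its most feature-similar non-neighbours."""
    n = adj.size(0)
    x_norm = torch.nn.functional.normalize(x, dim=1)
    sim = x_norm @ x_norm.t()                            # cosine similarity (N, N)
    # Exclude self-pairs and pairs that are already connected.
    sim = sim.masked_fill(torch.eye(n, dtype=torch.bool), float("-inf"))
    sim = sim.masked_fill(adj > 0, float("-inf"))
    rewired = adj.clone()
    topk = sim.topk(k, dim=1).indices                    # k best candidates per node
    for i in range(n):
        rewired[i, topk[i]] = 1.0
        rewired[topk[i], i] = 1.0                        # keep the graph undirected
    return rewired


# Toy usage: a 5-node path graph with random 16-dimensional node features.
adj = torch.zeros(5, 5)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
adj_express = add_express_edges(adj, torch.randn(5, 16), k=1)
```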

20 pages, 908 KiB  
Article
Mining Nuanced Weibo Sentiment with Hierarchical Graph Modeling and Self-Supervised Learning
by Chuyang Wang, Jessada Konpang, Adisorn Sirikham and Shasha Tian
Electronics 2025, 14(1), 41; https://doi.org/10.3390/electronics14010041 - 26 Dec 2024
Viewed by 733
Abstract
Weibo sentiment analysis has gained prominence, particularly during the COVID-19 pandemic, as a means to monitor public emotions and detect emerging mental health trends. However, challenges arise from Weibo’s informal language, nuanced expressions, and stylistic features unique to social media, which complicate the accurate interpretation of sentiments. Existing models often fall short, relying on text-based methods that inadequately capture the rich emotional texture of Weibo posts, and are constrained by single loss functions that limit emotional depth. To address these limitations, we propose a novel framework incorporating a sentiment graph and self-supervised learning. Our approach introduces a “sentiment graph” that leverages both word-to-post and post-to-post relational connections, allowing the model to capture fine-grained sentiment cues and context-dependent meanings. Enhanced by a gated mechanism within the graph, our model selectively filters emotional signals based on intensity and relevance, improving its sensitivity to subtle variations such as sarcasm. Additionally, a self-supervised objective enables the model to generalize beyond labeled data, capturing latent emotional structures within the graph. Through this integration of sentiment graph and self-supervised learning, our approach advances Weibo sentiment analysis, offering a robust method for understanding the complex emotional landscape of social media. Full article
(This article belongs to the Special Issue Artificial Intelligence in Graphics and Images)
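The sentiment graph and its gate are described above only at a high level; the hedged sketch below shows a generic gated neighbour-aggregation step in PyTorch, illustrating how a learned gate can down-weight low-relevance signals before they are merged into a node's representation. It illustrates the general mechanism only and is not the paper's model.

```python
# Generic gated neighbour aggregation (illustrative sketch only).
import torch
import torch.nn as nn


class GatedAggregation(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbours = (adj @ x) / deg             # mean of neighbour features
        pair = torch.cat([x, neighbours], dim=1)
        g = self.gate(pair)                      # per-dimension gate in (0, 1)
        # Gated neighbour signal is combined with the node's own features.
        return torch.relu(self.update(torch.cat([x, g * neighbours], dim=1)))


# Toy usage: 6 nodes (posts/words) with 32-dimensional embeddings.
x = torch.randn(6, 32)
adj = (torch.rand(6, 6) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()              # symmetrize the toy graph
out = GatedAggregation(32)(x, adj)               # shape: (6, 32)
```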

22 pages, 1696 KiB  
Article
Learning A-Share Stock Recommendation from Stock Graph and Historical Price Simultaneously
by Hanyang Chen, Tian Wang, Jessada Konpang and Adisorn Sirikham
Electronics 2024, 13(22), 4427; https://doi.org/10.3390/electronics13224427 - 12 Nov 2024
Cited by 1 | Viewed by 1074
Abstract
The Chinese stock market, marked by rapid growth and significant volatility, presents unique challenges for investors and analysts. A-share stocks, traded on the Shanghai and Shenzhen exchanges, are crucial to China’s financial system and offer opportunities for both domestic and international investors. Accurate stock recommendation tools are vital for informed decision making, especially given the ongoing regulatory changes and economic reforms in China. Current stock recommendation methods often fall short, as they typically fail to capture the complex inter-company relationships and rely heavily on financial reports, neglecting the potential of unlabeled data and historical price trends. In response, we propose a novel approach that combines graph-based structures with historical price data to develop self-learned stock embeddings for A-share recommendations. Our method leverages self-supervised learning, bypassing the need for human-generated labels and autonomously uncovering latent relationships and patterns within the data. This dual-input strategy enhances the understanding of market dynamics, leading to more accurate stock predictions. Our contributions include a novel framework for label-free stock recommendations with modeling stock connections and pricing information, and empirical evidence demonstrating the robustness and adaptability of our approach in the volatile Chinese stock market. Full article
(This article belongs to the Special Issue Artificial Intelligence in Graphics and Images)
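The paper's architecture is not reproduced here; the hedged sketch below only illustrates the dual-input idea in a minimal form: one branch aggregates features over a stock-relation graph, another summarizes historical prices with a GRU, and an InfoNCE-style objective aligns the two views of each stock without any labels. The specific modules (single mean-aggregation layer, GRU, temperature 0.1) are illustrative assumptions.

```python
# Schematic dual-input stock encoder with a label-free alignment objective.
# Illustrative sketch only; not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualInputStockEncoder(nn.Module):
    def __init__(self, feat_dim: int, price_dim: int, embed_dim: int):
        super().__init__()
        self.graph_proj = nn.Linear(feat_dim, embed_dim)
        self.price_rnn = nn.GRU(price_dim, embed_dim, batch_first=True)

    def forward(self, x, adj, prices):
        # Graph view: mean-aggregate neighbour features, then project.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        graph_emb = torch.relu(self.graph_proj((adj @ x) / deg))
        # Price view: final hidden state of a GRU over the price history.
        _, h = self.price_rnn(prices)            # h: (1, N, embed_dim)
        return graph_emb, h.squeeze(0)


def alignment_loss(graph_emb, price_emb, temperature: float = 0.1):
    """InfoNCE-style objective: each stock's two views should agree more with
    each other than with other stocks, requiring no human labels."""
    g = F.normalize(graph_emb, dim=1)
    p = F.normalize(price_emb, dim=1)
    logits = (g @ p.t()) / temperature           # (N, N) similarity matrix
    targets = torch.arange(g.size(0))            # matching views lie on the diagonal
    return F.cross_entropy(logits, targets)


# Toy usage: 8 stocks, 16-dim static features, 30 days of 4-dim price features.
x, prices = torch.randn(8, 16), torch.randn(8, 30, 4)
adj = ((torch.rand(8, 8) > 0.7).float() + torch.eye(8)).clamp(max=1.0)
model = DualInputStockEncoder(16, 4, 32)
g_emb, p_emb = model(x, adj, prices)
loss = alignment_loss(g_emb, p_emb)              # no labels required
```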
