Fractional-Order Dynamics in AI: Neural Networks and Applications

A special issue of Fractal and Fractional (ISSN 2504-3110). This special issue belongs to the section "Engineering".

Deadline for manuscript submissions: 20 August 2026

Special Issue Editor


Prof. Dr. Eva H. Dulf
Guest Editor
Faculty of Automation and Computer Science, Department of Automation, Technical University of Cluj-Napoca, Memorandumului 28, 400014 Cluj-Napoca, Romania
Interests: fractional calculus; control engineering; biochemical engineering; biomedical engineering

Special Issue Information

Dear Colleagues,

Recent advancements in artificial intelligence (AI) have demonstrated the transformative potential of incorporating complex mathematical frameworks into neural network models. One such framework, fractional-order calculus, offers an enhanced approach for modelling dynamic systems with inherent memory and nonlinearity. Unlike traditional integer-order models, fractional-order systems can capture the long-term memory effects and intricate dynamics observed in many real-world processes. This characteristic positions fractional-order dynamics as a powerful tool for AI applications, enabling more accurate and robust modelling of complex, time-varying data.

This Special Issue aims to explore the integration of fractional-order dynamics within the realm of AI, with a particular focus on their applications in neural networks and other machine learning paradigms. We invite contributions that examine the theoretical foundations of fractional calculus in AI, along with its practical applications in areas such as optimization, signal processing, autonomous systems, and natural language processing. From enhancing the learning efficiency of neural networks to improving the modelling of chaotic or time-dependent systems, fractional-order dynamics present unique opportunities for advancing AI technologies.

We welcome original research articles, reviews, and case studies addressing the following topics:

  • The development and application of fractional-order neural networks;
  • AI-driven optimization using fractional-order models;
  • Fractional calculus for memory-based learning systems;
  • Applications of fractional-order dynamics in signal processing and data analysis;
  • Enhancing deep learning models with fractional-order frameworks;
  • Fractional-order models for autonomous and control systems;
  • Advanced applications of fractional dynamics in natural language processing.

We look forward to your innovative contributions and to advancing this exciting intersection of AI and fractional-order dynamics.

Finally, I would like to thank Alexandru-George Berciu for his valuable work and for assisting me with this Special Issue.

Prof. Dr. Eva H. Dulf
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Fractal and Fractional is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • fractional-order calculus
  • artificial intelligence
  • neural networks
  • machine learning
  • nonlinear dynamics
  • memory effects
  • optimization
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (1 paper)

Research

28 pages, 2125 KB  
Article
FracGrad: A Discretized Riemann–Liouville Fractional Integral Approach to Gradient Accumulation for Deep Learning
by Minhyeok Lee
Fractal Fract. 2025, 9(11), 733; https://doi.org/10.3390/fractalfract9110733 - 13 Nov 2025
Abstract
Gradient accumulation enables training large-scale deep learning models under GPU memory constraints by aggregating gradients across multiple microbatches before parameter updates. Standard gradient accumulation treats all microbatches uniformly through simple averaging, implicitly assuming that all stochastic gradient estimates are equally reliable. This assumption becomes problematic in non-convex optimization, where gradient variance across microbatches is high, causing some gradient estimates to be noisy and less representative of the true descent direction. In this paper, FracGrad is proposed: a simple weighting scheme for gradient accumulation that biases toward recent microbatches via a power-law schedule derived from a discretized Riemann–Liouville integral. Unlike uniform summation, FracGrad reweights each microbatch gradient by $w_i(\alpha) = \frac{(N-i+1)^{\alpha} - (N-i)^{\alpha}}{\sum_{j=1}^{N}\left[(N-j+1)^{\alpha} - (N-j)^{\alpha}\right]}$, controlled by $\alpha \in (0, 1]$. When $\alpha = 1$, standard accumulation is recovered. In experiments on mini-ImageNet with ResNet-18 using up to $N = 32$ accumulation steps, the best FracGrad variant, with $\alpha = 0.1$, improves test accuracy from 16.99% to 31.35% at $N = 16$. Paired t-tests yield $p < 2 \times 10^{-6}$.
(This article belongs to the Special Issue Fractional-Order Dynamics in AI: Neural Networks and Applications)
