Fractional and Fractal Methods in Biomedical Imaging and Time Series Learning

A special issue of Fractal and Fractional (ISSN 2504-3110). This special issue belongs to the section "Optimization, Big Data, and AI/ML".

Deadline for manuscript submissions: 31 August 2026

Special Issue Editors


Guest Editor
The Alan Turing Institute, British Library, 96 Euston Road, London NW1 2DB, UK
Interests: artificial intelligence; computer vision; healthcare AI; robotics; fractional calculus

Guest Editor
School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China
Interests: artificial intelligence; machine learning; robotics; fractional calculus

Guest Editor Assistant
School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China
Interests: human gait analysis; computer vision; medical robotics

Special Issue Information

Dear Colleagues,

Fractional and fractal methods are increasingly used in biomedical research for their ability to handle model complexity, memory effects, and self-similarity—key traits in physiological signals and biological structures. This Special Issue will focus on recent advances at the intersection of fractional calculus, fractal geometry, and machine learning, particularly in biomedical imaging and time series analysis.

We welcome the submission of original research, reviews, and applied studies that introduce new models, algorithms, or learning frameworks using fractional or fractal approaches. Potential topics include the use of fractional differential equations in disease modeling, fractal descriptors in image analysis, fractional-order neural networks in the analysis of biomedical signals, and fractal-based biomarkers in health monitoring.
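As a concrete illustration of one of the topics above, fractal descriptors in image analysis, the classic box-counting estimate of fractal dimension fits in a few lines. This is a minimal sketch for orientation only; the function name and the choice of box sizes are ours, not drawn from any submission:

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting."""
    counts = []
    h, w = img.shape
    for s in sizes:
        # partition the image into s x s boxes and count the non-empty ones
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # slope of log(count) against log(1/size) estimates the dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]
```

For a filled square the slope recovers the Euclidean dimension 2, and for a single line it recovers 1; rougher structures yield non-integer values, which is what makes the estimate useful as a descriptor of biological textures.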

Interdisciplinary contributions spanning applied mathematics, computer vision, biomedical engineering, and AI in healthcare are encouraged; submissions demonstrating real-world impact through clinical validation, public datasets, or open-source tools are especially valued.

Dr. Ziyang Wang
Prof. Dr. Chengjun Wang
Guest Editors

Dr. Jiabao Li
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Fractal and Fractional is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • fractional calculus
  • medical robotic control
  • biomedical signal processing
  • medical image analysis
  • time series machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (4 papers)


Research

22 pages, 38796 KB  
Article
VG-SAM: Visual In-Context Guided SAM for Universal Medical Image Segmentation
by Gang Dai, Qingfeng Wang, Yutao Qin, Gang Wei and Shuangping Huang
Fractal Fract. 2025, 9(11), 722; https://doi.org/10.3390/fractalfract9110722 - 8 Nov 2025
Abstract
Medical image segmentation, driven by the intrinsic fractal characteristics of biological patterns, plays a crucial role in medical image analysis. Recently, universal image segmentation, which aims to build models that generalize robustly to unseen anatomical structures and imaging modalities, has emerged as a promising research direction. To achieve this, previous solutions typically follow the in-context learning (ICL) framework, leveraging segmentation priors from a few labeled in-context references to improve prediction performance on out-of-distribution samples. However, these ICL-based methods often overlook the quality of the in-context set and struggle with capturing intricate anatomical details, thus limiting their segmentation accuracy. To address these issues, we propose VG-SAM, which employs a multi-scale in-context retrieval phase and a visual in-context guided segmentation phase. Specifically, inspired by the hierarchical and self-similar properties in fractal structures, we introduce a multi-level feature similarity strategy to select in-context samples that closely match the query image, thereby ensuring the quality of the in-context samples. In the segmentation phase, we propose to generate multi-granularity visual prompts based on the high-quality priors from the selected in-context set. Following this, these visual prompts, along with the semantic guidance signal derived from the in-context set, are seamlessly integrated into an adaptive fusion module, which effectively guides the Segment Anything Model (SAM) with powerful segmentation capabilities to achieve accurate predictions on out-of-distribution query images. Extensive experiments across multiple datasets demonstrate the effectiveness and superiority of our VG-SAM over the state-of-the-art (SOTA) methods. Notably, under the challenging one-shot reference setting, our VG-SAM surpasses SOTA methods by an average of 6.61% in DSC across all datasets. Full article
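The retrieval idea in this abstract, choosing in-context references whose features match the query at several scales, can be sketched roughly as follows. The function name, the per-scale feature vectors, and the plain averaging of cosine similarities are our assumptions for illustration; the paper's actual multi-level strategy may differ:

```python
import numpy as np

def select_in_context(query_feats, candidate_feats, top_k=1):
    """Rank candidate reference images by mean multi-scale cosine similarity.

    query_feats: list of per-scale feature vectors for the query image.
    candidate_feats: list of candidates, each a list of per-scale vectors.
    Returns the indices of the top_k best-matching candidates.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scores = []
    for cand in candidate_feats:
        # average the cosine similarity over the feature scales
        scores.append(np.mean([cos(q, c) for q, c in zip(query_feats, cand)]))
    order = np.argsort(scores)[::-1]
    return [int(i) for i in order[:top_k]]
```

The selected references would then supply the visual prompts and semantic guidance that steer the frozen segmentation backbone.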
29 pages, 8202 KB  
Article
Continuous Lower-Limb Joint Angle Prediction Under Body Weight-Supported Training Using AWDF Model
by Li Jin, Liuyi Ling, Zhipeng Yu, Liyu Wei and Yiming Liu
Fractal Fract. 2025, 9(10), 655; https://doi.org/10.3390/fractalfract9100655 - 11 Oct 2025
Abstract
Exoskeleton-assisted bodyweight support training (BWST) has demonstrated enhanced neurorehabilitation outcomes, in which joint motion prediction serves as the critical foundation for adaptive human–machine interactive control. However, joint angle prediction under dynamic unloading conditions remains unexplored. This study introduces an adaptive wavelet-denoising fusion (AWDF) model to predict lower-limb joint angles during BWST. Utilizing a custom human-tracking bodyweight support system, time series data of surface electromyography (sEMG) and inertial measurement unit (IMU) signals from ten adults were collected across graded bodyweight support levels (BWSLs) ranging from 0% to 40%. Systematic comparative experiments evaluated joint angle prediction performance among five models: the sEMG-based model, kinematic fusion model, wavelet-enhanced fusion model, late fusion model, and the proposed AWDF model, tested across prediction time horizons of 30–150 ms and BWSL gradients. Experimental results demonstrate that increasing BWSLs prolonged gait cycle duration and modified muscle activation patterns, with a concomitant decrease in the fractal dimension of sEMG signals. Extended prediction time degraded joint angle estimation accuracy, with 90 ms identified as the optimal tradeoff between system latency and prediction advancement. Crucially, this study reveals an enhancement in prediction performance with increased BWSLs. The proposed AWDF model demonstrated robust cross-condition adaptability for hip and knee angle prediction, achieving average root mean square errors (RMSE) of 1.468° and 2.626°, Pearson correlation coefficients (CC) of 0.983 and 0.973, and adjusted R2 values of 0.992 and 0.986, respectively. This work establishes the first computational framework for BWSL-adaptive joint prediction, advancing human–machine interaction in exoskeleton-assisted neurorehabilitation. Full article
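The abstract reports a decreasing fractal dimension of the sEMG signals as bodyweight support increases. The paper does not state which estimator it uses; Higuchi's method is one standard choice for 1-D physiological signals, and a hedged sketch of it looks like this:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D signal with Higuchi's method."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    mean_lengths = []
    ks = np.arange(1, kmax + 1)
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # normalized curve length for starting offset m and delay k
            L = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / ((len(idx) - 1) * k * k)
            lengths.append(L)
        mean_lengths.append(np.mean(lengths))
    # slope of log L(k) against log(1/k) is the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(mean_lengths), 1)
    return slope
```

A smooth trajectory gives values near 1 and white noise gives values near 2, so a drop in the estimate with increasing support level is consistent with smoother, less complex muscle activity.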

26 pages, 1825 KB  
Article
Deep Brain Tumor Lesion Classification Network: A Hybrid Method Optimizing ResNet50 and EfficientNetB0 for Enhanced Feature Extraction
by Jing Lin, Longhua Huang, Liming Ding and Shen Yan
Fractal Fract. 2025, 9(9), 614; https://doi.org/10.3390/fractalfract9090614 - 22 Sep 2025
Abstract
Brain tumors usually appear as masses formed by localized abnormal cell proliferation. Although complete removal of tumors is an ideal treatment goal, this process faces many challenges due to the aggressive nature of malignant tumors and the need to protect normal brain tissue. Therefore, early diagnosis is crucial to mitigate the harm posed by brain tumors. In this study, classification accuracy is improved by enhancing the ResNet50 model. Specifically, the images are first preprocessed, enhanced, and denoised using fractional calculus; then transfer learning is applied, an ECA attention mechanism is introduced, the convolutional layers in the residual blocks are optimized, and multi-scale convolutional layers are fused. These optimization measures enhance the model's ability to capture both micro and macro features, allowing it to represent data more comprehensively and process image details more efficiently, thereby improving accuracy. In addition, the improved ResNet50 model is combined with EfficientNetB0 to further optimize performance, using EfficientNetB0's efficient feature extraction capabilities through feature fusion. In this study, we used a brain tumor image dataset containing 5712 training images and 1311 validation images. The optimized ResNet50 model achieves a validation accuracy of 98.78%, which is 3.51% higher than the original model, and the Kappa value is also increased by 4.7%. At the same time, the lightweight design of EfficientNetB0 improves performance while reducing runtime. These improvements can help diagnose brain tumors earlier and more accurately, thereby improving patient outcomes and survival rates. Full article
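The fractional-calculus denoising step mentioned above is commonly discretized with Grünwald–Letnikov coefficients. The sketch below shows the 1-D building block; the function names and the truncation length are illustrative assumptions, and an image filter would apply such a mask along rows and columns with an order between 0 and 1:

```python
import numpy as np

def gl_weights(alpha, n_terms):
    """Grünwald–Letnikov coefficients w_j = (-1)^j * C(alpha, j), built recursively."""
    w = np.zeros(n_terms)
    w[0] = 1.0
    for j in range(1, n_terms):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w

def gl_fractional_diff(signal, alpha, n_terms=20):
    """Apply a fractional difference of order alpha along a 1-D signal."""
    w = gl_weights(alpha, n_terms)
    # causal convolution: each output sample mixes the current and past samples
    return np.convolve(signal, w, mode="full")[: len(signal)]
```

With alpha = 1 this reduces to the ordinary first difference and with alpha = 0 to the identity; fractional orders in between attenuate noise more gently than an integer-order derivative, which is the usual motivation for fractional masks in image enhancement.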

25 pages, 9990 KB  
Article
Bidirectional Mamba-Enhanced 3D Human Pose Estimation for Accurate Clinical Gait Analysis
by Chengjun Wang, Wenhang Su, Jiabao Li and Jiahang Xu
Fractal Fract. 2025, 9(9), 603; https://doi.org/10.3390/fractalfract9090603 - 17 Sep 2025
Abstract
Three-dimensional human pose estimation from monocular video remains challenging for clinical gait analysis due to high computational cost and the need for temporal consistency. We present Pose3DM, a bidirectional Mamba-based state-space framework that models intra-frame joint relations and inter-frame dynamics with linear computational complexity. Replacing transformer self-attention with state-space modeling improves efficiency without sacrificing accuracy. We further incorporate fractional-order total-variation regularization to capture long-range dependencies and memory effects, enhancing temporal and spatial coherence in gait dynamics. On Human3.6M, Pose3DM-L achieves 37.9 mm MPJPE under Protocol 1 (P1) and 32.1 mm P-MPJPE under Protocol 2 (P2), with 127 M MACs per frame and 30.8 G MACs in total. Relative to MotionBERT, P1 and P2 errors decrease by 3.3% and 2.4%, respectively, with 82.5% fewer parameters and 82.3% fewer MACs per frame. Compared with MotionAGFormer-L, Pose3DM-L improves P1 by 0.5 mm and P2 by 0.4 mm while using 60.6% less computation: 30.8 G vs. 78.3 G total MACs and 127 M vs. 322 M per frame. On AUST-VisGait across six gait patterns, Pose3DM consistently yields lower MPJPE, standard error, and maximum error, enabling reliable extraction of key gait parameters from monocular video. These results highlight state-space models as a cost-effective route to real-time gait assessment using a single RGB camera. Full article
