Fractional and Fractal Methods in Biomedical Imaging and Time Series Learning

A special issue of Fractal and Fractional (ISSN 2504-3110). This special issue belongs to the section "Optimization, Big Data, and AI/ML".

Deadline for manuscript submissions: 31 August 2026 | Viewed by 14488

Special Issue Editors


Guest Editor
The Alan Turing Institute, British Library, 96 Euston Road, London NW1 2DB, UK
Interests: artificial intelligence; computer vision; healthcare AI; robotics; fractional calculus

Guest Editor
School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China
Interests: artificial intelligence; machine learning; robotics; fractional calculus

Guest Editor Assistant
School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China
Interests: human gait analysis; computer vision; medical robotics

Special Issue Information

Dear Colleagues,

Fractional and fractal methods are increasingly used in biomedical research for their ability to handle model complexity, memory effects, and self-similarity—key traits in physiological signals and biological structures. This Special Issue will focus on recent advances at the intersection of fractional calculus, fractal geometry, and machine learning, particularly in biomedical imaging and time series analysis.

We welcome the submission of original research, reviews, and applied studies that introduce new models, algorithms, or learning frameworks using fractional or fractal approaches. Potential topics include the use of fractional differential equations in disease modeling, fractal descriptors in image analysis, fractional-order neural networks in the analysis of biomedical signals, and fractal-based biomarkers in health monitoring.
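As a minimal illustration of the first listed topic, the Grünwald–Letnikov definition that underlies many fractional-order models can be discretized in a few lines. This sketch is illustrative only (the function name and interface are ours, not drawn from any submission):

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Approximate the order-alpha Grunwald-Letnikov derivative of a
    uniformly sampled signal f (spacing h) at every sample, using the
    full available history."""
    n = len(f)
    # Weights w_k = (-1)^k * C(alpha, k), built with the recursion
    # w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    # D^alpha f(t_j) ~= h^(-alpha) * sum_{k=0..j} w_k * f(t_{j-k})
    out = np.empty(n)
    for j in range(n):
        out[j] = np.dot(w[: j + 1], f[j::-1]) / h**alpha
    return out
```

For alpha = 1 the weights collapse to [1, -1, 0, ...], recovering the ordinary backward difference, and for alpha = 0 the identity, which makes the scheme easy to sanity-check.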

Interdisciplinary contributions spanning applied mathematics, computer vision, biomedical engineering, and AI in healthcare are encouraged. Submissions demonstrating real-world impact through clinical validation, public datasets, or open-source tools are especially valued.

Dr. Ziyang Wang
Prof. Dr. Chengjun Wang
Guest Editors

Dr. Jiabao Li
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Fractal and Fractional is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • fractional calculus
  • medical robotic control
  • biomedical signal processing
  • medical image analysis
  • time series machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the journal website.

Published Papers (9 papers)


Research

41 pages, 17100 KB  
Article
Integrated Fractal Dimensions and Imbalance–Deviation Features for Smart-Insole Walking Gait Analysis: Application to Parkinson’s Disease Detection
by Hao Li, Jun Ma, Boqiang Cao, Xunhuan Ren, Yiming Chen, Qicheng Guo, Bohan Li, Illa Baryskievic, Anatoliy Baryskievic and Viktar Tsviatkou
Fractal Fract. 2026, 10(5), 297; https://doi.org/10.3390/fractalfract10050297 - 28 Apr 2026
Abstract
Gait impairment is a common motor manifestation of Parkinson’s disease (PD), which is also frequently accompanied by other motor abnormalities such as bradykinesia, rigidity, postural instability, and movement asymmetry. These motor impairments are closely associated with reduced mobility and increased fall risk. Although wearable plantar insole sensing provides a promising basis for objective gait assessment, existing studies have mainly focused on conventional time- or frequency-domain descriptors, whereas the nonlinear complexity of gait, laterality-related imbalance, and deviation from normal gait patterns remain insufficiently characterized in an integrated manner. To address this gap, this paper proposes FID-Gait, a three-domain fusion framework for PD identification using instrumented insole data. The framework combines automated gait-cycle segmentation with multidomain feature modeling, including a fractal domain for nonlinear gait complexity, a plantar-loading–phase imbalance (PLPI) domain for loading asymmetry and temporal disturbance, and a covariance-adjusted deviation (CAD) domain for measuring deviation from normal gait patterns. Experiments on the PhysioNet Gait in Parkinson’s Disease dataset showed that FID-Gait achieved strong discriminative performance under multiple evaluation protocols. At the gait-cycle level, the selected MLP classifier achieved an accuracy of 99.11% and an F1-score of 99.47%. At the subject level, the selected AdaBoost classifier achieved the highest accuracy of 90.22% and the best F1-score reached 93.02%. Five-fold cross-validation further supported the robustness of the proposed representation, and leave-one-subject-out evaluation provided preliminary evidence of subject-independent generalization. Overall, FID-Gait provides an effective and interpretable framework for PD gait characterization and identification in offline experimental settings.
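Fractal-domain gait features of the kind described above typically rely on a time-series fractal-dimension estimator; a common choice is Higuchi's method. The sketch below is generic (function name and defaults are ours, not the paper's implementation):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal.
    Normalised curve lengths L(k) are computed at scales k = 1..kmax;
    the FD is the slope of log L(k) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = 0.0
        for m in range(k):
            idx = np.arange(m, n, k)         # m-th subsampled series
            if len(idx) < 2:
                continue
            # curve length of the subseries, normalised for its coverage
            lm = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k)
            lk += lm / k
        lengths.append(lk / k)               # average over the k offsets
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
    return slope
```

A straight line should give an FD near 1, while uncorrelated noise approaches 2, which provides a quick correctness check before applying the estimator to gait signals.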

31 pages, 17740 KB  
Article
HR-UMamba++: A High-Resolution Multi-Directional Mamba Framework for Coronary Artery Segmentation in X-Ray Coronary Angiography
by Xiuhan Zhang, Peng Lu, Zongsheng Zheng and Wenhui Li
Fractal Fract. 2026, 10(1), 43; https://doi.org/10.3390/fractalfract10010043 - 9 Jan 2026
Viewed by 992
Abstract
Coronary artery disease (CAD) remains a leading cause of mortality worldwide, and accurate coronary artery segmentation in X-ray coronary angiography (XCA) is challenged by low contrast, structural ambiguity, and anisotropic vessel trajectories, which hinder quantitative coronary angiography. We propose HR-UMamba++, a U-Mamba-based framework centered on a rotation-aligned multi-directional state-space scan for modeling long-range vessel continuity across multiple orientations. To preserve thin distal branches, the framework is equipped with (i) a persistent high-resolution bypass that injects undownsampled structural details and (ii) a UNet++-style dense decoder topology for cross-scale topological fusion. On an in-house dataset of 739 XCA images from 374 patients, HR-UMamba++ is evaluated using eight segmentation metrics, fractal-geometry descriptors, and multi-view expert scoring. Compared with U-Net, Attention U-Net, HRNet, U-Mamba, DeepLabv3+, and YOLO11-seg, HR-UMamba++ achieves the best performance (Dice 0.8706, IoU 0.7794, HD95 16.99), yielding a relative Dice improvement of 6.0% over U-Mamba and reducing the deviation in fractal dimension by up to 57% relative to U-Net. Expert evaluation across eight angiographic views yields a mean score of 4.24 ± 0.49/5 with high inter-rater agreement. These results indicate that HR-UMamba++ produces anatomically faithful coronary trees and clinically useful segmentations that can serve as robust structural priors for downstream quantitative coronary analysis.
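Fractal-geometry descriptors for vessel trees like the ones evaluated above are often based on box counting over the binary segmentation mask. A minimal 2-D estimator might look like the following (a generic sketch under our own naming, not the paper's code):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of a binary 2-D mask
    (e.g. a vessel segmentation). Counts occupied boxes N(s) at each
    box size s; the FD is the slope of log N(s) vs log(1/s)."""
    mask = np.asarray(mask).astype(bool)
    counts = []
    for s in sizes:
        # trim so an s-by-s grid tiles the image exactly
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A filled region yields an FD near 2 and a one-pixel-wide line near 1, so branching vessel trees fall in between; deviations of a predicted mask's FD from the ground-truth FD can then serve as a structural error measure.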

26 pages, 6899 KB  
Article
When RNN Meets CNN and ViT: The Development of a Hybrid U-Net for Medical Image Segmentation
by Ziru Wang and Ziyang Wang
Fractal Fract. 2026, 10(1), 18; https://doi.org/10.3390/fractalfract10010018 - 28 Dec 2025
Cited by 3 | Viewed by 2553
Abstract
Deep learning for semantic segmentation has made significant advances in recent years, achieving state-of-the-art performance. Medical image segmentation, as a key component of healthcare systems, plays a vital role in the diagnosis and treatment planning of diseases. Due to the fractal and scale-invariant nature of biological structures, effective medical image segmentation requires models capable of capturing hierarchical and self-similar representations across multiple spatial scales. In this paper, a Recurrent Neural Network (RNN) is explored within the Convolutional Neural Network (CNN) and Vision Transformer (ViT)-based hybrid U-shape network, named RCV-UNet. First, the ViT-based layer was developed in the bottleneck to effectively capture the global context of an image and establish long-range dependencies through the self-attention mechanism. Second, recurrent residual convolutional blocks (RRCBs) were introduced in both the encoder and decoder to enhance the ability to capture local features and preserve fine details. Third, by integrating the global feature extraction capability of ViT with the local feature enhancement strength of RRCBs, RCV-UNet achieved promising global consistency and boundary refinement, addressing key challenges in medical image segmentation. From a fractal–fractional perspective, the multi-scale encoder–decoder hierarchy and attention-driven aggregation in RCV-UNet naturally accommodate fractal-like, scale-invariant regularity, while the recurrent and residual connections approximate fractional-order dynamics in feature propagation, enabling continuous and memory-aware representation learning. The proposed RCV-UNet was evaluated on four different modalities of images, including CT, MRI, Dermoscopy, and ultrasound, using the Synapse, ACDC, ISIC 2018, and BUSI datasets. Experimental results demonstrate that RCV-UNet outperforms other popular baseline methods, achieving strong performance across different segmentation tasks. The code of the proposed method will be made publicly available.

28 pages, 1544 KB  
Article
FD-HCL: A Fractal-Dimension-Guided Hierarchical Contrastive Learning Dual-Student Framework for Semi-Supervised Medical Segmentation
by Xinhua Dong, Wenjun Xu, Zhigang Xu, Hongmu Han, Hui Zhang, Juan Mao and Guangwei Dong
Fractal Fract. 2025, 9(12), 828; https://doi.org/10.3390/fractalfract9120828 - 18 Dec 2025
Cited by 1 | Viewed by 663
Abstract
Semi-supervised learning (SSL) is critical for medical image segmentation but often struggles with network dependency and pseudo-label error accumulation. To address these issues, we propose a fractal-dimension-guided hierarchical contrastive learning dual-student framework (FD-HCL). We extend the Mean Teacher architecture with a dual-student design and introduce an independence-aware exponential moving average (I-EMA) update mechanism to mitigate model coupling. For enhanced feature learning, we devise a hierarchical contrastive learning (HCL) mechanism guided by voxel uncertainty, spanning global, high-confidence, and low-confidence regions. We further improve structural integrity by incorporating a fractal-dimension (FD)-weighted consistency loss and integrating a novel uncertainty-aware bidirectional copy–paste (UB-CP) augmentation. Extensive experiments on the LA and BraTS 2019 datasets demonstrate the state-of-the-art performance of our framework across 10% and 20% labeled data settings. On the LA dataset with 10% labeled data, our method achieved a Dice score that outperformed the best existing approach by 0.68%. Similarly, under the 10% labeling setting on the BraTS 2019 dataset, we surpassed the state-of-the-art Dice score by 0.55%.

18 pages, 3112 KB  
Article
Denatured Recognition of Biological Tissue Using Ultrasonic Phase Space Reconstruction and CBAM-EfficientNet-B0 During HIFU Therapy
by Bei Liu, Haitao Zhu and Xian Zhang
Fractal Fract. 2025, 9(12), 819; https://doi.org/10.3390/fractalfract9120819 - 15 Dec 2025
Viewed by 502
Abstract
This study proposes an automatic denatured recognition method of biological tissue during high-intensity focused ultrasound (HIFU) therapy. The technique integrates ultrasonic phase space reconstruction (PSR) with a convolutional block attention mechanism-enhanced EfficientNet-B0 model (CBAM-EfficientNet-B0). Ultrasonic echo signals are first transformed into high-dimensional phase space reconstruction trajectory diagrams using PSR, which reveal distinct fractal and chaotic characteristics to analyze tissue complexity. The CBAM module is incorporated into EfficientNet-B0 to enhance feature extraction from these nonlinear dynamic representations by focusing on critical channels and spatial regions. The network is further optimized with Dropout and Scaled Exponential Linear Units (SeLUs) to prevent overfitting, alongside a cosine annealing learning rate scheduler. Experimental results demonstrate the superior performance of the proposed CBAM-EfficientNet-B0 model, achieving a high recognition accuracy of 99.57% and outperforming five benchmark CNN models (EfficientNet-B0, ResNet101, DenseNet201, ResNet18, and VGG16). The method avoids the subjectivity and uncertainty inherent in traditional manual feature extraction, enabling effective identification of HIFU-induced tissue denaturation. This work confirms the significant potential of combining nonlinear dynamics, fractal analysis, and deep learning for accurate, real-time monitoring in HIFU therapy.
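Phase space reconstruction as described above is usually implemented as a time-delay (Takens) embedding of the 1-D echo signal; a minimal version (names and defaults are ours, not the paper's) could be:

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Time-delay (Takens) embedding of a 1-D signal into a
    dim-dimensional phase space with lag tau. Row i is the point
    (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("signal too short for this dim/tau")
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)
```

Plotting pairs or triples of the embedded coordinates produces the trajectory diagrams that are then rendered as images and fed to the CNN; the embedding dimension and lag are typically chosen with false-nearest-neighbor and mutual-information criteria.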

22 pages, 38803 KB  
Article
VG-SAM: Visual In-Context Guided SAM for Universal Medical Image Segmentation
by Gang Dai, Qingfeng Wang, Yutao Qin, Gang Wei and Shuangping Huang
Fractal Fract. 2025, 9(11), 722; https://doi.org/10.3390/fractalfract9110722 - 8 Nov 2025
Cited by 3 | Viewed by 1929
Abstract
Medical image segmentation, driven by the intrinsic fractal characteristics of biological patterns, plays a crucial role in medical image analysis. Recently, universal image segmentation, which aims to build models that generalize robustly to unseen anatomical structures and imaging modalities, has emerged as a promising research direction. To achieve this, previous solutions typically follow the in-context learning (ICL) framework, leveraging segmentation priors from a few labeled in-context references to improve prediction performance on out-of-distribution samples. However, these ICL-based methods often overlook the quality of the in-context set and struggle with capturing intricate anatomical details, thus limiting their segmentation accuracy. To address these issues, we propose VG-SAM, which employs a multi-scale in-context retrieval phase and a visual in-context guided segmentation phase. Specifically, inspired by the hierarchical and self-similar properties in fractal structures, we introduce a multi-level feature similarity strategy to select in-context samples that closely match the query image, thereby ensuring the quality of the in-context samples. In the segmentation phase, we propose to generate multi-granularity visual prompts based on the high-quality priors from the selected in-context set. Following this, these visual prompts, along with the semantic guidance signal derived from the in-context set, are seamlessly integrated into an adaptive fusion module, which effectively guides the Segment Anything Model (SAM) with powerful segmentation capabilities to achieve accurate predictions on out-of-distribution query images. Extensive experiments across multiple datasets demonstrate the effectiveness and superiority of our VG-SAM over the state-of-the-art (SOTA) methods. Notably, under the challenging one-shot reference setting, our VG-SAM surpasses SOTA methods by an average of 6.61% in DSC across all datasets.

29 pages, 8202 KB  
Article
Continuous Lower-Limb Joint Angle Prediction Under Body Weight-Supported Training Using AWDF Model
by Li Jin, Liuyi Ling, Zhipeng Yu, Liyu Wei and Yiming Liu
Fractal Fract. 2025, 9(10), 655; https://doi.org/10.3390/fractalfract9100655 - 11 Oct 2025
Viewed by 1167
Abstract
Exoskeleton-assisted bodyweight support training (BWST) has demonstrated enhanced neurorehabilitation outcomes in which joint motion prediction serves as the critical foundation for adaptive human–machine interactive control. However, joint angle prediction under dynamic unloading conditions remains unexplored. This study introduces an adaptive wavelet-denoising fusion (AWDF) model to predict lower-limb joint angles during BWST. Utilizing a custom human-tracking bodyweight support system, time series data of surface electromyography (sEMG) and inertial measurement unit (IMU) from ten adults were collected across graded bodyweight support levels (BWSLs) ranging from 0% to 40%. Systematic comparative experiments evaluated joint angle prediction performance among five models: the sEMG-based model, kinematic fusion model, wavelet-enhanced fusion model, late fusion model, and the proposed AWDF model, tested across prediction time horizons of 30–150 ms and BWSL gradients. Experimental results demonstrate that increasing BWSLs prolonged gait cycle duration and modified muscle activation patterns, with a concomitant decrease in the fractal dimension of sEMG signals. Extended prediction time degraded joint angle estimation accuracy, with 90 ms identified as the optimal tradeoff between system latency and prediction advancement. Crucially, this study reveals an enhancement in prediction performance with increased BWSLs. The proposed AWDF model demonstrated robust cross-condition adaptability for hip and knee angle prediction, achieving average root mean square errors (RMSE) of 1.468° and 2.626°, Pearson correlation coefficients (CC) of 0.983 and 0.973, and adjusted R2 values of 0.992 and 0.986, respectively. This work establishes the first computational framework for BWSL-adaptive joint prediction, advancing human–machine interaction in exoskeleton-assisted neurorehabilitation.

26 pages, 1825 KB  
Article
Deep Brain Tumor Lesion Classification Network: A Hybrid Method Optimizing ResNet50 and EfficientNetB0 for Enhanced Feature Extraction
by Jing Lin, Longhua Huang, Liming Ding and Shen Yan
Fractal Fract. 2025, 9(9), 614; https://doi.org/10.3390/fractalfract9090614 - 22 Sep 2025
Cited by 3 | Viewed by 1893
Abstract
Brain tumors usually appear as masses formed by localized abnormal cell proliferation. Although complete removal of tumors is an ideal treatment goal, this process faces many challenges due to the aggressive nature of malignant tumors and the need to protect normal brain tissue. Therefore, early diagnosis is crucial to mitigate the harm posed by brain tumors. In this study, classification accuracy is improved by enhancing the ResNet50 model. Specifically, the image is first preprocessed and enhanced, with denoising performed via fractional calculus; then, transfer learning is adopted, the ECA attention mechanism is introduced, the convolutional layers in the residual blocks are optimized, and multi-scale convolutional layers are fused. These optimization measures not only enhance the model’s ability to grasp overall details but also improve its ability to recognize micro and macro features. This allows the model to understand data features more comprehensively and process image details more efficiently, thereby improving processing accuracy. In addition, the improved ResNet50 model is combined with EfficientNetB0 to further optimize performance and improve classification accuracy by leveraging EfficientNetB0’s efficient feature extraction capabilities through feature fusion. In this study, we used a brain tumor image dataset containing 5712 training images and 1311 validation images. The optimized ResNet50 model achieves a validation accuracy of 98.78%, which is 3.51% higher than the original model, and the Kappa value is also increased by 4.7%. At the same time, the lightweight design of EfficientNetB0 improves performance while reducing runtime. These improvements can help diagnose brain tumors earlier and more accurately, thereby improving patient outcomes and survival rates.

25 pages, 9990 KB  
Article
Bidirectional Mamba-Enhanced 3D Human Pose Estimation for Accurate Clinical Gait Analysis
by Chengjun Wang, Wenhang Su, Jiabao Li and Jiahang Xu
Fractal Fract. 2025, 9(9), 603; https://doi.org/10.3390/fractalfract9090603 - 17 Sep 2025
Cited by 1 | Viewed by 3193
Abstract
Three-dimensional human pose estimation from monocular video remains challenging for clinical gait analysis due to high computational cost and the need for temporal consistency. We present Pose3DM, a bidirectional Mamba-based state-space framework that models intra-frame joint relations and inter-frame dynamics with linear computational complexity. Replacing transformer self-attention with state-space modeling improves efficiency without sacrificing accuracy. We further incorporate fractional-order total-variation regularization to capture long-range dependencies and memory effects, enhancing temporal and spatial coherence in gait dynamics. On Human3.6M, Pose3DM-L achieves 37.9 mm MPJPE under Protocol 1 (P1) and 32.1 mm P-MPJPE under Protocol 2 (P2), with 127 M MACs per frame and 30.8 G MACs in total. Relative to MotionBERT, P1 and P2 errors decrease by 3.3% and 2.4%, respectively, with 82.5% fewer parameters and 82.3% fewer MACs per frame. Compared with MotionAGFormer-L, Pose3DM-L improves P1 by 0.5 mm and P2 by 0.4 mm while using 60.6% less computation: 30.8 G vs. 78.3 G total MACs and 127 M vs. 322 M per frame. On AUST-VisGait across six gait patterns, Pose3DM consistently yields lower MPJPE, standard error, and maximum error, enabling reliable extraction of key gait parameters from monocular video. These results highlight state-space models as a cost-effective route to real-time gait assessment using a single RGB camera.
