Adaptive Bandelet Transform and Transfer Learning for Geometry-Aware Thyroid Cancer Ultrasound Classification
Abstract
1. Introduction
- Methodological contributions: This study introduces a novel geometry-aware framework that integrates the BT with TL for TC classification in medical imaging, particularly for US images. The proposed approach exploits the capability of BT to adaptively model local geometric structures through quadtree-driven directional analysis, enabling the extraction of anisotropic and spatially coherent features that are not captured by conventional wavelet representations. By coupling these geometry-adaptive features with deep TL backbones, the framework enhances feature representation and robustness under limited labelled data conditions.
- Experimental contributions: Extensive experiments demonstrate that integrating BT with TL significantly improves the accuracy and reliability of TC classification compared with classical wavelet-based and standalone TL approaches. Several ImageNet-pretrained architectures are systematically evaluated to assess their generalisation capability on thyroid US images. The results identify the best-performing architecture among the tested models, highlighting its superior ability to extract discriminative geometric and textural features while reducing dependence on large annotated datasets.
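The geometry adaptation described above rests on a quadtree partition of the image: blocks are recursively split until each leaf covers a region with near-homogeneous local geometry, over which a directional approximation is fitted. As a rough illustration only, a minimal quadtree splitter driven by a pixel-variance criterion can be sketched in Python; the variance threshold and minimum block size are illustrative assumptions, standing in for the rate-distortion cost that actual bandelet implementations minimise.

```python
import numpy as np

def quadtree_partition(img, thresh=0.01, min_size=8):
    """Recursively split a square image into blocks whose pixel variance
    is below `thresh`; returns a list of (row, col, size) leaf blocks.
    Illustrative only: real bandelet quadtrees optimise a Lagrangian
    rate-distortion cost over directional approximations, not raw variance."""
    leaves = []

    def split(r, c, size):
        block = img[r:r + size, c:c + size]
        if size <= min_size or block.var() <= thresh:
            leaves.append((r, c, size))
            return
        half = size // 2
        for dr in (0, half):
            for dc in (0, half):
                split(r + dr, c + dc, half)

    split(0, 0, img.shape[0])
    return leaves

# Synthetic 64x64 image: flat background with one textured quadrant.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:32, :32] = rng.random((32, 32))
leaves = quadtree_partition(img)
# The textured quadrant splits down to 8x8 blocks; flat areas stay coarse.
```

The leaf list covers the image exactly once; in a full bandelet pipeline each leaf would then receive its own geometric flow and 1D wavelet transform along that flow.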
2. Related Work
3. Preliminaries
3.1. Wavelet Transform
3.2. Bandelet Transform
3.3. Inductive TL
4. Methodology
| Algorithm 1 TN Classification Algorithm | |
| 1: function LoadDatasets | |
| 2: Dataset ← DDTI | ▹ 134 images: 14 benign, 62 malignant |
| 3: return Dataset | |
| 4: end function | |
| 5: function Preprocess(Dataset) | |
| 6: Benign ← GetBenignImages(Dataset) | ▹ 14 images |
| 7: Malignant ← GetMalignantImages(Dataset) | ▹ 62 images |
| 8: New_Benign ← SMOTE (Benign, target = 28) | ▹ Oversample benign |
| 9: Balanced_Dataset ← Combine(New_Benign, Malignant[0:28]) | |
| 10: Augmented_Dataset ← Augment(Balanced_Dataset, Techniques = {Brightness, Flip, Rotate, Resize_to_512 × 512}) | |
| 11: return Augmented_Dataset | ▹ Target: 2048 images |
| 12: end function | |
| 13: function ExtractFeatures(Augmented_Dataset) | |
| 14: Features ← [] | |
| 15: for each image in Augmented_Dataset do | |
| 16: Bandelet_Features ← ApplyBandeletTransform(image) | |
| 17: Features ← Add(Bandelet_Features) | |
| 18: end for | |
| 19: return Features | |
| 20: end function | |
| 21: function Classify(Augmented_Dataset, Features) | |
| 22: Train_Data ← Take80Percent(Augmented_Dataset) | ▹ 1638 images |
| 23: Val_Data ← Take20Percent(Augmented_Dataset) | ▹ 410 images |
| 24: Models ← {VGG16} | ▹ Simplified list |
| 25: for each model in Models do | |
| 26: LoadPretrained(model) | |
| 27: FineTune(model, Train_Data, Features) | |
| 28: Predictions ← Test(model, Val_Data) | |
| 29: Save(Predictions) | |
| 30: end for | |
| 31: return (Predictions, Val_Data) | |
| 32: end function | |
| 33: function Evaluate(Predictions, Val_Data) | |
| 34: for each model in Predictions do | |
| 35: Accuracy ← CalculateAccuracy(Predictions, Val_Data) | |
| 36: Sensitivity ← CalculateSensitivity(Predictions, Val_Data) | |
| 37: Display(model, Accuracy, Sensitivity) | |
| 38: end for | |
| 39: end function | |
| 40: function Main | |
| 41: Dataset ← LoadDatasets() | |
| 42: Augmented_Dataset ← Preprocess(Dataset) | |
| 43: Features ← ExtractFeatures(Augmented_Dataset) | |
| 44: (Predictions, Val_Data) ← Classify(Augmented_Dataset, Features) | |
| 45: Evaluate(Predictions, Val_Data) | |
| 46: end function | |
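Step 8 of Algorithm 1 balances the minority (benign) class with SMOTE before augmentation. The usual route is `imblearn.over_sampling.SMOTE`, but the interpolation idea can be shown self-contained in NumPy; the neighbour count `k = 3`, the random seed, and the toy feature dimension are illustrative assumptions, while `target = 28` matches the algorithm.

```python
import numpy as np

def smote_oversample(X, target, k=3, seed=0):
    """Generate synthetic minority samples by interpolating each sample
    with one of its k nearest neighbours, as in SMOTE.
    X: (n, d) array of flattened minority-class images."""
    rng = np.random.default_rng(seed)
    n = len(X)
    synthetic = []
    while n + len(synthetic) < target:
        i = rng.integers(n)
        # Euclidean distances from X[i] to every minority sample.
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip X[i] itself
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.vstack([X, synthetic]) if synthetic else X

# 14 benign samples (toy 16-dim vectors stand in for flattened 512x512 images).
benign = np.random.default_rng(1).random((14, 16))
balanced = smote_oversample(benign, target=28)
print(balanced.shape)  # (28, 16)
```

Because each synthetic vector is a convex combination of two real samples, it stays inside the convex hull of the minority class, which is what makes SMOTE preferable to naive duplication.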
4.1. Input TC Datasets
4.2. Preprocessing
4.3. Bandelet Feature Selection
4.4. Deep Classification
4.5. Performance Metrics
5. Results and Discussion
5.1. Experiments
5.2. Statistical Performance Analysis
5.3. Clinical Implications and Decision Support
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| BT | bandelet transform |
| CNN | convolutional neural networks |
| US | ultrasound |
| DA | data augmentation |
| TC | thyroid cancer |
| TN | thyroid nodules |
| TL | transfer learning |
| WT | wavelet transform |
| DWT | discrete wavelet transform |
| DDTI | digital database of thyroid ultrasound images |
| BLR | binary logistic regression |
| FPN | feature pyramid network |
| SL-FCN | soft-label fully convolutional network |
| OCT | optical coherence tomography |
| SMOTE | synthetic minority oversampling technique |
| LSTM | long short-term memory |
| DL | deep learning |
References
- Cao, C.L.; Li, Q.L.; Tong, J.; Shi, L.N.; Li, W.X.; Xu, Y.; Cheng, J.; Du, T.T.; Li, J.; Cui, X.W. Artificial intelligence in thyroid ultrasound. Front. Oncol. 2023, 13, 1060702. [Google Scholar] [CrossRef]
- Habchi, Y.; Himeur, Y.; Kheddar, H.; Boukabou, A.; Atalla, S.; Chouchane, A.; Ouamane, A.; Mansoor, W. AI in thyroid cancer diagnosis: Techniques, trends, and future directions. Systems 2023, 11, 519. [Google Scholar] [CrossRef]
- Yu, X.; Wang, H.; Ma, L. Detection of thyroid nodules with ultrasound images based on deep learning. Curr. Med. Imaging 2020, 16, 174–180. [Google Scholar] [CrossRef]
- Mazari, A.C.; Kheddar, H. Deep learning-and transfer learning-based models for COVID-19 detection using radiography images. In Proceedings of the 2023 International Conference on Advances in Electronics, Control and Communication Systems (ICAECCS), Blida, Algeria, 6–7 March 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–4. [Google Scholar]
- Kheddar, H.; Himeur, Y.; Amira, A. BreathAI: Transfer Learning-Based Thermal Imaging for Automated Breathing Pattern Recognition. In Proceedings of the 2025 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 14–17 September 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 2612–2617. [Google Scholar]
- Beladgham, M.; Habchi, Y.; Ben Aissa, M.; Taleb-Ahmed, A. Medical video compression using bandelet based on lifting scheme and SPIHT coding: In search of high visual quality. Inform. Med. Unlocked 2019, 17, 100244, Correction in Inform. Med. Unlocked 2020, 21, 100474. [Google Scholar] [CrossRef]
- Mohammed, B.; Yassine, H.; Abdelmouneim, M.L.; Abdelmalik, T.A. A comparative study between bandelet and wavelet transform coupled by EZW and SPIHT coder for image compression. Int. J. Image Graph. Signal Process. 2013, 5, 9. [Google Scholar] [CrossRef]
- Habchi, Y.; Kheddar, H.; Himeur, Y.; Boukabou, A.; Atalla, S.; Mansoor, W.; Al-Ahmad, H. Deep transfer learning for kidney cancer diagnosis. arXiv 2024, arXiv:2408.04318. [Google Scholar] [CrossRef]
- Habchi, Y.; Beladgham, M.; Taleb-Ahmed, A. RGB Medical Video Compression Using Geometric Wavelet and SPIHT Coding. Int. J. Electr. Comput. Eng. 2016, 6, 1627–1636. [Google Scholar] [CrossRef]
- Habchi, Y.; Aimer, A.F.; Beladgham, M.; Bouddou, R. Ultra low bitrate retinal image compression using integer lifting scheme and subband encoder. Indones. J. Electr. Eng. Comput. Sci. (IJEECS) 2021, 24, 295–307. [Google Scholar]
- Ding, X.; Liu, Y.; Zhao, J.; Wang, R.; Li, C.; Luo, Q.; Shen, C. A novel wavelet-transform-based convolution classification network for cervical lymph node metastasis of papillary thyroid carcinoma in ultrasound images. Comput. Med. Imaging Graph. 2023, 109, 102298. [Google Scholar] [CrossRef]
- Habchi, Y.; Kheddar, H.; Himeur, Y.; Ghanem, M.C. Machine learning and transformers for thyroid carcinoma diagnosis. J. Vis. Commun. Image Represent. 2025, 115, 104668. [Google Scholar] [CrossRef]
- Pavithra, S.; Vanithamani, R.; Judith, J. Classification of stages of thyroid nodules in ultrasound images using transfer learning methods. In Proceedings of the Second International Conference on Image Processing and Capsule Networks (ICIPCN 2021); Springer: Cham, Switzerland, 2022; pp. 241–253. [Google Scholar]
- Sureshkumar, V.; Jaganathan, D.; Ravi, V.; Velleangiri, V.; Ravi, P. A Comparative Study on Thyroid Nodule Classification using Transfer Learning Methods. Open Bioinform. J. 2024, 17. [Google Scholar] [CrossRef]
- Vahdati, S.; Khosravi, B.; Robinson, K.A.; Rouzrokh, P.; Moassefi, M.; Akkus, Z.; Erickson, B.J. A multi-view deep learning model for thyroid nodules detection and characterization in ultrasound imaging. Bioengineering 2024, 11, 648. [Google Scholar] [CrossRef]
- Wang, X.; Zhang, H.; Fan, H.; Yang, X.; Fan, J.; Wu, P.; Ni, Y.; Hu, S. Multimodal MRI Deep Learning for Predicting Central Lymph Node Metastasis in Papillary Thyroid Cancer. Cancers 2024, 16, 4042. [Google Scholar] [CrossRef] [PubMed]
- Gummalla, K.; Ganesan, S.; Pokhrel, S.; Somasiri, N. Enhanced early detection of thyroid abnormalities using a hybrid deep learning model. J. Innov. Image Process. 2024, 6, 244–261. [Google Scholar] [CrossRef]
- Chandana, K.H.; Prasan, U. Thyroid disease detection using CNN techniques. THYROID 2023, 55. Available online: https://advancedengineeringscience.com/article/pdf/2023/02-353.pdf (accessed on 15 December 2025).
- Wang, Z.; Qu, L.; Chen, Q.; Zhou, Y.; Duan, H.; Li, B.; Weng, Y.; Su, J.; Yi, W. Deep learning-based multifeature integration robustly predicts central lymph node metastasis in papillary thyroid cancer. BMC Cancer 2023, 23, 128. [Google Scholar] [CrossRef]
- Qi, Q.; Huang, X.; Zhang, Y.; Cai, S.; Liu, Z.; Qiu, T.; Cui, Z.; Zhou, A.; Yuan, X.; Zhu, W.; et al. Ultrasound image-based deep learning to assist in diagnosing gross extrathyroidal extension thyroid cancer: A retrospective multicenter study. eClinicalMedicine 2023, 58, 101905. [Google Scholar] [CrossRef]
- Zhang, F.; Sun, Y.; Wu, X.; Meng, C.; Xiang, M.; Huang, T.; Duan, W.; Wang, F.; Sun, Z. Analysis of the application value of ultrasound imaging diagnosis in the clinical staging of thyroid cancer. J. Oncol. 2022, 2022, 8030262. [Google Scholar] [CrossRef]
- Shah, A.A.; Daud, A.; Bukhari, A.; Alshemaimri, B.; Ahsan, M.; Younis, R. DEL-Thyroid: Deep ensemble learning framework for detection of thyroid cancer progression through genomic mutation. BMC Med. Inform. Decis. Mak. 2024, 24, 198. [Google Scholar] [CrossRef]
- Zhang, X.; Lee, V.C. Deep Learning Empowered Decision Support Systems for Thyroid Cancer Detection and Management. Procedia Comput. Sci. 2024, 237, 945–954. [Google Scholar] [CrossRef]
- Zhang, X.; Liu, F.; Lee, V.C.S.; Jassal, K.; Di Muzio, B.; Lee, J.C. Dynamic Ensemble Transfer Learning with Multi-View Ultrasonography for Improving Thyroid Cancer Diagnostic Reliability. J. Imaging Inform. Med. 2025; online ahead of print. [Google Scholar] [CrossRef] [PubMed]
- Chen, W.; Gu, Z.; Liu, Z.; Fu, Y.; Ye, Z.; Zhang, X.; Xiao, L. A new classification method in ultrasound images of benign and malignant thyroid nodules based on transfer learning and deep convolutional neural network. Complexity 2021, 2021, 6296811. [Google Scholar] [CrossRef]
- Ma, J.; Bao, L.; Lou, Q.; Kong, D. Transfer learning for automatic joint segmentation of thyroid and breast lesions from ultrasound images. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 363–372. [Google Scholar] [CrossRef]
- Bakht, A.B.; Javed, S.; Dina, R.; Almarzouqi, H.; Khandoker, A.; Werghi, N. Thyroid nodule cell classification in cytology images using transfer learning approach. In Proceedings of the 12th International Conference on Soft Computing and Pattern Recognition (SoCPaR 2020); Springer: Cham, Switzerland, 2021; pp. 539–549. [Google Scholar]
- Lu, H.; Wang, H.; Zhang, Q.; Won, D.; Yoon, S.W. A dual-tree complex wavelet transform based convolutional neural network for human thyroid medical image segmentation. In Proceedings of the 2018 IEEE International Conference on Healthcare Informatics (ICHI), New York, NY, USA, 4–7 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 191–198. [Google Scholar]
- Wang, C.W.; Lin, K.Y.; Lin, Y.J.; Khalil, M.A.; Chu, K.L.; Chao, T.K. A soft label deep learning to assist breast cancer target therapy and thyroid cancer diagnosis. Cancers 2022, 14, 5312. [Google Scholar] [CrossRef]
- Sharma, R.; Mahanti, G.K.; Chakraborty, C.; Panda, G.; Rath, A. An IoT and deep learning-based smart healthcare framework for thyroid cancer detection. ACM Trans. Internet Technol. 2023. [Google Scholar] [CrossRef]
- Chao, Z.; Duan, X.; Jia, S.; Guo, X.; Liu, H.; Jia, F. Medical image fusion via discrete stationary wavelet transform and an enhanced radial basis function neural network. Appl. Soft. Comput. 2022, 118, 108542. [Google Scholar] [CrossRef]
- Boucherit, I.; Kheddar, H. Reinforced Residual Encoder–Decoder Network for Image Denoising via Deeper Encoding and Balanced Skip Connections. Big Data Cogn. Comput. 2025, 9, 82. [Google Scholar] [CrossRef]
- Rahate, A.J.; Quazi, R. A Review of Overcoming Speckle Noise Challenges in Ultrasound Imaging with Different Wavelet Transformation. Abdom. Imaging 2023, 2, 5. [Google Scholar]
- Bhonsle, D.; Saxena, K.; Sheikh, R.U.; Sahu, A.K.; Singh, P.; Rizvi, T. Wavelet based random noise removal from color images using Python. In Proceedings of the 2024 Fourth International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT), Bhilai, India, 11–12 January 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–5. [Google Scholar]
- Le Pennec, E.; Mallat, S. Sparse geometric image representations with bandelets. IEEE Trans. Image Process. 2005, 14, 423–438. [Google Scholar] [CrossRef]
- Deo, B.S.; Pal, M.; Panigrahi, P.K.; Pradhan, A. An ensemble deep learning model with empirical wavelet transform feature for oral cancer histopathological image classification. Int. J. Data Sci. Anal. 2024, 20, 1005–1022. [Google Scholar] [CrossRef]
- DDTI. Available online: https://cimalab.unal.edu.co/software/detail/2 (accessed on 1 March 2023).
- Li, X.; Fu, C.; Xu, S.; Sham, C.W. Thyroid Ultrasound Image Database and Marker Mask Inpainting Method for Research and Development. Ultrasound Med. Biol. 2024, 50, 509–519. [Google Scholar] [CrossRef]
- Goodman, J.; Sarkani, S.; Mazzuchi, T. Distance-based probabilistic data augmentation for synthetic minority oversampling. ACM/IMS Trans. Data Sci. (TDS) 2022, 2, 40. [Google Scholar] [CrossRef]
- Khan, A.A.; Chaudhari, O.; Chandra, R. A review of ensemble learning and data augmentation models for class imbalanced problems: Combination, implementation and evaluation. Expert Syst. Appl. 2024, 244, 122778. [Google Scholar] [CrossRef]
- Fang, Y.; Liu, J.; Li, J.; Cheng, J.; Hu, J.; Yi, D.; Xiao, X.; Bhatti, U.A. Robust zero-watermarking algorithm for medical images based on SIFT and Bandelet-DCT. Multimed. Tools Appl. 2022, 81, 16863–16879. [Google Scholar] [CrossRef]
- Kheddar, H.; Himeur, Y.; Al-Maadeed, S.; Amira, A.; Bensaali, F. Deep transfer learning for automatic speech recognition: Towards better generalization. Knowl.-Based Syst. 2023, 277, 110851. [Google Scholar] [CrossRef]
- Habchi, Y.; Kheddar, H.; Himeur, Y. Ultrasound Images Classification of Thyroid Cancer using Deep Transfer Learning. In Proceedings of the 2024 International Conference on Telecommunications and Intelligent Systems (ICTIS), Djelfa, Algeria, 14–15 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]





| Ref. | Used Method | Datasets | BPM (%) | Limitations | Employing WT | Employing BT | Employing TL |
|---|---|---|---|---|---|---|---|
| [15] | Multi-view DL | Private data | Sensitivity: 84.00 Specificity: 63.00 F1 Score: 76.00 | Studies should validate the results on other imaging modalities | ✗ | ✗ | ✗ |
| [16] | AMMCNet | Private data | Accuracy: 85.70 Sensitivity: 90.00 Specificity: 90.90 | Small sample size and single-centre data | ✗ | ✗ | ✗ |
| [22] | Deep EL | Ensembl IntOGen | Accuracy: 96.00 Sensitivity: 92.00 Specificity: 100 | Limited dataset may affect generalisability | ✗ | ✗ | ✗ |
| [17] | CNN with K-means | DDTI | Accuracy: 81.50 Precision: 97.40, Sensitivity: 83.10 | Model performance depends on annotated data | ✗ | ✗ | ✗ |
| [18] | Deep CNN | Private data | Accuracy: 97.20 | High computational process | ✗ | ✗ | ✗ |
| [23] | MC-CNNs | DDTI, UCI thyroid and private datasets | Accuracy: 98.70 | The DL models require high computational power | ✗ | ✗ | ✗ |
| [29] | SL-FCN | DISH and FISH Breast, and Thyroid Dataset | Dice Score: 89.00 | SL-FCN requires significant computational power | ✗ | ✗ | ✗ |
| [19] | BLR and CNN | Clinicopathological | AUC: 89.00 | Using BLR but limited to retrospective data and needs validation on larger datasets | ✗ | ✗ | ✗ |
| [30] | DL and EL | DDTI | Accuracy: 92.83 Precision: 87.76 Specificity: 88.89 | Relies on a single public dataset | ✗ | ✗ | ✗ |
| [20] | Mask R-CNN, ResNet-50 and FPN | Private data | Accuracy: 87.00 Sensitivity: 80.00 Specificity: 92.00 | Single-province dataset and requiring broader validation | ✗ | ✗ | ✗ |
| [25] | GoogLeNet with TL | Hospital and public thyroid US images | Accuracy: 96.04 F1 Score: 98.74 Precision: 98.42 | The model requires high computational power | ✗ | ✗ | ✓ |
| [26] | Multi-channel DenseNet | Private data | Accuracy: 92.57 Sensitivity: 98.69 F1 Score: 95.96 | Dependence on high-quality annotations | ✗ | ✗ | ✓ |
| [24] | Dynamic ensemble TL | Private data | Accuracy: 93.00 Specificity: 95.00 F1 Score: 93.00 | Limited data sources | ✗ | ✗ | ✓ |
| [27] | Fine-tuned VGG-19 | Private data | Accuracy: 93.05 Sensitivity: 92.90 F1 Score: 92.80 | Limited comparison with other models | ✗ | ✗ | ✓ |
| [28] | WT-based CNN | Custom human thyroid OCT | Accuracy: 98.60 | The model adds extra computations compared to traditional CNNs | ✓ | ✗ | ✗ |
| [21] | AWT-AA | Private data | Accuracy: 95.00, Sensitivity: 97.50 Specificity: 86.00 | The study is based on US images from a single institution | ✓ | ✗ | ✗ |
| Attribute | Description |
|---|---|
| Image Count | 134 images |
| Image Format | PNG (some JPEG) |
| Image Resolution | pixels |
| Benign Cases | 14 images |
| Malignant Cases | 62 images |
| Image Modality | B-mode 2D greyscale US |
| Frame Rate | 15–30 fps (derived from video sequences) |
| US Equipment | Toshiba Nemio 30 and Nemio MX |
| Transducer Types | 12 MHz linear and convex transducers |
| Axial Resolution | 0.1–0.15 mm |
| Lateral Resolution | 0.5–1 mm |
| Penetration Depth | 4–6 cm |
| Field of View | 38–50 mm (linear), 60–80 mm (convex) |
| Dynamic Range | 50–70 dB |
| Transform | Accuracy | Sensitivity | Specificity | F1 |
|---|---|---|---|---|
| Wavelet | 0.9430 | 0.9302 | 0.9433 | 0.9584 |
| Bandelet (T = 10) | 0.9511 | 0.9622 | 0.9774 | 0.9807 |
| Bandelet (T = 20) | 0.9635 | 0.9108 | 0.9802 | 0.9645 |
| Bandelet (T = 30) | 0.9891 | 0.9811 | 0.9731 | 0.9889 |
| Bandelet (T = 40) | 0.9803 | 0.9723 | 0.9897 | 0.9746 |
| Bandelet (T = 50) | 0.9787 | 0.9794 | 0.9825 | 0.9764 |
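The accuracy, sensitivity, specificity, and F1 scores reported above follow the standard confusion-matrix definitions, with the malignant class taken as positive. A minimal sketch of those definitions, using made-up counts for a hypothetical 410-image validation split (not results from the paper):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts
    (malignant = positive class)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall on malignant cases
    specificity = tn / (tn + fp)          # recall on benign cases
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical counts for illustration only.
acc, sen, spe, f1 = binary_metrics(tp=198, fp=4, tn=202, fn=6)
print(f"accuracy={acc:.4f} sensitivity={sen:.4f} specificity={spe:.4f} f1={f1:.4f}")
```

Sensitivity is the clinically critical number here, since a false negative means a missed malignant nodule.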
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Habchi, Y.; Kheddar, H.; Ghanem, M.C.; Hwaidi, J. Adaptive Bandelet Transform and Transfer Learning for Geometry-Aware Thyroid Cancer Ultrasound Classification. Diagnostics 2026, 16, 554. https://doi.org/10.3390/diagnostics16040554

