Search Results (1)

Search Parameters:
Keywords = TurkerNeXtV2

37 pages, 3163 KB  
Article
TurkerNeXtV2: An Innovative CNN Model for Knee Osteoarthritis Pressure Image Classification
by Omer Esmez, Gulnihal Deniz, Furkan Bilek, Murat Gurger, Prabal Datta Barua, Sengul Dogan, Mehmet Baygin and Turker Tuncer
Diagnostics 2025, 15(19), 2478; https://doi.org/10.3390/diagnostics15192478 - 27 Sep 2025
Abstract
Background/Objectives: Lightweight CNNs for medical imaging remain limited. We propose TurkerNeXtV2, a compact CNN that introduces two new blocks: a pooling-based attention block with an inverted bottleneck (TNV2) and a hybrid downsampling module. These blocks improve stability and efficiency. The aim is to achieve transformer-level effectiveness while keeping the simplicity, low computational cost, and deployability of CNNs. Methods: The model was first pretrained on the Stable ImageNet-1k benchmark and then fine-tuned on a collected plantar-pressure osteoarthritis (OA) dataset. We also evaluated the model on a public blood-cell image dataset. Performance was measured by accuracy, precision, recall, and F1-score. Inference time per image and throughput (images per second) were recorded on an RTX 5080 GPU. Grad-CAM was used for qualitative explainability. Results: During pretraining on Stable ImageNet-1k, the model reached a validation accuracy of 87.77%. On the OA test set, the model achieved 93.40% accuracy (95% CI: 91.3–95.2%) with balanced precision and recall above 90%. On the blood-cell dataset, the test accuracy was 98.52%. The average inference time was 0.0078 s per image (≈128.8 images/s), which is comparable to strong CNN baselines and faster than the transformer baselines tested under the same settings. Conclusions: TurkerNeXtV2 delivers high accuracy with low computational cost. The pooling-based attention (TNV2) and the hybrid downsampling enable a lightweight yet effective design. The model is suitable for real-time and clinical use. Future work will include multi-center validation and broader tests across imaging modalities.
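The abstract describes the two new blocks only at a high level. The minimal PyTorch sketch below shows one plausible reading of a pooling-based attention block with an inverted bottleneck and of a hybrid downsampling module; the class names, kernel sizes, expansion ratio, and normalization choices are illustrative assumptions, not the published TurkerNeXtV2 definition.

```python
# Hedged sketch of the two block types named in the abstract.
# All layer choices here are assumptions for illustration only.
import torch
import torch.nn as nn


class TNV2Block(nn.Module):
    """Pooling-based attention with an inverted bottleneck (assumed layout)."""

    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        # Pooling-based attention: derive a per-pixel gate from average pooling
        # instead of query-key products.
        self.pool_attn = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.Sigmoid(),
        )
        # Inverted bottleneck: expand channels, apply nonlinearity, project back.
        self.inverted_bottleneck = nn.Sequential(
            nn.Conv2d(dim, dim * expansion, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(dim * expansion, dim, kernel_size=1),
        )
        self.norm = nn.BatchNorm2d(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.pool_attn(x)                           # gate features with pooled attention
        return x + self.inverted_bottleneck(self.norm(x))   # residual inverted bottleneck


class HybridDownsample(nn.Module):
    """Hybrid downsampling: strided-convolution and max-pooling paths fused."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.conv_path = nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=2, padding=1)
        self.pool_path = nn.Sequential(
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_dim, out_dim, kernel_size=1),
        )
        self.norm = nn.BatchNorm2d(out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(self.conv_path(x) + self.pool_path(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)                     # dummy feature map
    y = HybridDownsample(64, 128)(TNV2Block(64)(x))
    print(y.shape)                                     # torch.Size([1, 128, 28, 28])
```

In this reading, attention weights come from a cheap pooling-plus-1x1-convolution gate rather than self-attention, and downsampling combines a learned strided path with a pooled path, which is one way such blocks can trade attention-like behavior for CNN-level cost.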