Diagnosis of Mesothelioma Using Image Segmentation and Class-Based Deep Feature Transformations
Abstract
1. Introduction
- Use of the Segment Anything Model (SAM) for region-focused segmentation of CT images.
- Integration of transformer-based models for class-based feature extraction.
- Novel class-based image transformation through Decoder, GAN, and NeRV techniques.
- Application of a discriminative score-based selection to determine the most informative image representations.
- Achievement of high diagnostic accuracy through residual-based (ResNet-18) SVM classification.
2. Materials and Methodology
2.1. CT Image Dataset
2.2. Segment Anything Model
- Image Encoder: Processes the input image and converts its visual content into a high-dimensional deep representation, which is passed to the next stage.
- Prompt Encoder: Encodes the prompts provided by the user (e.g., a single point or multiple points), which indicate the part of the image the model should focus on, into vectors the model can interpret.
- Mask Decoder: Combines the image representation and prompt vectors produced by the previous two stages to generate a segmentation mask for the prompted region; in effect, it predicts which pixels belong to which object. SAM can generate multiple masks probabilistically, and the mask with the highest probability is selected during processing [15].
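A minimal sketch of this three-stage workflow, assuming the official segment-anything package and the vit_h backbone listed in the experimental settings; the checkpoint filename and the single-point prompt coordinates are placeholders, not values from the study:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load the ViT-H SAM variant (model type "vit_h"); the checkpoint file
# must be downloaded separately.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for an RGB CT slice
predictor.set_image(image)                        # the image encoder runs here

point_coords = np.array([[256, 300]])             # hypothetical foreground point
point_labels = np.array([1])                      # 1 = foreground prompt

# Prompt encoder + mask decoder: several candidate masks are returned,
# each with a predicted quality score.
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]              # keep the highest-scoring mask
```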
2.3. Transformer Models
2.4. Deep Generative Approaches
2.4.1. DecoderMLP
2.4.2. GAN Generator
2.4.3. NeRV-like
2.5. Discriminative Score Method
2.6. Proposed Hybrid Approach
3. Experimental Analysis
3.1. Experimental Settings
3.2. Experimental Results
4. Discussion
- Diagnostically meaningful regions in the CT images were highlighted using the SAM, enabling the deep learning models to focus on those regions while relegating irrelevant areas to the background.
- Instead of extracting a large number of feature columns from the FC layers of recently popular transformer architectures (CaiT, PVT-v2), a new final layer was added that produces only as many feature columns as there are classes, eliminating redundant feature columns from the hybrid model (see the first sketch after this list).
- The numerical values in the feature columns are rendered as 2D images using generative techniques (Decoder, GAN, and NeRV). This lets image-based approaches extract richer and more diverse numerical content from their architectural structures.
- Alternative image sets were created using the generative techniques (Decoder, GAN, and NeRV), and the best representation was selected per sample using a discriminative scoring method (see the second sketch after this list), preventing the model from considering uninformative images.
- Maintenance and update difficulty: The model’s structure includes many components, which may make it difficult to maintain and develop in the long term.
- The complexity of the decision-making process: Hybrid systems consisting of multiple layers and components can make interpreting the model’s outputs difficult. Such complex structures may also result in additional time costs during the training and inference processes.
- Computational resource requirements: Integrating different architectures, such as the SAM, transformer, GAN, Decoder, and NeRV, may require high processing power and memory in real-time applications.
- The proposed hybrid model is limited by the use of a single-center CT dataset that is not publicly accessible due to ethical constraints. The absence of external validation on independent datasets may limit the generalizability of the findings. Future studies should aim to evaluate the performance of the model on multicenter or publicly available datasets of mesothelioma or other related diseases to validate its broader applicability.
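To make the class-based extraction and generative rendering steps concrete, the sketch below shows one minimal interpretation in PyTorch. It assumes the timm implementations of the two transformers named in the experimental settings; the DecoderMLP shown is a hypothetical stand-in for the paper's generator, included only to illustrate the vector-to-image idea, and the 64 × 64 output size is an assumption:

```python
import timm
import torch
import torch.nn as nn

# Class-based feature extraction: replace the default head with a 2-unit
# final layer so the backbone emits exactly one feature column per class.
backbone = timm.create_model("cait_s24_224", pretrained=True, num_classes=2)
backbone.eval()

x = torch.randn(1, 3, 224, 224)          # stand-in for a segmented CT slice
with torch.no_grad():
    class_features = backbone(x)          # shape (1, 2)

# Hypothetical DecoderMLP: renders the 2-dim feature vector as a small
# grayscale image, in the spirit of the Decoder/GAN/NeRV generators.
decoder_mlp = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 1024), nn.ReLU(),
    nn.Linear(1024, 64 * 64), nn.Sigmoid(),
)
rendered = decoder_mlp(class_features).reshape(1, 1, 64, 64)
```

The GAN and NeRV renderings would replace the MLP with a generator network or an implicit neural representation, respectively, while keeping the same class-count input.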
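The paper's exact discriminative score formula is not reproduced here; the sketch below encodes one plausible centroid-based reading (in the spirit of the centroid methods cited in [25,26]): each candidate rendering is scored by how much closer it lies to its own class centroid than to the opposing one, and the highest-scoring method (decoder, gan, or nerv) is kept per sample, yielding per-index selection lists of the kind tabulated in the results:

```python
import numpy as np

def discriminative_score(feat: np.ndarray,
                         own_centroid: np.ndarray,
                         other_centroid: np.ndarray) -> float:
    # Fisher-ratio-like criterion: large when the representation sits far
    # from the opposing class centroid and close to its own.
    eps = 1e-8
    return float(np.linalg.norm(feat - other_centroid) /
                 (np.linalg.norm(feat - own_centroid) + eps))

def select_rendering(renderings: dict,
                     own_centroid: np.ndarray,
                     other_centroid: np.ndarray) -> str:
    # renderings: {"decoder": vec, "gan": vec, "nerv": vec}, each a
    # flattened rendered image; returns the most discriminative method.
    scores = {m: discriminative_score(v, own_centroid, other_centroid)
              for m, v in renderings.items()}
    return max(scores, key=scores.get)
```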
5. Conclusions
- Increased interpretability through visualization of numerical data.
- Production of stable and transparent results thanks to a simplified feature selection mechanism.
- Flexible architecture that can adapt to different clinical scenarios and treatment protocols.
- Reduced subjectivity in the decision-making processes of expert physicians through an objective decision-making mechanism.
- Reduced risk of confusion with other disease types during early diagnosis.
These features demonstrate that the model has broad application potential not only in mesothelioma diagnosis but also in other medical imaging fields.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Popat, S.; Baas, P.; Faivre-Finn, C.; Girard, N.; Nicholson, A.G.; Nowak, A.K.; Opitz, I.; Scherpereel, A.; Reck, M. Malignant Pleural Mesothelioma: ESMO Clinical Practice Guidelines for Diagnosis, Treatment and Follow-Up. Ann. Oncol. 2022, 33, 129–142.
- Odgerel, C.-O.; Takahashi, K.; Sorahan, T.; Driscoll, T.; Fitzmaurice, C.; Yoko-o, M.; Sawanyawisuth, K.; Furuya, S.; Tanaka, F.; Horie, S.; et al. Estimation of the Global Burden of Mesothelioma Deaths from Incomplete National Mortality Data. Occup. Environ. Med. 2017, 74, 851–858.
- Metintas, M.; Hillerdal, G.; Metintas, S. Malignant Mesothelioma Due to Environmental Exposure to Erionite: Follow-Up of a Turkish Emigrant Cohort. Eur. Respir. J. 1999, 13, 523–526.
- Ahmadzada, T.; Reid, G.; Kao, S. Biomarkers in Malignant Pleural Mesothelioma: Current Status and Future Directions. J. Thorac. Dis. 2018, 10, S1003–S1007.
- Courtiol, P.; Maussion, C.; Moarii, M.; Pronier, E.; Pilcer, S.; Sefta, M.; Manceron, P.; Toldo, S.; Zaslavskiy, M.; Le Stang, N.; et al. Deep Learning-Based Classification of Mesothelioma Improves Prediction of Patient Outcome. Nat. Med. 2019, 25, 1519–1525.
- Baas, P.; Scherpereel, A.; Nowak, A.K.; Fujimoto, N.; Peters, S.; Tsao, A.S.; Mansfield, A.S.; Popat, S.; Jahan, T.; Antonia, S.; et al. First-Line Nivolumab Plus Ipilimumab in Unresectable Malignant Pleural Mesothelioma (CheckMate 743): A Multicentre, Randomised, Open-Label, Phase 3 Trial. Lancet 2021, 397, 375–386.
- Szlosarek, P.W.; Creelan, B.C.; Sarkodie, T.; Nolan, L.; Taylor, P.; Olevsky, O.; Grosso, F.; Cortinovis, D.; Chitnis, M.; Roy, A.; et al. Pegargiminase Plus First-Line Chemotherapy in Patients with Nonepithelioid Pleural Mesothelioma. JAMA Oncol. 2024, 10, 475.
- Luu, V.P.; Fiorini, M.; Combes, S.; Quemeneur, E.; Bonneville, M.; Bousquet, P.J. Challenges of Artificial Intelligence in Precision Oncology: Public-Private Partnerships Including National Health Agencies as an Asset to Make It Happen. Ann. Oncol. 2024, 35, 154–158.
- Kitajima, K.; Matsuo, H.; Kono, A.; Kuribayashi, K.; Kijima, T.; Hashimoto, M.; Hasegawa, S.; Murakami, T.; Yamakado, K. Deep Learning with Deep Convolutional Neural Network Using FDG-PET/CT for Malignant Pleural Mesothelioma Diagnosis. Oncotarget 2021, 12, 1187–1196.
- Gill, T.S.; Shirazi, M.A.; Zaidi, S.S.H. Early Detection of Mesothelioma Using Machine Learning Algorithms. In Proceedings of the IEEC 2023, Karachi, Pakistan, 20 September 2023; MDPI: Basel, Switzerland, 2023; p. 6.
- Cheng, T.S.Y.; Liao, X. Binary Classification of Malignant Mesothelioma: A Comparative Study. J. Data Sci. 2023, 21, 205–224.
- Kidd, A.C.; Anderson, O.; Cowell, G.W.; Weir, A.J.; Voisey, J.P.; Evison, M.; Tsim, S.; Goatman, K.A.; Blyth, K.G. Fully Automated Volumetric Measurement of Malignant Pleural Mesothelioma by Deep Learning AI: Validation and Comparison with Modified RECIST Response Criteria. Thorax 2022, 77, 1251–1259.
- Choudhury, A. Predicting Cancer Using Supervised Machine Learning: Mesothelioma. Technol. Health Care 2021, 29, 45–58.
- Mazurowski, M.A.; Dong, H.; Gu, H.; Yang, J.; Konz, N.; Zhang, Y. Segment Anything Model for Medical Image Analysis: An Experimental Study. Med. Image Anal. 2023, 89, 102918.
- Wu, J.; Wang, Z.; Hong, M.; Ji, W.; Fu, H.; Xu, Y.; Xu, M.; Jin, Y. Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation. Med. Image Anal. 2025, 102, 103547.
- Aydın, S.; Ağar, M.; Çakmak, M.; Koç, M.; Toğaçar, M. Detection of Aspergilloma Disease Using Feature-Selection-Based Vision Transformers. Diagnostics 2025, 15, 26.
- Wang, Y.; Deng, Y.; Zheng, Y.; Chattopadhyay, P.; Wang, L. Vision Transformers for Image Classification: A Comparative Survey. Technologies 2025, 13, 32.
- Dinh, M.-T.; Choi, D.-J.; Lee, G.-S. DenseTextPVT: Pyramid Vision Transformer with Deep Multi-Scale Feature Refinement Network for Dense Text Detection. Sensors 2023, 23, 5889.
- Yüksel, N.; Börklü, H.R. A Generative Deep Learning Approach for Improving the Mechanical Performance of Structural Components. Appl. Sci. 2024, 14, 3564.
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. arXiv 2020, arXiv:2003.08934.
- Aggarwal, A.; Mittal, M.; Battineni, G. Generative Adversarial Network: An Overview of Theory and Applications. Int. J. Inf. Manag. Data Insights 2021, 1, 100004.
- Abedi, M.; Hempel, L.; Sadeghi, S.; Kirsten, T. GAN-Based Approaches for Generating Structured Data in the Medical Domain. Appl. Sci. 2022, 12, 7075.
- Bai, Y.; Dong, C.; Wang, C.; Yuan, C. PS-NeRV: Patch-Wise Stylized Neural Representations for Videos. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8 October 2023; IEEE: New York, NY, USA, 2023; pp. 41–45.
- Ji, J.; Fu, S.; Man, J. I-NeRV: A Single-Network Implicit Neural Representation for Efficient Video Inpainting. Mathematics 2025, 13, 1188.
- Ali, A.; Khan, Z.; Aldahmani, S. Centroid Decision Forest. arXiv 2025, arXiv:2503.19306.
- Ghosh, T.; Kirby, M. Nonlinear Feature Selection Using Sparsity-Promoted Centroid-Encoder. Neural Comput. Appl. 2023, 35, 21883–21902.
- Gür, Y.E.; Toğaçar, M.; Solak, B. Integration of CNN Models and Machine Learning Methods in Credit Score Classification: 2D Image Transformation and Feature Extraction. Comput. Econ. 2025, 65, 2991–3035.
- Aslan, S. A Deep Learning-Based Sentiment Analysis Approach (MF-CNN-BILSTM) and Topic Modeling of Tweets Related to the Ukraine–Russia Conflict. Appl. Soft Comput. 2023, 143, 110404.
- Başaran, E.; Çelik, Y. Skin Cancer Diagnosis Using CNN Features with Genetic Algorithm and Particle Swarm Optimization Methods. Trans. Inst. Meas. Control 2024, 46, 2706–2713.
- Çalışkan, A. Detecting Human Activity Types from 3D Posture Data Using Deep Learning Models. Biomed. Signal Process. Control 2023, 81, 104479.
- Yildirim, M.; Cengil, E.; Eroglu, Y.; Cinar, A. Detection and Classification of Glioma, Meningioma, Pituitary Tumor, and Normal in Brain Magnetic Resonance Imaging Using Deep Learning-Based Hybrid Model. Iran J. Comput. Sci. 2023, 6, 455–464.
- Şener, A.; Doğan, G.; Ergen, B. A Novel Convolutional Neural Network Model with Hybrid Attentional Atrous Convolution Module for Detecting the Areas Affected by the Flood. Earth Sci. Inform. 2024, 17, 193–209.
- Aktas, A.; Cap, T.; Serbes, G.; Ilhan, H.O.; Uzun, H. Advanced Multi-Level Ensemble Learning Approaches for Comprehensive Sperm Morphology Assessment. Diagnostics 2025, 15, 1564.
- Li, Y.; Cai, B.; Wang, B.; Lv, Y.; He, W.; Xie, X.; Hou, D. Differentiating Malignant Pleural Mesothelioma and Metastatic Pleural Disease Based on a Machine Learning Model with Primary CT Signs: A Multicentre Study. Heliyon 2022, 8, e11383.
- Fekri-Ershad, S.; Dehkordi, K.B. A Flexible Multi-Channel Deep Network Leveraging Texture and Spatial Features for Diagnosing New COVID-19 Variants in Lung CT Scans. Tomography 2025, 11, 99.
Characteristic | Category/Unit | Patients (N, %) |
---|---|---|
Gender | Male | 98 |
 | Female | 74 |
Age | Year, mean ± SD | 56.8 ± 14.6 |
Stage | 1 | 15 (8.7%) |
 | 2 | 42 (24.4%) |
 | 3 | 68 (39.5%) |
 | 4 | 47 (27.3%) |
Histological type | Epithelioid | 133 (77.3%) |
 | Biphasic | 16 (9.3%) |
 | Sarcomatoid | 15 (8.7%) |
 | Not determined | 8 (4.7%) |
Asbestos exposure | No | 41 (23.8%) |
 | Yes | 131 (76.2%) |
Smoking | No | 75 (43.6%) |
 | Yes | 97 (56.4%) |
Pain | No | 70 (40.7%) |
 | Yes | 102 (59.3%) |
Weight loss | No | 57 (33.1%) |
 | Yes | 115 (66.9%) |
Model (Type)/Method | Parameter | Preference/Value |
---|---|---|
CaiT and PVT-v2 | Loss function | Cross Entropy |
 | Learning rate | 10⁻⁴ |
 | CaiT model type | cait_s24_224 |
 | PVT-v2 model type | pvt_v2_b3 |
 | Optimization | Adam |
 | Input size | 224 × 224 |
 | Epoch | 15 |
 | Mini-batch | 32 |
 | Random seed | 42 (fixed) |
 | Learning rate schedule | Constant (no decay) |
 | Weight decay | Default value (0.0) |
 | Early stopping | Not applied |
 | Preprocessing | RandomResizedCrop(224), RandomHorizontalFlip(), ToTensor(), Normalize(mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]) |
 | Training:testing split | 0.7:0.3 |
SVM | Kernel function | Cubic |
 | Kernel scale | Auto |
 | Box constraint level | 1 |
 | Multiclass method | One-vs-One |
SAM | Model type | vit_h |
ResNet-18 (used with SVM) | Loss function | Cross Entropy |
 | Optimization | Adam |
 | Pre-trained weights | ImageNet |
 | Fine-tuning | Last FC layers |
 | Epoch | 15 |
 | Mini-batch | 32 |
 | Input size | 224 × 224 |
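The settings in the table above can be expressed directly in code. A minimal sketch assuming PyTorch/torchvision, timm, and scikit-learn; mapping the "Cubic" SVM kernel to a degree-3 polynomial kernel and "kernel scale Auto" to gamma="auto" are assumptions, and the training loop itself is omitted:

```python
import timm
import torch
from torch import nn, optim
from torchvision import transforms
from sklearn.svm import SVC

torch.manual_seed(42)  # fixed random seed, per the table

# Preprocessing pipeline exactly as listed.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Transformer settings: Adam, lr 1e-4, no weight decay, constant schedule,
# cross-entropy loss, 15 epochs, mini-batch 32.
model = timm.create_model("pvt_v2_b3", pretrained=True, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4, weight_decay=0.0)

# SVM settings: cubic (degree-3 polynomial) kernel, box constraint 1,
# one-vs-one multiclass strategy.
svm = SVC(kernel="poly", degree=3, gamma="auto", C=1.0,
          decision_function_shape="ovo")
```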
Transformer Model | Dataset | Se | Sp | Pre | F-Scr | Acc |
---|---|---|---|---|---|---|
CaiT | Original | 85.47 | 85.47 | 85.50 | 85.47 | 85.48 |
PVT-v2 | Original | 85.81 | 85.81 | 85.82 | 85.81 | 85.81 |
CaiT | Segmented | 93.40 | 93.40 | 93.43 | 93.40 | 93.40 |
PVT-v2 | Segmented | 94.39 | 94.39 | 94.39 | 94.39 | 94.39 |
Se: sensitivity; Sp: specificity; Pre: precision; F-Scr: F-score; Acc: accuracy (all values in %).
Model | Image Set | Se | Sp | Pre | F-Scr | Acc |
---|---|---|---|---|---|---|
CaiT | Generated using Decoder | 95.38 | 95.38 | 95.41 | 95.38 | 95.38 |
 | Generated using GAN | 94.05 | 94.05 | 94.35 | 94.05 | 94.06 |
 | Generated using NeRV | 95.37 | 95.37 | 95.46 | 95.38 | 95.38 |
PVT-v2 | Generated using Decoder | 94.37 | 94.37 | 94.97 | 94.37 | 94.39 |
 | Generated using GAN | 95.38 | 95.38 | 95.39 | 95.38 | 95.38 |
 | Generated using NeRV | 95.38 | 95.38 | 95.39 | 95.38 | 95.38 |
Selection List of Images Chosen by the Discriminative Score in the CaiT Model | Selection List of Images Chosen by the Discriminative Score in the PVT-v2 Model |
---|---|
Index,Selected_Method,Label,Image_Path | Index,Selected_Method,Label,Image_Path |
0,decoder,Diseased,…\decoder\Diseased\img_0.png | 0,nerv,Diseased,…\nerv\Diseased\img_0.png |
1,decoder,Diseased,…\decoder\Diseased\img_1.png | 1,nerv,Diseased,…\nerv\Diseased\img_1.png |
2,gan,Diseased,…\gan\Diseased\img_2.png | 2,nerv,Diseased,…\nerv\Diseased\img_2.png |
3,decoder,Diseased,…\decoder\Diseased\img_3.png | 3,nerv,Diseased,…\nerv\Diseased\img_3.png |
4,gan,Diseased,…\gan\Diseased\img_4.png | 4,nerv,Diseased,…\nerv\Diseased\img_4.png |
5,decoder,Diseased,…\decoder\No disease\img_5.png | 5,nerv,Diseased,…\nerv\No disease\img_5.png |
6,gan,Diseased,…\gan\No disease\img_6.png | 6,nerv,Diseased,…\nerv\No disease\img_6.png |
7,gan,Diseased,…\gan\No disease\img_7.png | 7,nerv,Diseased,…\nerv\No disease\img_7.png |
8,decoder,Diseased,…\decoder\Diseased\img_8.png | 8,nerv,Diseased,…\nerv\Diseased\img_8.png |
9,decoder,Diseased,…\decoder\No disease\img_9.png | 9,nerv,Diseased,…\nerv\No disease\img_9.png |
10,decoder,Diseased,…\decoder\No disease\img_10.png | 10,nerv,Diseased,…\nerv\No disease\img_10.png |
11,decoder,Diseased,…\decoder\Diseased\img_11.png | 11,nerv,Diseased,…\nerv\Diseased\img_11.png |
12,decoder,Diseased,…\decoder\No disease\img_12.png | 12,nerv,Diseased,…\nerv\No disease\img_12.png |
13,decoder,Diseased,…\decoder\No disease\img_13.png | 13,nerv,Diseased,…\nerv\No disease\img_13.png |
14,decoder,Diseased,…\decoder\No disease\img_14.png | 14,nerv,Diseased,…\nerv\No disease\img_14.png |
15,decoder,Diseased,…\decoder\Diseased\img_15.png | 15,nerv,Diseased,…\nerv\Diseased\img_15.png |
16,decoder,Diseased,…\decoder\Diseased\img_16.png | 16,nerv,Diseased,…\nerv\Diseased\img_16.png |
17,decoder,Diseased,…\decoder\No disease\img_17.png | 17,nerv,Diseased,…\nerv\No disease\img_17.png |
18,decoder,Diseased,…\decoder\No disease\img_18.png | 18,nerv,Diseased,…\nerv\No disease\img_18.png |
19,decoder,Diseased,…\decoder\No disease\img_19.png | 19,nerv,Diseased,…\nerv\No disease\img_19.png |
20,decoder,Diseased,…\decoder\No disease\img_20.png | 20,nerv,Diseased,…\nerv\No disease\img_20.png |
21,decoder,Diseased,…\decoder\No disease\img_21.png | 21,nerv,Diseased,…\nerv\No disease\img_21.png |
22,decoder,Diseased,…\decoder\Diseased\img_22.png | 22,nerv,Diseased,…\nerv\Diseased\img_22.png |
23,decoder,Diseased,…\decoder\No disease\img_23.png | 23,nerv,Diseased,…\nerv\No disease\img_23.png |
24,decoder,Diseased,…\decoder\No disease\img_24.png | 24,nerv,Diseased,…\nerv\No disease\img_24.png |
25,decoder,Diseased,…\decoder\Diseased\img_25.png | 25,nerv,Diseased,…\nerv\Diseased\img_25.png |
… | … |
993,nerv,No disease,…\nerv\No disease\img_993.png | 993,gan,No disease,…\gan\No disease\img_993.png |
994,nerv,No disease,…\nerv\Diseased\img_994.png | 994,gan,No disease,…\gan\Diseased\img_994.png |
995,nerv,No disease,…\nerv\No disease\img_995.png | 995,gan,No disease,…\gan\No disease\img_995.png |
996,nerv,No disease,…\nerv\Diseased\img_996.png | 996,gan,No disease,…\gan\Diseased\img_996.png |
997,nerv,No disease,…\nerv\No disease\img_997.png | 997,gan,No disease,…\gan\No disease\img_997.png |
998,nerv,No disease,…\nerv\No disease\img_998.png | 998,gan,No disease,…\gan\No disease\img_998.png |
999,nerv,No disease,…\nerv\No disease\img_999.png | 999,gan,No disease,…\gan\No disease\img_999.png |
1000,nerv,No disease,…\nerv\Diseased\img_1000.png | 1000,gan,No disease,…\gan\Diseased\img_1000.png |
1001,nerv,No disease,…\nerv\Diseased\img_1001.png | 1001,gan,No disease,…\gan\Diseased\img_1001.png |
1002,nerv,No disease,…\nerv\Diseased\img_1002.png | 1002,gan,No disease,…\gan\Diseased\img_1002.png |
1003,nerv,No disease,…\nerv\No disease\img_1003.png | 1003,gan,No disease,…\gan\No disease\img_1003.png |
1004,nerv,No disease,…\nerv\No disease\img_1004.png | 1004,gan,No disease,…\gan\No disease\img_1004.png |
1005,nerv,No disease,…\nerv\Diseased\img_1005.png | 1005,gan,No disease,…\gan\Diseased\img_1005.png |
1006,nerv,No disease,…\nerv\Diseased\img_1006.png | 1006,gan,No disease,…\gan\Diseased\img_1006.png |
1007,nerv,No disease,…\nerv\Diseased\img_1007.png | 1007,gan,No disease,…\gan\Diseased\img_1007.png |
Method | Dataset Technique | Image Set (Source Model) | Se | Sp | Pre | F-Scr | Acc |
---|---|---|---|---|---|---|---|
Discriminative Score | Holdout (training: 0.7/test: 0.3) | CaiT-based | 99.67 | 99.67 | 99.67 | 99.67 | 99.67 |
 | | PVT-v2-based | 99.67 | 99.67 | 99.67 | 99.67 | 99.67 |
 | Cross-validation (k = 5) | CaiT-based | 99.70 | 99.70 | 99.70 | 99.70 | 99.70 |
 | | PVT-v2-based | 99.80 | 99.80 | 99.80 | 99.80 | 99.80 |
Model/Dataset Configuration | AUC (95% CI) | Accuracy (95% CI) | Sensitivity (95% CI) | Specificity (95% CI) | Precision (95% CI) |
---|---|---|---|---|---|
CaiT-based (holdout) | 0.998 (0.995–1.000) | 0.997 (0.990–1.000) | 1.000 (1.000–1.000) | 0.993 (0.979–1.000) | 0.993 (0.979–1.000) |
PVT-v2-based (holdout) | 0.997 (0.991–1.000) | 0.997 (0.990–1.000) | 0.994 (0.978–1.000) | 1.000 (1.000–1.000) | 1.000 (1.000–1.000) |
CaiT-based (cross-validation) | 0.997 (0.993–1.000) | 0.997 (0.993–1.000) | 0.998 (0.994–1.000) | 0.996 (0.990–1.000) | 0.996 (0.990–1.000) |
PVT-v2-based (cross-validation) | 0.998 (0.995–1.000) | 0.998 (0.995–1.000) | 0.998 (0.994–1.000) | 0.998 (0.994–1.000) | 0.998 (0.994–1.000) |
Experiment Setup | Dataset Technique | Acc (%) |
---|---|---|
Proposed Hybrid Model (SAM + generative rendering + DS + SVM) | Holdout (CaiT-based) | 99.67 |
 | Holdout (PVT-v2-based) | 99.67 |
 | Cross-val. (CaiT-based) | 99.70 |
 | Cross-val. (PVT-v2-based) | 99.80 |
Without Generative Rendering (direct logits → SVM) | Holdout | 93.73 |
 | Cross-val. | 95.05 |
Simpler Feature Selector (variance threshold instead of DS) | Holdout | 95.71 |
 | Cross-val. | 96.13 |
Replacing SVM with Linear Classifier (logistic regression) | Holdout | 95.05 |
 | Cross-val. | 95.63 |
Paper | Year | Number of Patients | Number of Classes | Model/Method | Acc (%) |
---|---|---|---|---|---|
Kitajima et al. [9] | 2021 | 875 | 2 | 3D-CNN | 77.3 |
Li et al. [34] | 2022 | 397 | 2 | Multivariate logistic regression | 95.5 |
This study | 2025 | 1008 | 2 | SAM + transformers + class-based feature extraction + deep generative rendering + discriminative score | 99.80 (cross-val.), 99.67 (holdout) |