How Resilient Are Deep Learning Models in Medical Image Analysis? The Case of the Moment-Based Adversarial Attack (Mb-AdA)
Abstract
1. Introduction
The main contributions of this work are as follows:
- (1) Highlight the vulnerability of medical images to adversarial attacks and explain why some tasks are more vulnerable than others.
- (2) Propose a generalized attack that significantly degrades the performance of DL models.
- (3) Investigate adversarial training with the proposed attack as a universal defense method.
The proposed attack has the following properties:
- (1) It can be characterized as a black-box attack, since it requires no knowledge of the MIA task, the DL model's structure, or the dataset used for its training.
- (2) It has several degrees of freedom and can be adapted to any DL model and any image resolution.
- (3) Its effect is adjustable.
- (4) The way it affects DL models is fully explainable, offering insight toward the goal of DL interpretability and explainability.
2. Materials and Methods
- The image is transformed into a moment set using a discrete orthogonal image moment family.
- By excluding moments of specific orders, the image is reconstructed, producing an approximation.
- The resulting image can be used as an adversarial example to attack any DL model.
- Read the image dimensions N, M.
- Select the attack order K.
- Compute the polynomials using (4) and store them in a matrix.
- Compute the normalization factors using (5) and store them in another matrix.
- Compute the moment set for every image channel using Equation (2) with the above matrices. In every iteration of the sum, from 1 to the selected order K, compute the kernel using the appropriate values from the polynomial matrix, saving it in a new kernel matrix.
- Reconstruct the image using the above moment set and the saved kernel matrix using (3).
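The steps above amount to projecting each image channel onto a discrete orthogonal polynomial basis, zeroing out moments above the attack order K, and reconstructing. A minimal NumPy sketch of this idea follows; as an assumption for illustration, it builds a discrete orthonormal polynomial basis via QR decomposition of a Vandermonde matrix rather than the specific moment family and recurrence formulas, Equations (2)–(5), used in the paper, and the function names `mb_ada` and `mb_ada_image` are hypothetical:

```python
import numpy as np

def discrete_orthonormal_basis(n):
    # Columns of Q are orthonormal polynomials of degree 0..n-1 sampled on a
    # discrete grid (a stand-in for a discrete orthogonal moment family such
    # as Tchebichef polynomials; QR of the increasing-degree Vandermonde
    # matrix guarantees column p spans polynomials of degree <= p).
    x = np.linspace(-1.0, 1.0, n)
    V = np.vander(x, n, increasing=True)   # V[i, p] = x_i ** p
    Q, _ = np.linalg.qr(V)
    return Q

def mb_ada(channel, K):
    """Reconstruct one image channel keeping only moments of order p + q <= K."""
    N, M = channel.shape
    Pn, Pm = discrete_orthonormal_basis(N), discrete_orthonormal_basis(M)
    moments = Pn.T @ channel @ Pm          # full moment set (Pn orthogonal)
    p, q = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    moments[p + q > K] = 0.0               # exclude moments above order K
    return Pn @ moments @ Pm.T             # approximate reconstruction

def mb_ada_image(img, K):
    # Apply the attack per channel; grayscale images pass through directly.
    if img.ndim == 2:
        return mb_ada(img, K)
    return np.stack([mb_ada(img[..., c], K) for c in range(img.shape[-1])],
                    axis=-1)
```

Lowering K discards more high-order moments, producing a coarser approximation of the original image: this is the adjustable effect noted above, and it matches the trade-off visible in the results tables, where smaller attack orders yield lower SSIM and lower model performance.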
3. Results
3.1. Attack Results
3.2. Defence Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Rawat, W.; Wang, Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput. 2017, 29, 2352–2449. [Google Scholar] [CrossRef] [PubMed]
- Wang, R.; Lei, T.; Cui, R.; Zhang, B.; Meng, H.; Nandi, A.K. Medical Image Segmentation Using Deep Learning: A Survey. IET Image Process. 2022, 16, 1243–1267. [Google Scholar] [CrossRef]
- Zhao, Z.-Q.; Zheng, P.; Xu, S.; Wu, X. Object Detection with Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef] [Green Version]
- Zheng, C.; Wu, W.; Chen, C.; Yang, T.; Zhu, S.; Shen, J.; Kehtarnavaz, N.; Shah, M. Deep Learning-Based Human Pose Estimation: A Survey. arXiv 2022, arXiv:2012.13392v4. [Google Scholar]
- Ryu, J. A Visual Saliency-Based Neural Network Architecture for No-Reference Image Quality Assessment. Appl. Sci. 2022, 12, 9567. [Google Scholar] [CrossRef]
- Li, Q.; Zhu, J.; Liu, J.; Cao, R.; Li, Q.; Jia, S.; Qiu, G. Deep Learning based Monocular Depth Prediction: Datasets, Methods and Applications. arXiv 2020, arXiv:2011.04123. [Google Scholar]
- Rahim, T.; Usman, M.A.; Shin, S.Y. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput. Med. Imaging Graph. 2020, 85, 101767. [Google Scholar] [CrossRef]
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
- Maliamanis, T.; Papakostas, G. Adversarial computer vision: A current snapshot. In Proceedings of the Twelfth International Conference on Machine Vision (ICMV 2019), Amsterdam, Netherlands, 16–18 November 2019; Osten, W., Nikolaev, D.P., Eds.; SPIE: Amsterdam, The Netherlands, 2020; p. 121. [Google Scholar]
- Schneider, J.; Apruzzese, G. Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike. arXiv 2022, arXiv:2203.10166. [Google Scholar]
- Yao, Z.; Gholami, A.; Xu, P.; Keutzer, K.; Mahoney, M.W. Trust region based adversarial attack on neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2019, Long Beach, CA, USA, 15–20 June 2019; pp. 11350–11359. [Google Scholar]
- Apostolidis, K.D.; Papakostas, G.A. A Survey on Adversarial Deep Learning Robustness in Medical Image Analysis. Electronics 2021, 10, 2132. [Google Scholar] [CrossRef]
- Huq, A.; Pervin, M.T. Analysis of Adversarial Attacks on Skin Cancer Recognition. In Proceedings of the 2020 International Conference on Data Science and Its Applications (ICoDSA), Bandung, Indonesia, 5–6 August 2020; IEEE: Bandung, Indonesia, 2020; pp. 1–4. [Google Scholar]
- Paschali, M.; Conjeti, S.; Navarro, F.; Navab, N. Generalizability vs. Robustness: Investigating Medical Imaging Networks Using Adversarial Examples. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11070, pp. 493–501. ISBN 978-3-030-00927-4. [Google Scholar]
- Li, Y.; Zhang, H.; Bermudez, C.; Chen, Y.; Landman, B.A.; Vorobeychik, Y. Anatomical context protects deep learning from adversarial perturbations in medical imaging. Neurocomputing 2020, 379, 370–378. [Google Scholar] [CrossRef] [PubMed]
- Risk Susceptibility of Brain Tumor Classification to Adversarial Attacks. Available online: https://link.springer.com/chapter/10.1007/978-3-030-31964-9_17 (accessed on 4 June 2021).
- Cheng, G.; Ji, H. Adversarial Perturbation on MRI Modalities in Brain Tumor Segmentation. IEEE Access 2020, 8, 206009–206015. [Google Scholar] [CrossRef]
- Anand, D.; Tank, D.; Tibrewal, H.; Sethi, A. Self-Supervision vs. Transfer Learning: Robust Biomedical Image Analysis Against Adversarial Attacks. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Iowa City, IA, USA, 2020; pp. 1159–1163. [Google Scholar]
- Hirano, H.; Minagi, A.; Takemoto, K. Universal adversarial attacks on deep neural networks for medical image classification. BMC Med. Imaging 2021, 21, 9. [Google Scholar] [CrossRef]
- Ma, X.; Niu, Y.; Gu, L.; Wang, Y.; Zhao, Y.; Bailey, J.; Lu, F. Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems. Pattern Recognit. 2021, 110, 107332. [Google Scholar] [CrossRef]
- Finlayson, S.G.; Chung, H.W.; Kohane, I.S.; Beam, A.L. Adversarial Attacks Against Medical Deep Learning Systems. arXiv 2018, arXiv:1804.05296. [Google Scholar]
- Shah, A.; Lynch, S.; Niemeijer, M.; Amelon, R.; Clarida, W.; Folk, J.; Russell, S.; Wu, X.; Abramoff, M.D. Susceptibility to misdiagnosis of adversarial images by deep learning based retinal image analysis algorithms. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; IEEE: Washington, DC, USA, 2018; pp. 1454–1457. [Google Scholar]
- Byra, M.; Styczynski, G.; Szmigielski, C.; Kalinowski, P.; Michalowski, L.; Paluszkiewicz, R.; Ziarkiewicz-Wroblewska, B.; Zieniewicz, K.; Nowicki, A. Adversarial attacks on deep learning models for fatty liver disease classification by modification of ultrasound image reconstruction method. In Proceedings of the 2020 IEEE International Ultrasonics Symposium (IUS), Las Vegas, NV, USA, 7–11 September 2020. [Google Scholar]
- Ozbulak, U.; Van Messem, A.; De Neve, W. Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation. arXiv 2019, arXiv:1907.13124. [Google Scholar]
- Yao, Q.; He, Z.; Lin, Y.; Ma, K.; Zheng, Y.; Zhou, S.K. A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks. arXiv 2021, arXiv:2012.09501. [Google Scholar]
- Kugler, D. Physical Attacks in Dermoscopy: An Evaluation of Robustness for clinical Deep-Learning. In Proceedings of the Medical Imaging with Deep Learning (MIDL), London, UK, 8–10 July 2019; pp. 1–11. [Google Scholar]
- Vatian, A.; Gusarova, N.; Dobrenko, N.; Dudorov, S.; Nigmatullin, N.; Shalyto, A.; Lobantsev, A. Impact of Adversarial Examples on the Efficiency of Interpretation and Use of Information from High-Tech Medical Images. In Proceedings of the 2019 24th Conference of Open Innovations Association (FRUCT), Moscow, Russia, 8–12 April 2019; IEEE: Moscow, Russia, 2019; pp. 472–478. [Google Scholar]
- Chen, L.; Bentley, P.; Mori, K.; Misawa, K.; Fujiwara, M.; Rueckert, D. Intelligent image synthesis to attack a segmentation CNN using adversarial learning. arXiv 2019, arXiv:1909.11167. [Google Scholar]
- Tian, B.; Guo, Q.; Juefei-Xu, F.; Chan, W.L.; Cheng, Y.; Li, X.; Xie, X.; Qin, S. Bias Field Poses a Threat to DNN-based X-ray Recognition. arXiv 2021, arXiv:2009.09247. [Google Scholar]
- Papakostas, G.A. Over 50 years of image moments and moment invariants. Moments Moment Invariants-Theory Appl. 2014, 1, 3–32. [Google Scholar]
- Hu, M.-K. Visual pattern recognition by moment invariants. IEEE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar]
- Flusser, J.; Zitova, B.; Suk, T. Moments and Moment Invariants in Pattern Recognition; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
- Shu, H.; Luo, L.; Coatrieux, J.L. Derivation of Moments Invariants; Science Gate Publishing: Xanthi, Greece, 2014. [Google Scholar]
- Papakostas, G.A. Improving the recognition performance of moment features by selection. In Feature Selection for Data and Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2015; pp. 305–327. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar]
- Setiadi, D.R.I.M. PSNR vs. SSIM: Imperceptibility quality assessment for image steganography. Multimed. Tools Appl. 2021, 80, 8423–8444. [Google Scholar] [CrossRef]
- Chowdhury, M.E.H.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Emadi, N.A.; et al. Can AI Help in Screening Viral and COVID-19 Pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
- Aresta, G.; Araújo, T.; Kwok, S.; Chennamsetty, S.S.; Safwan, M.; Alex, V.; Marami, B.; Prastawa, M.; Chan, M.; Donovan, M.; et al. BACH: Grand challenge on breast cancer histology images. Med. Image Anal. 2019, 56, 122–139. [Google Scholar] [CrossRef] [PubMed]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2018, arXiv:1608.06993. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: Las Vegas, NV, USA, 2016; pp. 2818–2826. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv 2019, arXiv:1801.04381. [Google Scholar]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. arXiv 2016, arXiv:1602.07261. [Google Scholar] [CrossRef]
- 2018 Data Science Bowl | Kaggle. Available online: https://www.kaggle.com/c/data-science-bowl-2018/data (accessed on 18 March 2022).
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2015, arXiv:1412.6572. [Google Scholar]
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2019, arXiv:1706.06083. [Google Scholar]
- Andriushchenko, M.; Croce, F.; Flammarion, N.; Hein, M. Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 484–501. [Google Scholar]
- Nicolae, M.-I.; Sinn, M.; Tran, M.N.; Buesser, B.; Rawat, A.; Wistuba, M.; Zantedeschi, V.; Baracaldo, N.; Chen, B.; Ludwig, H.; et al. Adversarial Robustness Toolbox v1.0.0. arXiv 2019, arXiv:1807.01069. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
| Attack | SSIM (%) | IoU (%) |
|---|---|---|
| Attack Free | 100 | 75.22 |
| Mb-AdA Order 200 | 92.17 | 74.37 |
| Mb-AdA Order 140 | 91.83 | 73.80 |
| Mb-AdA Order 80 | 88.96 | 69.70 |
| Mb-AdA Order 60 | 86.25 | 65.50 |
| Mb-AdA Order 50 | 81.73 | 61.47 |
| Mb-AdA Order 40 | 74.23 | 57.57 |
| FGSM ϵ = 0.01 | 89.84 | 74.10 |
| FGSM ϵ = 0.07 | 67.31 | 74.00 |
| FGSM ϵ = 0.09 | 54.65 | 74.00 |
| FGSM ϵ = 0.12 | 41.57 | 71.90 |
| PGD ϵ = 0.01 | 89.73 | 74.12 |
| PGD ϵ = 0.05 | 79.40 | 74.16 |
| PGD ϵ = 0.07 | 70.45 | 74.15 |
| PGD ϵ = 0.09 | 60.88 | 74.12 |
| Square Attack ϵ = 0.01 | 90.14 | 74.00 |
| Square Attack ϵ = 0.08 | 69.38 | 72.95 |
| Square Attack ϵ = 0.1 | 65.00 | 72.28 |
| Attack | SSIM (%) | DenseNet 201 | Inception V3 | MobileNet V2 | DenseNet 169 | Inception ResNet |
|---|---|---|---|---|---|---|
| Original | 100 | 81.67 | 71.67 | 85.00 | 71.67 | 74.00 |
| Mb-AdA Order 200 | 95.22 | 70.00 | 66.67 | 76.67 | 70.00 | 73.33 |
| Mb-AdA Order 180 | 92.18 | 68.33 | 63.33 | 71.67 | 70.00 | 71.67 |
| Mb-AdA Order 160 | 88.11 | 65.00 | 63.33 | 75.00 | 71.67 | 66.67 |
| Mb-AdA Order 140 | 82.86 | 68.33 | 58.33 | 65.00 | 73.33 | 70.00 |
| Mb-AdA Order 100 | 66.88 | 71.67 | 63.33 | 56.67 | 61.67 | 70.00 |
| Mb-AdA Order 80 | 56.00 | 56.67 | 58.33 | 53.33 | 46.67 | 56.67 |
| FGSM ϵ = 0.03 | 88.26 | 73.33 | 68.33 | 63.33 | 73.33 | 68.33 |
| FGSM ϵ = 0.1 | 77.29 | 65.00 | 65.00 | 50.00 | 60.00 | 68.00 |
| FGSM ϵ = 0.3 | 42.17 | 45.00 | 53.33 | 35.00 | 46.67 | 54.00 |
| PGD ϵ = 0.01 | 89.46 | 75.00 | 66.67 | 76.67 | 75.00 | 76.67 |
| PGD ϵ = 0.1 | 83.60 | 71.67 | 75.00 | 60.00 | 66.67 | 73.33 |
| PGD ϵ = 0.2 | 65.32 | 63.33 | 58.33 | 40.00 | 55.00 | 66.67 |
| PGD ϵ = 0.3 | 56.55 | 53.33 | 55.00 | 30.00 | 36.67 | 51.67 |
| Square Attack ϵ = 0.01 | 89.70 | 76.67 | 70.00 | 81.67 | 73.33 | 73.33 |
| Square Attack ϵ = 0.05 | 87.30 | 78.33 | 65.00 | 75.00 | 68.33 | 73.33 |
| Square Attack ϵ = 0.1 | 80.90 | 66.67 | 56.67 | 60.00 | 65.00 | 75.00 |
| Square Attack ϵ = 0.2 | 65.00 | 61.67 | 43.33 | 51.67 | 61.67 | 61.67 |
| Attack | SSIM (%) | DenseNet 201 | Inception V3 | MobileNet V2 | DenseNet 169 | Inception ResNet |
|---|---|---|---|---|---|---|
| Original | 100 | 99.18 | 93.97 | 97.81 | 97.26 | 97.26 |
| Mb-AdA Order 200 | 99.03 | 98.90 | 93.42 | 91.51 | 93.42 | 96.99 |
| Mb-AdA Order 160 | 98.13 | 98.90 | 92.05 | 87.67 | 94.52 | 96.44 |
| Mb-AdA Order 120 | 95.26 | 96.90 | 89.04 | 80.55 | 89.59 | 94.25 |
| Mb-AdA Order 80 | 91.35 | 82.74 | 81.10 | 56.16 | 69.32 | 86.58 |
| Mb-AdA Order 50 | 85.43 | 60.82 | 59.73 | 43.84 | 36.70 | 61.37 |
| Mb-AdA Order 40 | 82.00 | 53.97 | 40.55 | 42.47 | 33.15 | 60.00 |
| Mb-AdA Order 30 | 77.89 | 40.55 | 34.52 | 40.27 | 32.88 | 54.25 |
| FGSM ϵ = 0.01 | 98.71 | 98.36 | 91.23 | 91.78 | 93.42 | 95.34 |
| FGSM ϵ = 0.03 | 94.60 | 96.40 | 84.93 | 80.27 | 88.49 | 92.60 |
| FGSM ϵ = 0.05 | 84.10 | 90.14 | 73.15 | 53.15 | 70.41 | 84.11 |
| FGSM ϵ = 0.07 | 76.35 | 84.03 | 66.11 | 42.30 | 68.35 | 76.75 |
| FGSM ϵ = 0.09 | 64.23 | 73.11 | 57.14 | 39.78 | 64.71 | 69.75 |
| PGD ϵ = 0.01 | 98.15 | 98.90 | 91.67 | 90.00 | 94.20 | 95.90 |
| PGD ϵ = 0.03 | 93.84 | 96.40 | 83.30 | 75.56 | 82.70 | 87.50 |
| PGD ϵ = 0.05 | 86.79 | 91.67 | 76.11 | 55.83 | 75.28 | 77.78 |
| PGD ϵ = 0.07 | 77.87 | 85.00 | 65.56 | 49.44 | 68.61 | 70.56 |
| PGD ϵ = 0.09 | 68.21 | 76.94 | 59.72 | 46.39 | 63.98 | 60.83 |
| Square Attack ϵ = 0.01 | 99.29 | 98.61 | 91.94 | 91.94 | 93.61 | 96.67 |
| Square Attack ϵ = 0.05 | 88.00 | 90.28 | 72.22 | 61.39 | 83.89 | 92.50 |
| Square Attack ϵ = 0.07 | 82.59 | 85.56 | 58.06 | 55.83 | 77.50 | 91.39 |
| Square Attack ϵ = 0.09 | 76.39 | 79.72 | 46.67 | 58.06 | 70.28 | 85.00 |
| Attack | Normal Training Set | Augmented Orders 20–200 | Augmented Orders 100–200 |
|---|---|---|---|
| Original | 85 | 88.33 | 81.6 |
| Mb-AdA Order 200 | 76.67 | 90 | 73.33 |
| Mb-AdA Order 180 | 71.67 | 90 | 71.67 |
| Mb-AdA Order 160 | 75 | 86.67 | 75 |
| Mb-AdA Order 140 | 65 | 85 | 73.33 |
| Mb-AdA Order 120 | 65 | 88.33 | 76.67 |
| Mb-AdA Order 110 | 65 | 85 | 80 |
| Mb-AdA Order 100 | 56.67 | 86.67 | 75 |
| Mb-AdA Order 80 | 53.33 | 81.67 | 76.67 |
| Mb-AdA Order 60 | 45 | 70 | 63.33 |
| Mb-AdA Order 50 | 45 | 68.33 | 68.33 |
| FGSM ϵ = 0.03 | 63.33 | 90 | 76.67 |
| FGSM ϵ = 0.05 | 60 | 88.33 | 58.33 |
| FGSM ϵ = 0.1 | 50 | 71.67 | 61.67 |
| PGD ϵ = 0.01 | 76.67 | 88.33 | 76.67 |
| PGD ϵ = 0.1 | 60 | 81.67 | 70 |
| PGD ϵ = 0.3 | 30 | 58.33 | 35 |
| Square Attack ϵ = 0.02 | 80 | 86.67 | 85 |
| Square Attack ϵ = 0.05 | 75 | 85 | 70 |
| Square Attack ϵ = 0.1 | 60 | 68.33 | 75 |
| Square Attack ϵ = 0.2 | 51.67 | 60 | 36.33 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Maliamanis, T.V.; Apostolidis, K.D.; Papakostas, G.A. How Resilient Are Deep Learning Models in Medical Image Analysis? The Case of the Moment-Based Adversarial Attack (Mb-AdA). Biomedicines 2022, 10, 2545. https://doi.org/10.3390/biomedicines10102545