Deep Learning Application for Analyzing Constituents and Their Correlations in the Interpretation of Medical Images
Abstract
1. Introduction
2. Scientific Data and Dataset
2.1. Types of Images and Datasets in the Medical Domain
2.2. Types of Images and Medical Data Used for Diagnosis–Classification of Diseases in Medical Images
2.3. Types of Images and Medical Data Used for Diagnosis–Detection of Lesions and Abnormalities in Medical Images
2.4. Types of Images and Medical Data Used for Diagnosis–Segmentation in Medical Images
2.5. Medical Data and Manual Features Used for Image Reconstruction
2.6. Medical Data and Manual Features Used for Image Retrieval
2.7. Medical Data Used to Generate Medical Reports
3. DL Model Description and Classification According to the Tasks in Medical Image Analysis
3.1. DL Architectures Designed for Diagnosis–Classification in Medical Images
3.2. DL Architectures Designed for Diagnosis–Detection of Lesions and Abnormalities in Medical Images
3.3. DL Architectures Designed for Diagnosis–Segmentation of Medical Images
3.3.1. FCN-Based Models Achieve Good Results in Medical Image Segmentation
3.3.2. U-Net-Based Models
3.3.3. GAN-Based Models
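To make the encoder–decoder family behind Sections 3.3.1 and 3.3.2 concrete, the following is a minimal U-Net-style sketch with a single down-sampling stage and one skip connection; the channel counts, depth and single-class output are illustrative assumptions, not a reproduction of any architecture cited in this review.

```python
# Minimal U-Net-style sketch: one down-sampling stage, one up-sampling stage,
# and a skip connection (channel counts and depth are illustrative only).
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 1):
        super().__init__()
        self.enc = conv_block(in_channels, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)            # 16 (skip) + 16 (upsampled)
        self.out = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skip = self.enc(x)
        x = self.bottleneck(self.down(skip))
        x = self.up(x)
        x = self.dec(torch.cat([skip, x], dim=1))   # skip connection
        return self.out(x)                          # per-pixel logits

if __name__ == "__main__":
    img = torch.randn(1, 1, 64, 64)
    print(TinyUNet()(img).shape)     # torch.Size([1, 1, 64, 64])
```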
3.4. DL Architectures Designed for Diagnosis, Classification, Segmentation, Detection and Reconstruction of Medical Images
3.5. Medical Applications of DL Models According to the Purpose for Which They Were Used: Classification, Segmentation, Detection and Reconstruction of Medical Images
4. DL Model Description and Classification According to Medical Data Types Used, Objectives and Performances in Medical Applications
4.1. DL Models According to the Characteristics and Tasks for Which They Were Designed
- Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU),
- Convolutional Neural Networks (CNN) (a minimal classifier sketch follows this list),
- Generative Adversarial Networks (GAN),
- Deep Belief Networks (DBN),
- Deep Transfer Networks (DTN),
- Tensor Deep Stacking Networks (TDSN),
- Autoencoders (AE).
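For orientation on the CNN entry above, the sketch below is a minimal convolutional classifier in PyTorch; the layer sizes, single-channel input and two-class output are assumptions for illustration, not a model taken from the surveyed literature.

```python
# Minimal CNN classifier sketch (illustrative only; layer sizes and the
# single-channel, two-class setup are assumptions, not a surveyed model).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 128 -> 64 spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    model = TinyCNN()
    dummy = torch.randn(4, 1, 128, 128)           # batch of 4 grayscale images
    print(model(dummy).shape)                     # torch.Size([4, 2])
```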
4.2. Combinations of Different DL Models Depending on the Type of Data Involved in the Problem to Be Solved
4.3. Combinations of Different DL Models That Benefit from the Characteristics of Each Model, with Medical Applications: CNN + RNN, AE + CNN and GAN + CNN
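As a rough illustration of the CNN + RNN combination named in this subsection, the sketch below encodes each slice of a study with a small CNN and aggregates the per-slice features with an LSTM; all dimensions and the binary output are assumptions, not a specific model from the cited papers.

```python
# CNN + RNN hybrid sketch: a small CNN encodes each slice of a sequential
# study, and an LSTM aggregates the per-slice features (illustrative only).
import torch
import torch.nn as nn

class CNNRNNHybrid(nn.Module):
    def __init__(self, in_channels: int = 1, feat_dim: int = 64,
                 hidden_dim: int = 32, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, slices, channels, H, W)
        b, s, c, h, w = x.shape
        feats = self.encoder(x.view(b * s, c, h, w)).flatten(1)  # (b*s, feat_dim)
        feats = feats.view(b, s, -1)                             # (b, s, feat_dim)
        _, (hidden, _) = self.rnn(feats)                         # last hidden state
        return self.head(hidden[-1])

if __name__ == "__main__":
    volume = torch.randn(2, 10, 1, 64, 64)    # 2 studies, 10 slices each
    print(CNNRNNHybrid()(volume).shape)       # torch.Size([2, 2])
```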
4.4. Applications in Medicine and the Performance of DL Models Depending on the Therapeutic Areas in Which They Were Used
- diagnosis of cancer using CNNs with different numbers of layers [145],
- study of deep learning optimization methods and their application to the analysis of medical images [146],
- development of techniques used for endoscopic navigation [147],
- diagnosis of degenerative disorders using deep learning techniques [152],
- detection of cancer by processing medical images using the medium change filter technique [153],
- classification of cancer using histopathological images, highlighting the speed of Theano, superior to TensorFlow [153],
- development of two-channel computational algorithms using DL (segmentation, feature extraction, feature selection and classification; and high-level feature extraction, respectively) [154],
- malaria detection using a deep neural network (MM-ResNet) [155]; a generic fine-tuning sketch follows this list.
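Several of the applications listed above rely on CNN backbones such as residual networks. As a generic, hedged illustration of reusing such a backbone, the sketch below replaces the classification head of a standard torchvision ResNet-18 and optionally freezes the rest; it is not the MM-ResNet of [155] nor the method of any other cited work, and in practice pretrained ImageNet weights would be loaded before fine-tuning.

```python
# Transfer-learning sketch: reuse a standard CNN backbone and replace its
# classification head for a two-class medical task (illustrative only; this
# is not the MM-ResNet of [155] or any specific cited model).
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes: int = 2, freeze_backbone: bool = True) -> nn.Module:
    # No weights are downloaded here to keep the sketch self-contained; in
    # practice one would request pretrained ImageNet weights for the backbone.
    backbone = models.resnet18()
    if freeze_backbone:
        for p in backbone.parameters():
            p.requires_grad = False                # train only the new head
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new task head
    return backbone

if __name__ == "__main__":
    model = build_finetune_model()
    x = torch.randn(4, 3, 224, 224)                # batch of RGB-sized inputs
    print(model(x).shape)                          # torch.Size([4, 2])
```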
5. Description of Methods for Incorporating Data Types and the Applications in Which They Are Used
5.1. Schematic Presentation of the Methods of Knowledge Incorporation and the Types of Data Used for DL Objectives in the Interpretation of Medical Images
5.2. Classification in Medical Images
5.2.1. Methods of Incorporating Information
5.2.2. Methods of Incorporation of Medical Data from Doctors for Diagnosis and Classification
- training patterns,
- diagnostic patterns,
- target regions,
- hand-crafted features (appearance, structures, shapes), illustrated by the fusion sketch after this list,
- related diagnostic information,
- other types of diagnosis-related information.
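A common way to use such hand-crafted features for diagnosis–classification is to concatenate them with CNN-derived features before the final classifier. The sketch below illustrates that generic fusion pattern; the dimensions and the idea of feeding precomputed shape/texture descriptors as a separate tensor are assumptions for the example rather than the design of a specific cited system.

```python
# Feature-fusion sketch: concatenate hand-crafted descriptors (shape, texture,
# appearance) with deep CNN features before classification (dimensions assumed).
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, deep_dim: int = 64, handcrafted_dim: int = 12,
                 num_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, deep_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(deep_dim + handcrafted_dim, num_classes)

    def forward(self, image: torch.Tensor, handcrafted: torch.Tensor):
        deep = self.cnn(image).flatten(1)                 # (batch, deep_dim)
        fused = torch.cat([deep, handcrafted], dim=1)     # concatenate both views
        return self.head(fused)

if __name__ == "__main__":
    imgs = torch.randn(4, 1, 64, 64)
    descriptors = torch.randn(4, 12)      # e.g. shape/texture measurements
    print(FusionClassifier()(imgs, descriptors).shape)    # torch.Size([4, 2])
```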
5.3. Detection in Medical Images
- training patterns,
- diagnostic patterns,
- target regions,
- hand-crafted features (appearance, structures, shapes).
5.3.1. Training Patterns: Solving Tasks of Increasing Difficulty, Using Curriculum Learning to Identify and Locate Lesions in Medical Images
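A minimal way to implement such a curriculum is to order training cases by a difficulty score and enlarge the accessible pool of cases over the epochs. The sketch below assumes that a per-case difficulty score (for example, lesion size or annotator agreement) is already available; it is a generic illustration, not the scheduler of any particular cited method.

```python
# Curriculum-learning sketch: present training samples from easy to hard,
# enlarging the accessible pool each epoch (the difficulty scores are assumed
# to be given, e.g. from lesion size or annotator agreement).
import random

def curriculum_batches(samples, difficulties, epochs: int = 3, batch_size: int = 4):
    order = sorted(range(len(samples)), key=lambda i: difficulties[i])
    for epoch in range(1, epochs + 1):
        # fraction of the easy-to-hard ordering made available at this epoch
        cutoff = max(batch_size, int(len(order) * epoch / epochs))
        pool = order[:cutoff]
        random.shuffle(pool)
        for start in range(0, len(pool), batch_size):
            yield epoch, [samples[i] for i in pool[start:start + batch_size]]

if __name__ == "__main__":
    samples = [f"case_{i}" for i in range(10)]
    difficulties = [0.1, 0.9, 0.3, 0.7, 0.2, 0.8, 0.4, 0.6, 0.5, 1.0]
    for epoch, batch in curriculum_batches(samples, difficulties):
        print(epoch, batch)
```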
5.3.2. Diagnostic Patterns
- Combining images acquired under different settings (brightness and contrast),
- Using bilateral, transverse and adjacent images,
- Radiologists combine images collected under different settings (brightness and contrast) to locate lesions by visual interpretation of CT images. In the same way, a model is built with multi-view features (FPN) for brightness and contrast, later combined using an attention module that identifies the lesion position with increased accuracy on the NIH DeepLesion dataset [167]; see the attention-fusion sketch after this list.
- Bilateral information is compared by radiologists when interpreting images.
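Following the multi-view attention idea described in the list above, the sketch below weights feature maps extracted from the same CT slice under different window (brightness/contrast) settings with learned attention scores and sums them; the gating form and all dimensions are assumptions, not the exact module of the cited model.

```python
# Attention-fusion sketch: weight and combine feature maps extracted from the
# same slice rendered under different window settings (dimensions assumed;
# a generic illustration, not the exact module of the model cited as [167]).
import torch
import torch.nn as nn

class ViewAttentionFusion(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        # one scalar attention score per view, computed from its pooled features
        self.score = nn.Linear(channels, 1)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, channels, H, W)
        pooled = views.mean(dim=(3, 4))                      # (batch, views, channels)
        weights = torch.softmax(self.score(pooled), dim=1)   # (batch, views, 1)
        weights = weights.unsqueeze(-1).unsqueeze(-1)        # broadcast over H and W
        return (weights * views).sum(dim=1)                  # (batch, channels, H, W)

if __name__ == "__main__":
    feats = torch.randn(2, 3, 32, 16, 16)       # 2 slices, 3 window settings
    print(ViewAttentionFusion()(feats).shape)   # torch.Size([2, 32, 16, 16])
```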
5.3.3. Hand-Crafted Features
5.3.4. Target Regions
5.4. Segmentation of Lesions and Organs into Medical Images
5.4.1. Incorporation of Data from Natural Datasets or Medical Datasets
5.4.2. Incorporation of Knowledge from Doctors
- incorporating the characteristics of the lesions in the post-processing stage,
- incorporating the characteristics of the lesions as elements of regularization in the loss function (see the loss sketch after this list),
- learning the characteristics of the lesions through generative models.
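For the second item in the list above (lesion characteristics as regularization terms in the loss), a simple realization is to add a prior-based penalty to a standard segmentation loss. The sketch below combines a soft Dice loss with a penalty that discourages predicted lesion areas outside an assumed plausible size range; the range, weight and overall form are illustrative assumptions, not a loss taken from a specific cited paper.

```python
# Loss-regularization sketch: soft Dice loss plus a penalty that discourages
# predicted lesion sizes outside an assumed plausible range (all constants
# are illustrative, not taken from a specific paper).
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    # pred, target: (batch, H, W) with values in [0, 1]
    inter = (pred * target).sum(dim=(1, 2))
    union = pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def size_prior_penalty(pred: torch.Tensor, min_frac: float = 0.01, max_frac: float = 0.2):
    # penalize predicted foreground fractions outside [min_frac, max_frac]
    frac = pred.mean(dim=(1, 2))
    return torch.relu(min_frac - frac) + torch.relu(frac - max_frac)

def regularized_loss(pred, target, weight: float = 0.1):
    return (dice_loss(pred, target) + weight * size_prior_penalty(pred)).mean()

if __name__ == "__main__":
    pred = torch.rand(2, 64, 64)       # sigmoid outputs of a segmentation net
    target = (torch.rand(2, 64, 64) > 0.9).float()
    print(regularized_loss(pred, target).item())
```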
5.4.3. Incorporation of Hand-Crafted Features from Doctors
5.5. Reconstruction of Medical Images
5.6. Retrieval of Medical Images
5.7. Generating Medical Reports
5.8. Applications in Medicine, Methods of Incorporation of Types of Data, Datasets and Their Correlation
6. Conclusions
- Updated presentation of data types, DL models used in medical image analysis;
- Correlation and contribution to the performance of DL models of the constituent elements: data type, incorporation methods and DL architectures;
- Features and “key” tasks of DL models for the successful completion of applications in the interpretation of medical images.
7. Research Problems
- identification, automatic extraction and standardization of specific medical terms,
- representation of medical knowledge,
- incorporation of medical knowledge.
- Problems in medical image analysis are related to:
- medical images provided as data for deep learning models, which must meet requirements of quality, volume, specificity and labelling;
- data provided by doctors, where descriptive data and labels are ambiguous for the same medical condition and rely on non-standard references;
- the laborious time required for data processing, which remains a problem to be solved in the future.
8. Future Challenges
- the establishment of a federation institution integrating scientific data and products specific to the field;
- value categorization of industry-specific achievements;
- launching challenges to be developed and completed;
- facilitating the free circulation of discoveries, methods, formulas of scientific products within this federation institution;
- establishing the board of the federation institution through the input and integration of “consequential brains” in the field;
- the creation of a Hub of Ideas under coordination within the federation board with assignment of themes for development on specific teams;
- joint effort for an idea launched within the federation institution;
- an inventory of functional applications and methods, performing in the specific field;
- the creation of a financing system to support and implement ideas specific to the field;
- integration of researchers with notable ideas and performance but limited funding or access to knowledge because they belong to geographical areas or institutions underrepresented internationally in the specific field.
Funding
Conflicts of Interest
References
- Haskins, G.; Kruger, U.; Yan, P. Deep Learning in Medical Image Registration: A Survey. J. Mach. Vis. Appl. 2020, 31, 1–8. [Google Scholar] [CrossRef] [Green Version]
- Prevedello, L.M.; Halabi, S.S.; Shih, G.; Wu, C.C.; Kohli, M.D.; Chokshi, F.H.; Erickson, B.J.; Kalpathy-Cramer, J.; Andriole, K.P.; Flanders, A.E. Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiol. Artif. Intell. 2019, 1, e180031. [Google Scholar] [CrossRef] [PubMed]
- Cheimariotis, G.A.; Riga, M.; Toutouzas, K.; Tousoulis, D.; Katsaggelos, A.; Maglaveras, N. Deep Learning Method to Detect Plaques in IVOCT Images. In Future Trends in Biomedical and Health Informatics and Cybersecurity in Medical Devices: Proceedings of the International Conference on Biomedical and Health Informatics, ICBHI 2019; Lin, K.-P., Magjarevic, R., de Carvalho, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 74, pp. 389–395. [Google Scholar] [CrossRef]
- Halevy, A.; Norvig, P.; Pereira, F. The Unreasonable Effectiveness of Data. IEEE Intell. Syst. 2009, 24, 8–12. [Google Scholar] [CrossRef]
- Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 2019, 29, 102–127. [Google Scholar] [CrossRef]
- Tan, J.; Huo, Y.; Liang, Z.; Li, L. Expert knowledge-infused deep learning for automatic lung nodule detection. J. Xray Sci. Technol. 2019, 27, 17–35. [Google Scholar] [CrossRef]
- Majtner, T.; Yildirim, S.Y.; Hardeberg, J. Combining deep learning and hand-crafted features for skin lesion classification. In Proceedings of the 2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA), Oulu, Finland, 12–15 December 2016; pp. 1–6. [Google Scholar] [CrossRef]
- Hussein, S.; Cao, K.; Song, Q.; Bagci, U. Risk Stratification of Lung Nodules Using 3D CNN-Based Multi-task Learning. arXiv 2017, arXiv:1704.08797. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; ECCV 2016 Lecture Notes in Computer Science; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer: Cham, Switzerland, 2016; Volume 9905. [Google Scholar] [CrossRef] [Green Version]
- Liao, Q.; Ding, Y.; Jiang, Z.L.; Wang, X.; Zhang, C.; Zhang, Q. Multi-task deep convolutional neural network for cancer diagnosis. Neurocomputing 2019, 348, 66–73. [Google Scholar] [CrossRef]
- Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; Paluri, M. Learning Spatiotemporal Features with 3D Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 4489–4497. [Google Scholar] [CrossRef] [Green Version]
- Chen, S.; Ma, K.; Zheng, Y. Med3D: Transfer Learning for 3D Medical Image Analysis. arXiv 2019, arXiv:1904.00625. [Google Scholar]
- Valindria, V.; Pawlowski, N.; Rajchl, M.; Lavdas, I.; Aboagye, E.O.; Rockall, A.G.; Rueckert, D.; Glocker, B. Multi-modal Learning from Unpaired Images: Application to Multi-organ Segmentation in CT and MRI. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 547–556. [Google Scholar] [CrossRef] [Green Version]
- Qin, C.; Schlemper, J.; Caballero, J.; Price, A.N.; Hajnal, J.V.; Rueckert, D. Convolutional Recurrent Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans. Med. Imaging 2019, 38, 280–290. [Google Scholar] [CrossRef] [Green Version]
- Schlemper, J.; Caballero, J.; Hajnal, J.V.; Price, A.N.; Rueckert, D. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 491–503. [Google Scholar] [CrossRef] [Green Version]
- Yang, D.; Xu, D.; Zhou, S.K.; Georgescu, B.; Chen, M.; Grbic, S.; Metaxas, D.; Comaniciu, D. Automatic Liver Segmentation Using an Adversarial Image-to-Image Network. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017, Quebec City, QC, Canada, 11–13 September 2017; MICCAI 2017 Lecture Notes in Computer Science; Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D., Duchesne, S., Eds.; Springer: Cham, Switzerland, 2017; Volume 10435. [Google Scholar] [CrossRef] [Green Version]
- Ben Yedder, H.; Shokoufi, M.; Cardoen, B.; Golnaraghi, F.; Hamarneh, G. Limited-Angle Diffuse Optical Tomography Image Reconstruction Using Deep Learning. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019; MICCAI 2019 Lecture Notes in Computer Science; Shen, D., Liu, T., Peters, T.M., Staib, L.H., Essert, C., Zhou, S., Eds.; Springer: Cham, Switzerland, 2019; Volume 11764. [Google Scholar] [CrossRef]
- Xie, X.; Niu, J.; Liu, X.; Chen, Z.; Tang, S.; Yu, S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med. Image Anal. 2021, 69, 101985. [Google Scholar] [CrossRef]
- Dar, S.U.; Yurt, M.; Shahdloo, M.; Ildız, M.E.; Çukur, T. Synergistic Reconstruction and Synthesis via Generative Adversarial Networks for Accelerated Multi-Contrast MRI. arXiv 2018, arXiv:1805.10704v1. [Google Scholar]
- Huang, G.; Liu, Z.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef] [Green Version]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2019, arXiv:1506.01497v3. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R.B. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef] [Green Version]
- Lin, T.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007. [Google Scholar] [CrossRef] [Green Version]
- Ben-Cohen, A.; Klang, E.; Raskin, S.P.; Soffer, S.; Ben-Haim, S.; Konen, E.; Amitai, M.M.; Greenspan, H. Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection. Eng. Appl. Artif. Intell. 2019, 186–194. [Google Scholar] [CrossRef] [Green Version]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar] [CrossRef] [Green Version]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; MICCAI 2015 Lecture Notes in Computer Science; Navab, N., Hornegger, J., Wells, W., Frangi, A., Eds.; Springer: Cham, Switzerland, 2015; Volume 9351. [Google Scholar] [CrossRef] [Green Version]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems—Volume 2 (NIPS’14), Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; pp. 2672–2680. [Google Scholar]
- Gibson, E.; Giganti, F.; Hu, Y.; Bonmati, E.; Bandula, S.; Gurusamy, K.; Davidson, B.R.; Pereira, S.P.; Clarkson, M.J.; Barratt, D.C. Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017; MICCAI 2017 Lecture Notes in Computer Science; Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D., Duchesne, S., Eds.; Springer: Cham, Switzerland, 2017; Volume 10433. [Google Scholar] [CrossRef]
- Christ, P.F.; Elshaer, M.E.A.; Ettlinger, F.; Tatavarty, S.; Bickel, M.; Bilic, P.; Rempfler, M.; Armbruster, M.; Hofmann, F.; D’Anastasi, M.; et al. Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016; MICCAI 2016 Lecture Notes in Computer Science; Ourselin, S., Joskowicz, L., Sabuncu, M., Unal, G., Wells, W., Eds.; Springer: Cham, Switzerland, 2016; Volume 9901. [Google Scholar] [CrossRef] [Green Version]
- Kamnitsas, K.; Ledig, C.; Newcombe, V.F.; Simpson, J.P.; Kane, A.D.; Menon, D.K.; Rueckert, D.; Glocker, B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med. Image Anal. 2017, 36, 61–78. [Google Scholar] [CrossRef]
- Yang, X.; Yu, L.; Wu, L.; Wang, Y.; Ni, D.; Qin, J.; Heng, P.-A. Fine-Grained Recurrent Neural Networks for Automatic Prostate Segmentation in Ultrasound Images. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. Available online: https://ojs.aaai.org/index.php/AAAI/article/view/10761 (accessed on 29 July 2021).
- Zhou, Z.; Siddiquee, M.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Proceedings of the Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, Québec City, QC, Canada, 14 September 2017; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar] [CrossRef] [Green Version]
- Alom, M.Z.; Hasan, M.; Yakopcic, C.; Taha, T.; Asari, V. Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation. arXiv 2018, arXiv:1802.06955. [Google Scholar]
- Gordienko, Y.; Gang, P.; Hui, J.; Zeng, W.; Kochura, Y.; Alienin, O.; Rokovyi, O.; Stirenko, S. Deep Learning with Lung Segmentation and Bone Shadow Exclusion Techniques for Chest X-Ray Analysis of Lung Cancer; Advances in Computer Science for Engineering and Education, Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2018; Volume 754, pp. 638–647. [Google Scholar] [CrossRef] [Green Version]
- Sathya, R.; Abraham, A. Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification. IJARAI 2013, 2. [Google Scholar] [CrossRef] [Green Version]
- Nogales, A.; García-Tejedor, Á.J.; Monge, D.; Vara, J.S.; Antón, C. A survey of deep learning models in medical therapeutic areas. Artif. Intell. Med. 2021, 112, 102020. [Google Scholar] [CrossRef]
- Pesteie, M.; Abolmaesumi, P.; Rohling, R.N. Adaptive Augmentation of Medical Data Using Independently Conditional Variational Auto-Encoders. IEEE Trans. Med. Imaging 2019, 38, 2807–2820. [Google Scholar] [CrossRef] [PubMed]
- Piccialli, F.; di Somma, V.; Giampaolo, F.; Cuomo, S.; Fortino, G. A survey on deep learning in medicine: Why, how and when? Inf. Fusion 2021, 66, 111–137. [Google Scholar] [CrossRef]
- Kooi, T.; Litjens, G.; van Ginneken, B.; Gubern-Mérida, A.; Sánchez, C.I.; Mann, R.; den Heeten, A.; Karssemeijer, N. Large scale deep learning for computer aided detection of mammographic lesions. Med. Image Anal. 2017, 35, 303–312. [Google Scholar] [CrossRef]
- Suk, H.I.; Lee, S.W.; Shen, D. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. Neuroimage 2014, 101, 569–582. [Google Scholar] [CrossRef] [Green Version]
- Leibig, C.; Allken, V.; Ayhan, M.S.; Berens, P.; Wahl, S. Leveraging uncertainty information from deep neural networks for disease detection. Sci. Rep. 2017, 7, 17816. [Google Scholar] [CrossRef] [Green Version]
- Fang, L.; Wang, C.; Li, S.; Rabbani, H.; Chen, X.; Liu, Z. Attention to Lesion: Lesion-Aware Convolutional Neural Network for Retinal Optical Coherence Tomography Image Classification. IEEE Trans. Med. Imaging 2019, 38, 1959–1970. [Google Scholar] [CrossRef] [PubMed]
- Kim, E.K.; Kim, H.E.; Han, K.; Kang, B.J.; Sohn, Y.M.; Woo, O.H.; Lee, C.W. Applying Data-driven Imaging Biomarker in Mammography for Breast Cancer Screening: Preliminary Study. Sci. Rep. 2018, 8, 2762. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Ghesu, F.C.; Georgescu, B.; Zheng, Y.; Hornegger, J.; Comaniciu, D. Marginal Space Deep Learning: Efficient Architecture for Detection in Volumetric Image Data. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9349. [Google Scholar] [CrossRef]
- Anthimopoulos, M.; Christodoulidis, S.; Ebner, L.; Christe, A.; Mougiakakou, S. Lung Pattern Classification for Interstitial Lung Diseases Using a Deep Convolutional Neural Network. IEEE Trans. Med. Imaging 2016, 35, 1207–1216. [Google Scholar] [CrossRef] [PubMed]
- Van Grinsven, M.J.; van Ginneken, B.; Hoyng, C.B.; Theelen, T.; Sanchez, C.I. Fast Convolutional Neural Network Training Using Selective Data Sampling: Application to Hemorrhage Detection in Color Fundus Images. IEEE Trans. Med. Imaging 2016, 35, 1273–1284. [Google Scholar] [CrossRef]
- Cho, K.; Van Merriënboer, B.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. arXiv 2014, arXiv:1409.1259. [Google Scholar]
- Lee, J.; Nishikawa, R.M. Automated mammographic breast density estimation using a fully convolutional network. Med. Phys. 2018, 45, 1178–1190. [Google Scholar] [CrossRef] [PubMed]
- Esses, S.J.; Lu, X.; Zhao, T.; Shanbhogue, K.; Dane, B.; Bruno, M.; Chandarana, H. Automated image quality evaluation of T2-weighted liver MRI utilizing deep learning architecture. J. Magn. Reson. Imaging 2018, 47, 723–728. [Google Scholar] [CrossRef]
- Quellec, G.; Charrière, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep image mining for diabetic retinopathy screening. Med. Image Anal. 2017, 39, 178–193. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Saha, S.K.; Fernando, B.; Cuadros, J.; Xiao, D.; Kanagasingam, Y. Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine. J. Digit. Imaging 2018, 31, 869–878. [Google Scholar] [CrossRef]
- Chaudhari, A.S.; Fang, Z.; Kogan, F.; Wood, J.; Stevens, K.J.; Gibbons, E.K.; Lee, J.H.; Gold, G.E.; Hargreaves, B.A. Super-resolution musculoskeletal MRI using deep learning. Magn. Reson. Med. 2018, 80, 2139–2154. [Google Scholar] [CrossRef] [PubMed]
- Xie, Y.; Zhang, J.; Xia, Y.; Fulham, M.; Zhang, Y. Fusing texture, shape and deep model-learned information at decision level for automated classification of lung nodules on chest CT. Inf. Fusion 2018, 42, 102–110. [Google Scholar] [CrossRef]
- Sujit, S.J.; Coronado, I.; Kamali, A.; Narayana, P.A.; Gabr, R.E. Automated image quality evaluation of structural brain MRI using an ensemble of deep learning networks. J. Magn. Reson. Imaging 2019, 50, 1260–1267. [Google Scholar] [CrossRef] [PubMed]
- Xie, Y.; Xia, Y.; Zhang, J.; Song, Y.; Feng, D.; Fulham, M.; Cai, W. Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT. IEEE Trans. Med. Imaging 2019, 38, 991–1004. [Google Scholar] [CrossRef]
- Rezaei, S.; Emami, A.; Zarrabi, H.; Rafiei, S.; Najarian, K.; Karimi, N.; Samavi, S.; Soroushmehr, S.R. Gland Segmentation in Histopathology Images Using Deep Networks and Handcrafted Features. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 1031–1034. [Google Scholar] [CrossRef] [Green Version]
- Cheng, J.Z.; Ni, D.; Chou, Y.H.; Qin, J.; Tiu, C.M.; Chang, Y.C.; Huang, C.S.; Shen, D.; Chen, C.M. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Sci. Rep. 2016, 6, 24454. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Azizi, S.; Mousavi, P.; Yan, P.; Tahmasebi, A.; Kwak, J.T.; Xu, S.; Turkbey, B.; Choyke, P.; Pinto, P.; Wood, B.; et al. Transfer learning from RF to B-mode temporal enhanced ultrasound features for prostate cancer detection. Int. J. Comput. Assist Radiol. Surg. 2017, 12, 1111–1121. [Google Scholar] [CrossRef]
- Han, X.; Wang, J.; Zhou, W.; Chang, C.; Ying, S.; Shi, J. Deep Doubly Supervised Transfer Network for Diagnosis of Breast Cancer with Imbalanced Ultrasound Imaging Modalities. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2020; MICCAI 2020 Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12266. [Google Scholar] [CrossRef]
- Maicas, G.; Bradley, A.P.; Nascimento, J.C.; Reid, I.; Carneiro, G. Training Medical Image Analysis Systems like Radiologists. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; MICCAI 2018 Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11070. [Google Scholar] [CrossRef] [Green Version]
- Zhao, R.; Chen, X.; Chen, Z.; Li, S. EGDCL: An Adaptive Curriculum Learning Framework for Unbiased Glaucoma Diagnosis. In Proceedings of the Computer Vision—ECCV 2020; ECCV 2020 Lecture Notes in Computer Science; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer: Cham, Switzerland, 2020; Volume 12366. [Google Scholar] [CrossRef]
- Gonzalez-Diaz, I. DermaKNet: Incorporating the Knowledge of Dermatologists to Convolutional Neural Networks for Skin Lesion Diagnosis. IEEE J. Biomed. Health Inform. 2019, 23, 547–559. [Google Scholar] [CrossRef]
- Yasaka, K.; Akai, H.; Kunimatsu, A.; Abe, O.; Kiryu, S. Liver Fibrosis: Deep Convolutional Neural Network for Staging by Using Gadoxetic Acid-enhanced Hepatobiliary Phase MR Images. Radiology 2018, 287, 146–155. [Google Scholar] [CrossRef]
- Yap, M.H.; Pons, G.; Marti, J.; Ganau, S.; Sentis, M.; Zwiggelaar, R.; Davison, A.K.; Marti, R.; Yap, M.H.; Pons, G.; et al. Automated Breast Ultrasound Lesions Detection Using Convolutional Neural Networks. IEEE J. Biomed. Health Inform. 2018, 22, 1218–1226. [Google Scholar] [CrossRef] [Green Version]
- Bar, Y.; Diamant, I.; Wolf, L.; Greenspan, H. Deep learning with non-medical training used for chest pathology identification. In Proceedings of the SPIE Proceedings 9414, Medical Imaging 2015: Computer-Aided Diagnosis, 94140V (20 March 2015), Orlando, FL, USA, 21–26 February 2015. [Google Scholar] [CrossRef]
- Van der Burgh, H.K.; Schmidt, R.; Westeneng, H.J.; de Reus, M.A.; van den Berg, L.H.; van den Heuvel, M.P. Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis. Neuroimage Clin. 2016, 13, 361–369. [Google Scholar] [CrossRef] [PubMed]
- Lee, C.S.; Baughman, D.M.; Lee, A.Y. Deep learning is effective for the classification of OCT images of normal versus Age-related Macular Degeneration. Ophthalmol. Retina 2017, 1, 322–327. [Google Scholar] [CrossRef]
- Cui, H.; Xu, Y.; Li, W.; Wang, L.; Duh, H. Collaborative Learning of Cross-channel Clinical Attention for Radiotherapy-Related Esophageal Fistula Prediction from CT. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2020, Lima, Peru, 4–8 October 2020; MICCAI 2020 Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12261. [Google Scholar] [CrossRef]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2016, arXiv:1609.02907v4. [Google Scholar]
- Wang, J.; Ding, H.; Bidgoli, F.A.; Zhou, B.; Iribarren, C.; Molloi, S.; Baldi, P. Detecting Cardiovascular Disease from Mammograms with Deep Learning. IEEE Trans. Med. Imaging 2017, 36, 1172–1181. [Google Scholar] [CrossRef]
- Iakovidis, D.K.; Georgakopoulos, S.V.; Vasilakakis, M.; Koulaouzidis, A.; Plagianakos, V.P. Detecting and Locating Gastrointestinal Anomalies Using Deep Learning and Iterative Cluster Unification. IEEE Trans. Med. Imaging 2018, 37, 2196–2210. [Google Scholar] [CrossRef]
- Elman, J.L. Finding Structure in Time. Cogn. Sci. 1990, 14, 179–211. [Google Scholar] [CrossRef]
- Xie, X.; Niu, J.; Liu, X.; Li, Q.; Wang, Y.; Han, J.; Tang, S. DG-CNN: Introducing Margin Information into CNN for Breast Cancer Diagnosis in Ultrasound Images. J. Comput. Sci. Technol. 2020, 1. [Google Scholar] [CrossRef]
- Zhang, B.; Wang, Z.; Gao, J.; Rutjes, C.; Nufer, K.; Tao, D.; Feng, D.D.; Menzies, S.W. Short-Term Lesion Change Detection for Melanoma Screening with Novel Siamese Neural Network. IEEE Trans. Med. Imaging 2021, 40, 840–851. [Google Scholar] [CrossRef] [PubMed]
- Du, X.; Kurmann, T.; Chang, P.L.; Allan, M.; Ourselin, S.; Sznitman, R.; Kelly, J.D.; Stoyanov, D. Articulated Multi-Instrument 2-D Pose Estimation Using Fully Convolutional Networks. IEEE Trans. Med. Imaging 2018, 37, 1276–1287. [Google Scholar] [CrossRef] [Green Version]
- Carneiro, G.; Nascimento, J.C.; Freitas, A. The Segmentation of the Left Ventricle of the Heart from Ultrasound Data Using Deep Learning Architectures and Derivative-Based Search Methods. IEEE Trans. Image Process. 2012, 21, 968–982. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Chen, C.-M.; Huang, Y.-S.; Fang, P.-W.; Liang, C.-W.; Chang, R.-F. A computer-aided diagnosis system for differentiation and delineation of malignant regions on whole-slide prostate histopathology image using spatial statistics and multidimensional DenseNet. Med. Phys. 2020, 47, 1021–1033. [Google Scholar] [CrossRef] [PubMed]
- Xue, Y.; Zhang, R.; Deng, Y.; Chen, K.; Jiang, T. A preliminary examination of the diagnostic value of deep learning in hip osteoarthritis. PLoS ONE 2017, 12, e0178992. [Google Scholar] [CrossRef] [Green Version]
- Li, S.; Wei, J.; Chan, H.P.; Helvie, M.A.; Roubidoux, M.A.; Lu, Y.; Zhou, C.; Hadjiiski, L.M.; Samala, R.K. Computer-aided assessment of breast density: Comparison of supervised deep learning and feature-based statistical learning. Phys. Med. Biol. 2018, 63, 025005. [Google Scholar] [CrossRef] [PubMed]
- Zhang, C.; Wu, S.; Lu, Z.; Shen, Y.; Wang, J.; Huang, P.; Lou, J.; Liu, C.; Xing, L.; Zhang, J.; et al. Hybrid adversarial-discriminative network for leukocyte classification in leukemia. Med. Phys. 2020, 47, 3732–3744. [Google Scholar] [CrossRef] [PubMed]
- Huynh, B.Q.; Li, H.; Giger, M.L. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks. J. Med. Imaging 2016, 3, 034501. [Google Scholar] [CrossRef] [PubMed]
- Ning, Z.; Luo, J.; Li, Y.; Han, S.; Feng, Q.; Xu, Y.; Chen, W.; Chen, T.; Zhang, Y. Pattern Classification for Gastrointestinal Stromal Tumors by Integration of Radiomics and Deep Convolutional Features. IEEE J. Biomed. Health Inform. 2019, 23, 1181–1191. [Google Scholar] [CrossRef]
- Choi, H.; Ha, S.; Im, H.J.; Paek, S.H.; Lee, D.S. Refining diagnosis of Parkinson’s disease with deep learning-based interpretation of dopamine transporter imaging. NeuroImage Clin. 2017, 16, 586–594. [Google Scholar] [CrossRef]
- Luo, B.; Shen, J.; Cheng, S.; Wang, Y.; Pantic, M. Shape Constrained Network for Eye Segmentation in the Wild. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, CO, USA, 1–5 March 2020; pp. 1952–1960. [Google Scholar]
- Wimmer, G.; Hegenbart, S.; Vecsei, A.; Uhl, A. Convolutional Neural Network Architectures for the Automated Diagnosis of Celiac Disease. In Computer-Assisted and Robotic Endoscopy; CARE 2016 Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10170. [Google Scholar] [CrossRef]
- Kim, K.H.; Choi, S.H.; Park, S.H. Improving Arterial Spin Labeling by Using Deep Learning. Radiology 2018, 287, 658–666. [Google Scholar] [CrossRef]
- Song, Y.; Zhang, L.; Chen, S.; Ni, D.; Lei, B.; Wang, T. Accurate Segmentation of Cervical Cytoplasm and Nuclei Based on Multiscale Convolutional Network and Graph Partitioning. IEEE Trans. Biomed. Eng. 2015, 62, 2421–2433. [Google Scholar] [CrossRef]
- Moradi, M.; Gur, Y.; Wang, H.; Prasanna, P.; Syeda-Mahmood, T. A hybrid learning approach for semantic labeling of cardiac CT slices and recognition of body position. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 1418–1421. [Google Scholar] [CrossRef]
- Khosravan, N.; Celik, H.; Turkbey, B.; Jones, E.C.; Wood, B.; Bagci, U. A collaborative computer aided diagnosis (C-CAD) system with eye-tracking, sparse attentional model, and deep learning. Med. Image Anal. 2019, 51, 101–115. [Google Scholar] [CrossRef] [PubMed]
- Betancur, J.; Commandeur, F.; Motlagh, M.; Sharir, T.; Einstein, A.J.; Bokhari, S.; Fish, M.B.; Ruddy, T.D.; Kaufmann, P.; Sinusas, A.J.; et al. Deep Learning for Prediction of Obstructive Disease From Fast Myocardial Perfusion SPECT: A Multicenter Study. JACC Cardiovasc. Imaging 2018, 11, 1654–1663. [Google Scholar] [CrossRef]
- Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298. [Google Scholar] [CrossRef] [Green Version]
- Mitsuhara, M.; Fukui, H.; Sakashita, Y.; Ogata, T.; Hirakawa, T.; Yamashita, T.; Fujiyoshi, H. Embedding Human Knowledge into Deep Neural Network via Attention Map. arXiv 2019, arXiv:1905.03540v4. [Google Scholar]
- Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243. [Google Scholar] [CrossRef]
- Bakator, M.; Radosav, D. Deep Learning and Medical Diagnosis: A Review of Literature. Multimodal Technol. Interact. 2018, 2, 47. [Google Scholar] [CrossRef] [Green Version]
- Hecht-Nielsen, R. Neurocomputing: Picking the human brain. IEEE Spectr. 1988, 25, 36–41. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; MIT press: Cambridge, MA, USA, 2012; pp. 1097–1105. [Google Scholar]
- Arasu, A.; Garcia-Molina, H. Extracting Structured Data from Web Pages. In Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, San Diego, CA, USA, 9–12 June 2003; pp. 337–348. [Google Scholar] [CrossRef]
- Vizcarra, J.; Place, R.; Tong, L.; Gutman, D.; Wang, M.D. Fusion in Breast Cancer Histology Classification. ACM BCB 2019, 2019, 485–493. [Google Scholar] [CrossRef]
- Velicer, W.F.; Molenaar, P.C. Time Series Analysis for Psychological Research. In Handbook of Psychology, 2nd ed.; Weiner, I., Schinka, J.A., Velicer, W.F., Eds.; Wiley: Hoboken, NJ, USA, 2012. [Google Scholar] [CrossRef]
- LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
- Ursuleanu, T.F.; Luca, A.R.; Gheorghe, L.; Grigorovici, R.; Iancu, S.; Hlusneac, M.; Preda, C.; Grigorovici, A. Unified Analysis Specific to the Medical Field in the Interpretation of Medical Images through the Use of Deep Learning. E-Health Telecommun. Syst. Netw. 2021, 10, 41–74. [Google Scholar] [CrossRef]
- Pandey, B.; Pandey, D.K.; Mishra, B.P.; Rhmann, W. A comprehensive survey of deep learning in the field of medical imaging and medical natural language processing: Challenges and research directions. J. King Saud Univ. Comput. Inf. Sci. 2021. [Google Scholar] [CrossRef]
- Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
- Wang, X.; Liang, G.; Zhang, Y.; Blanton, H.; Bessinger, Z.; Jacobs, N. Inconsistent Performance of Deep Learning Models on Mammogram Classification. J. Am. Coll. Radiol. 2020, 17, 796–803. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- McKinney, S.M.; Sieniek, M.; Godbole, V.; Godwin, J.; Antropova, N.; Ashrafian, H.; Back, T.; Chesus, M.; Corrado, G.S.; Darzi, A.; et al. International evaluation of an AI system for breast cancer screening. Nature 2020, 577, 89–94. [Google Scholar] [CrossRef]
- Bai, J.; Posner, R.; Wang, T.; Yang, C.; Nabavi, S. Applying deep learning in digital breast tomosynthesis for automatic breast cancer detection: A review. Med. Image Anal. 2021, 71, 102049. [Google Scholar] [CrossRef] [PubMed]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556v6. [Google Scholar]
- León, J.; Escobar, J.J.; Ortiz, A.; Ortega, J.; González, J.; Martín-Smith, P.; Gan, J.Q.; Damas, M. Deep learning for EEG-based Motor Imagery classification: Accuracy-cost trade-off. PLoS ONE 2020, 15, e0234178. [Google Scholar] [CrossRef] [PubMed]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Saravanan, S.; Juliet, S. Deep Medical Image Reconstruction with Autoencoders using Deep Boltzmann Machine Training. EAI Endorsed Trans. Pervasive Health Technol. 2020, 6, e2. [Google Scholar] [CrossRef]
- Gondara, L. Medical Image Denoising Using Convolutional Denoising Autoencoders. In Proceedings of the 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), Barcelona, Spain, 12–15 December 2016; pp. 241–246. [Google Scholar] [CrossRef] [Green Version]
- Zhou, B.; Khosla, A.; Lapedriza, A.; Torralba, A.; Oliva, A. Places: An Image Database for Deep Scene Understanding. arXiv 2016, arXiv:1610.02055v1. [Google Scholar] [CrossRef]
- Nowling, R.J.; Bukowy, J.; McGarry, S.D.; Nencka, A.S.; Blasko, O.; Urbain, J.; Lowman, A.; Barrington, A.; Banerjee, A.; Iczkowski, K.A.; et al. Classification before Segmentation: Improved U-Net Prostate Segmentation. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; pp. 1–4. [Google Scholar] [CrossRef]
- Yu, E.M.; Iglesias, J.E.; Dalca, A.V.; Sabuncu, M.R. An Auto-Encoder Strategy for Adaptive Image Segmentation. In Proceedings of the Third Conference on Medical Imaging with Deep Learning (PMLR), Montreal, QC, Canada, 6–8 July 2020; Volume 121, pp. 881–891. [Google Scholar]
- Uzunova, H.; Schultz, S.; Handels, H.; Ehrhardt, J. Unsupervised pathology detection in medical images using conditional variational autoencoders. Int. J. CARS 2019, 14, 451–461. [Google Scholar] [CrossRef]
- Chen, M.; Shi, X.; Zhang, Y.; Wu, D.; Guizani, M. Deep Features Learning for Medical Image Analysis with Convolutional Autoencoder Neural Network. IEEE Trans. Big Data 2017, 1. [Google Scholar] [CrossRef]
- Saltz, J.; Gupta, R.; Hou, L.; Kurc, T.; Singh, P.; Nguyen, V.; Samaras, D.; Shroyer, K.R.; Zhao, T.; Batiste, R.; et al. Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images. Cell Rep. 2018, 23, 181–193.e7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Rumelhart, D.; Hinton, G.; Williams, R. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
- Lam, C.; Yu, C.; Huang, L.; Rubin, D. Retinal Lesion Detection with Deep Learning Using Image Patches. Investig. Opthalmol. Vis. Sci. 2018, 59, 590–596. [Google Scholar] [CrossRef]
- Rajaraman, S.; Antani, S.; Poostchi, M.; Silamut, K.; Hossain, A.; Maude, R.; Jaeger, S.; Thoma, G.R. Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images. PeerJ 2018, 6, e4568. [Google Scholar] [CrossRef]
- Nielsen, A.; Hansen, M.B.; Tietze, A.; Mouridsen, K. Prediction of Tissue Outcome and Assessment of Treatment Effect in Acute Ischemic Stroke Using Deep Learning. Stroke 2018, 49, 1394–1401. [Google Scholar] [CrossRef] [PubMed]
- Lee, H.C.; Ryu, H.G.; Chung, E.J.; Jung, C.W. Prediction of Bispectral Index during Target-controlled Infusion of Propofol and Remifentanil: A Deep Learning Approach. Anesthesiology 2018, 128, 492–501. [Google Scholar] [CrossRef]
- Zeng, L.L.; Wang, H.; Hu, P.; Yang, B.; Pu, W.; Shen, H.; Chen, X.; Liu, Z.; Yin, H.; Tan, Q.; et al. Multi-Site Diagnostic Classification of Schizophrenia Using Discriminant Deep Learning with Functional Connectivity MRI. EBioMedicine 2018, 30, 74–85. [Google Scholar] [CrossRef] [Green Version]
- Kooi, T.; Van Ginneken, B.; Karssemeijer, N.; den Heeten, A. Discriminating solitary cysts from soft tissue lesions in mammography using a pretrained deep convolutional neural network. Med. Phys. 2017, 44, 1017–1027. [Google Scholar] [CrossRef] [Green Version]
- Heinsfeld, A.S.; Franco, A.R.; Craddock, R.C.; Buchweitz, A.; Meneguzzi, F. Identification of autism spectrum disorder using deep learning and the ABIDE dataset. NeuroImage Clin. 2018, 17, 16–23. [Google Scholar] [CrossRef]
- Wang, J.; Yang, X.; Cai, H.; Tan, W.; Jin, C.; Li, L. Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning. Sci. Rep. 2016, 6, 27327. [Google Scholar] [CrossRef]
- Fu, H.; Cheng, J.; Xu, Y.; Zhang, C.; Wong, D.W.K.; Liu, J.; Cao, X. Disc-Aware Ensemble Network for Glaucoma Screening From Fundus Image. IEEE Trans. Med. Imaging 2018, 37, 2493–2501. [Google Scholar] [CrossRef] [Green Version]
- Yu, C.; Yang, S.; Kim, W.; Jung, J.; Chung, K.-Y.; Lee, S.W.; Oh, B. Correction: Acral melanoma detection using a convolutional neural network for dermoscopy images. PLoS ONE 2018, 13, e0196621. [Google Scholar] [CrossRef] [Green Version]
- Han, S.S.; Park, G.H.; Lim, W.; Kim, M.S.; Na, J.I.; Park, I.; Chang, S.E. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network. PLoS ONE 2018, 13, e0191493. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Hsieh, Y.J.; Tseng, H.C.; Chin, C.L.; Shao, Y.H.; Tsai, T.Y. Based on DICOM RT Structure and Multiple Loss Function Deep Learning Algorithm in Organ Segmentation of Head and Neck Image. In Proceedings of the Future Trends in Biomedical and Health Informatics and Cybersecurity in Medical Devices: Proceedings of the International Conference on Biomedical and Health Informatics, ICBHI 2019, Taipei, Taiwan, 17–20 April 2019; Springer: Berlin/Heidelberg, Germany, 2020; Volume 74, pp. 428–435. [Google Scholar] [CrossRef]
- Ngo, T.A.; Lu, Z.; Carneiro, G. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance. Med. Image Anal. 2017, 35, 159–171. [Google Scholar] [CrossRef] [PubMed]
- Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531. [Google Scholar]
- Araújo, T.; Aresta, G.; Castro, E.M.; Rouco, J.; Aguiar, P.; Eloy, C.; Polónia, A.; Campilho, A. Classification of breast cancer histology images using Convolutional Neural Networks. PLoS ONE 2017, 12, e0177544. [Google Scholar] [CrossRef]
- Han, Z.; Wei, B.; Zheng, Y.; Yin, Y.; Li, K.; Li, S. Breast Cancer Multi-classification from Histopathological Images with Structured Deep Learning Model. Sci. Rep. 2017, 7, 4172. [Google Scholar] [CrossRef]
- Zhang, X.; Yu, F.X.; Chang, S.; Wang, S. Deep Transfer Network: Unsupervised Domain Adaptation. arXiv 2015, arXiv:1503.00591v1. [Google Scholar]
- Hutchinson, B.; Deng, L.; Yu, D. Tensor deep stacking networks. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1944–1957. [Google Scholar] [CrossRef] [PubMed]
- Hjelm, R.D.; Fedorov, A.; Lavoie-Marchildon, S.; Grewal, K.; Bachman, P.; Trischler, A.; Bengio, Y. Learning deep representations by mutual information estimation and maximization. arXiv 2018, arXiv:1808.06670. [Google Scholar]
- Alom, M.Z.; Yakopcic, C.; Nasrin, M.S.; Taha, T.M.; Asari, V.K. Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network. J. Digit. Imaging 2019, 32, 605–617. [Google Scholar] [CrossRef] [Green Version]
- Zhao, Y.; Dong, Q.; Zhang, S.; Zhang, W.; Chen, H.; Jiang, X.; Guo, L.; Hu, X.; Han, J.; Liu, T. Automatic Recognition of fMRI-Derived Functional Networks Using 3-D Convolutional Neural Networks. IEEE Trans. Biomed. Eng. 2018, 65, 1975–1984. [Google Scholar] [CrossRef] [PubMed]
- Tiulpin, A.; Thevenot, J.; Rahtu, E.; Lehenkari, P.; Saarakkala, S. Automatic Knee Osteoarthritis Diagnosis from Plain Radiographs: A Deep Learning-Based Approach. Sci. Rep. 2018, 8, 1727. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Z.; Liang, X.; Dong, X.; Xie, Y.; Cao, G. A Sparse-View CT Reconstruction Method Based on Combination of DenseNet and Deconvolution. IEEE Trans. Med. Imaging 2018, 37, 1407–1417. [Google Scholar] [CrossRef] [PubMed]
- Luca, A.R.; Ursuleanu, T.F.; Gheorghe, L.; Grigorovici, R.; Iancu, S.; Hlusneac, M.; Preda, C.; Grigorovici, A. Designing a High-Performance Deep Learning Theoretical Model for Biomedical Image Segmentation by Using Key Elements of the Latest U-Net-Based Architectures. J. Comput. Commun. 2021, 9, 8–20. [Google Scholar] [CrossRef]
- Huang, C.H.; Wu, H.Y.; Lin, Y.L. HarDNet-MSEG: A Simple Encoder-Decoder Polyp Segmentation Neural Network that Achieves Over 0.9 Mean Dice and 86 FPS. arXiv 2021, arXiv:2101.07172. [Google Scholar]
- Haryanto, T.; Wasito, I.; Suhartanto, H. Convolutional Neural Network (CNN) for gland images classification. In Proceedings of the 2017 11th International Conference on Information & Communication Technology and System (ICTS), Surabaya, Indonesia, 31 October 2017; pp. 55–60. [Google Scholar] [CrossRef]
- Cao, H.; Bernard, S.; Heutte, L.; Sabourin, R. Improve the performance of transfer learning without fine-tuning using dissimilarity-based multi-view learning for breast cancer histology images. In Image Analysis and Recognition; Springer: Cham, Switzerland, 2018; pp. 779–787. [Google Scholar] [CrossRef] [Green Version]
- Luo, X.; Mori, K.; Peters, T.M. Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications. Annu. Rev. Biomed. Eng. 2018, 20, 221–251. [Google Scholar] [CrossRef]
- Xiao, C.; Choi, E.; Sun, J. Opportunities and challenges in developing deep learning models using electronic health records data: A systematic review. J. Am. Med. Inform. Assoc. 2018, 25, 1419–1428. [Google Scholar] [CrossRef]
- Shickel, B.; Tighe, P.J.; Bihorac, A.; Rashidi, P. Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis. IEEE J. Biomed. Heal. Inform. 2018, 22, 1589–1604. [Google Scholar] [CrossRef] [PubMed]
- Karkra, S.; Singh, P.; Kaur, K. Convolution Neural Network: A Shallow Dive in to Deep Neural Net Technology. Int. J. Recent Technol. Eng. 2019, 8, 487–495. [Google Scholar] [CrossRef]
- Ranschaert, E.R.; Morozov, S.; Algra, P.R. Artificial Intelligence in Medical Imaging: Opportunities, Applications and Risks; Springer: Berlin, Germany, 2019. [Google Scholar] [CrossRef]
- Tsang, G.; Xie, X.; Zhou, S.M. Harnessing the Power of Machine Learning in Dementia Informatics Research: Issues, Opportunities, and Challenges. IEEE Rev. Biomed. Eng. 2019, 13, 113–129. [Google Scholar] [CrossRef]
- Haryanto, T.; Suhartanto, H.; Murni, A.; Kusmardi, K. Strategies to Improve Performance of Convolutional Neural Network on Histopathological Images Classification. In Proceedings of the 2019 International Conference on Advanced Computer Science and information Systems (ICACSIS), Bali, Indonesia, 12–13 October 2019; pp. 125–132. [Google Scholar] [CrossRef]
- Das, A.; Nair, M.S.; Peter, S.D. Computer-Aided Histopathological Image Analysis Techniques for Automated Nuclear Atypia Scoring of Breast Cancer: A Review. J. Digit. Imaging 2020, 33, 1–31. [Google Scholar] [CrossRef] [PubMed]
- Pattanaik, P.; Mittal, M.; Khan, M.Z.; Panda, S. Malaria detection using deep residual networks with mobile microscopy. J. King Saud Univ. Comput. Inf. Sci. 2020. [Google Scholar] [CrossRef]
- Das, A.; Rad, P.; Choo, K.-K.R.; Nouhi, B.; Lish, J.; Martel, J. Distributed machine learning cloud teleophthalmology IoT for predicting AMD disease progression. Futur. Gener. Comput. Syst. 2019, 93, 486–498. [Google Scholar] [CrossRef]
- Kim, Y.D.; Noh, K.J.; Byun, S.J.; Lee, S.; Kim, T.; Sunwoo, L.; Lee, K.J.; Kang, S.-H.; Park, K.H.; Park, S.J. Effects of Hypertension, Diabetes, and Smoking on Age and Sex Prediction from Retinal Fundus Images. Sci. Rep. 2020, 10, 4623. [Google Scholar] [CrossRef]
- Apostolopoulos, S.; Ciller, C.; De Zanet, S.; Wolf, S.; Sznitman, R. RetiNet: Automatic AMD identification in OCT volumetric data. Invest. Ophthalmol. Vis. Sci. 2017, 58, 387. [Google Scholar]
- Zhang, J.; Xia, Y.; Wu, Q.; Xie, Y. Classification of Medical Images and Illustrations in the Biomedical Literature Using Synergic Deep Learning. arXiv 2017, arXiv:1706.09092. [Google Scholar]
- Serj, M.F.; Lavi, B.; Hoff, G.; Valls, D.P. A Deep Convolutional Neural Network for Lung Cancer Diagnostic. arXiv 2018, arXiv:1804.08170. [Google Scholar]
- Jang, R.; Kim, N.; Jang, M.; Lee, K.H.; Lee, S.M.; Na Noh, H.; Seo, J.B. Assessment of the Robustness of Convolutional Neural Networks in Labeling Noise by Using Chest X-Ray Images from Multiple Centers. JMIR Med. Inform. 2020, 8, e18089. [Google Scholar] [CrossRef]
- Bengio, Y.; Louradour, J.; Collobert, R.; Weston, J. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning—ICML 2009, Montreal, QC, Canada, 14–18 June 2009; pp. 41–48. [Google Scholar] [CrossRef]
- Guan, Q.; Huang, Y.; Zhong, Z.; Zheng, Z.; Zheng, L.; Yang, Y. Diagnose like a Radiologist: Attention Guided Convolutional Neural Network for Thorax Disease Classification. arXiv 2018, arXiv:1801.09927v1. [Google Scholar]
- Xia, X.; Gong, J.; Hao, W.; Yang, T.; Lin, Y.; Wang, S.; Peng, W. Comparison and Fusion of Deep Learning and Radiomics Features of Ground-Glass Nodules to Predict the Invasiveness Risk of Stage-I Lung Adenocarcinomas in CT Scan. Front. Oncol. 2020, 10, 418. [Google Scholar] [CrossRef] [PubMed]
- Jesson, A.; Guizard, N.; Ghalehjegh, S.H.; Goblot, D.; Soudan, F.; Chapados, N. CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017; MICCAI 2017 Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10435. [Google Scholar] [CrossRef] [Green Version]
- Astudillo, P.; Mortier, P.; De Beule, M.; Wyffels, F. Curriculum Deep Reinforcement Learning with Different Exploration Strategies: A Feasibility Study on Cardiac Landmark Detection. In Proceedings of the 13th International Joint Conference on Biomedical Engineering Systems and Technologies—Bioimaging, Valetta, Malta, 24–26 February 2020; pp. 37–45. [Google Scholar] [CrossRef]
- Li, C.Y.; Liang, X.; Hu, Z.; Xing, E.P. Knowledge-Driven Encode, Retrieve, Paraphrase for Medical Image Report Generation. Proc. Conf. AAAI Artif. Intell. 2019, 33, 6666–6673. [Google Scholar] [CrossRef] [Green Version]
- Ghafoorian, M.; Mehrtash, A.; Kapur, T.; Karssemeijer, N.; Marchiori, E.; Pesteie, M.; Guttmann, C.R.G.; De Leeuw, F.-E.; Tempany, C.M.; Van Ginneken, B.; et al. Transfer Learning for Domain Adaptation in MRI: Application in Brain Lesion Segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10435. [Google Scholar] [CrossRef] [Green Version]
- Jing, B.; Xie, P.; Xing, E. On the Automatic Generation of Medical Imaging Reports. arXiv 2017, arXiv:1711.08195. [Google Scholar]
- Liu, G.; Hsu, T.-M.H.; McDermott, M.; Boag, W.; Weng, W.-H.; Szolovits, P.; Ghassemi, M. Clinically Accurate Chest X-Ray Report Generation. arXiv 2019, arXiv:1904.02633v2. [Google Scholar]
- Gale, W.; Oakden-Rayner, L.; Carneiro, G.; Bradley, A.P.; Palmer, L.J. Producing radiologist-quality reports for interpretable artificial intelligence. arXiv 2018, arXiv:1806.00340v1. [Google Scholar]
- Zhang, Y.; Wei, Y.; Wu, Q.; Zhao, P.; Niu, S.; Huang, J.; Tan, M. Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis. IEEE Trans. Image Process. 2020, 29, 7834–7844. [Google Scholar] [CrossRef]
- Tang, Y.; Wang, X.; Harrison, A.P.; Lu, L.; Xiao, J.; Summers, R.M. Attention-Guided Curriculum Learning for Weakly Supervised Classification and Localization of Thoracic Diseases on Chest Radiographs. In Proceedings of the Machine Learning in Medical Imaging, MLMI 2018; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; Volume 11046. [Google Scholar] [CrossRef] [Green Version]
- Jiménez-Sánchez, A.; Mateus, D.; Kirchhoff, S.; Kirchhoff, C.; Biberthaler, P.; Navab, N.; Ballester, M.A.G.; Piella, G. Medical-based Deep Curriculum Learning for Improved Fracture Classification. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019; MICCAI 2019 Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2019; Volume 11769. [Google Scholar] [CrossRef] [Green Version]
- Jiménez-Sánchez, A.; Mateus, D.; Kirchhoff, S.; Kirchhoff, C.; Biberthaler, P.; Navab, N.; Ballester, M.A.; Piella, G. Curriculum learning for annotation-efficient medical image analysis: Scheduling data with prior knowledge and uncertainty. arXiv 2020, arXiv:2007.16102v1. [Google Scholar]
- Wei, J.; Suriawinata, A.; Ren, B.; Liu, X.; Lisovsky, M.; Vaickus, L.; Brown, C.; Baker, M.; Nasir-Moin, M.; Tomita, N.; et al. Learn Like a Pathologist: Curriculum Learning by Annotator Agreement for Histopathology Image Classification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikola, HI, USA, 5–9 January 2021; pp. 2473–2483. [Google Scholar] [CrossRef]
- Wang, K.; Zhang, X.; Huang, S.; Chen, F.; Zhang, X.; Huangfu, L. Learning to Recognize Thoracic Disease in Chest X-Rays With Knowledge-Guided Deep Zoom Neural Networks. IEEE Access 2020, 8, 159790–159805. [Google Scholar] [CrossRef]
- Huang, X.; Fang, Y.; Lu, M.; Yan, F.; Yang, J.; Xu, Y. Dual-Ray Net: Automatic Diagnosis of Thoracic Diseases Using Frontal and Lateral Chest X-rays. J. Med. Imaging Heal. Inform. 2020, 10, 348–355. [Google Scholar] [CrossRef]
- Yang, Z.; Cao, Z.; Zhang, Y.; Han, M.; Xiao, J.; Huang, L.; Wu, S.; Ma, J.; Chang, P. MommiNet: Mammographic Multi-view Mass Identification Networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2020; MICCAI 2020 Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12266. [Google Scholar] [CrossRef]
- Liu, Q.; Yu, L.; Luo, L.; Dou, Q.; Heng, P.A. Semi-Supervised Medical Image Classification With Relation-Driven Self-Ensembling Model. IEEE Trans. Med. Imaging 2020, 39, 3429–3440. [Google Scholar] [CrossRef]
- Hsu, S.-M.; Kuo, W.-H.; Kuo, F.-C.; Liao, Y.-Y. Breast tumor classification using different features of quantitative ultrasound parametric images. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 623–633. [Google Scholar] [CrossRef]
- Antropova, N.; Huynh, B.Q.; Giger, M.L. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med. Phys. 2017, 44, 5162–5171. [Google Scholar] [CrossRef] [PubMed]
- Xie, Y.; Zhang, J.; Liu, S.; Cai, W.; Xia, Y. Lung nodule classification by jointly using visual descriptors and deep features. In Medical Computer Vision and Bayesian and Graphical Models for Biomedical Imaging; Springer: Cham, Switzerland, 2016; pp. 116–125. [Google Scholar] [CrossRef]
- Chai, Y.; Liu, H.; Xu, J. Glaucoma diagnosis based on both hidden features and domain knowledge through deep learning models. Knowledge-Based Syst. 2018, 161, 147–156. [Google Scholar] [CrossRef]
- Hagerty, J.R.; Stanley, R.J.; Almubarak, H.A.; Lama, N.; Kasmi, R.; Guo, P.; Drugge, R.J.; Rabinovitz, H.S.; Oliviero, M.; Stoecker, W.V. Deep Learning and Handcrafted Method Fusion: Higher Diagnostic Accuracy for Melanoma Dermoscopy Images. IEEE J. Biomed. Heal. Inform. 2019, 23, 1385–1391. [Google Scholar] [CrossRef]
- Buty, M.; Xu, Z.; Gao, M.; Bagci, U.; Wu, A.; Mollura, D.J. Characterization of Lung Nodule Malignancy Using Hybrid Shape and Appearance Features. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2016; pp. 662–670. [Google Scholar] [CrossRef] [Green Version]
- Saba, T.; Mohamed, A.S.; El-Affendi, M.; Amin, J.; Sharif, M. Brain tumor detection using fusion of hand crafted and deep learning features. Cogn. Syst. Res. 2020, 59, 221–230. [Google Scholar] [CrossRef]
- Yang, J.; Dvornek, N.C.; Zhang, F.; Chapiro, J.; Lin, M.; Duncan, J.S. Unsupervised domain adaptation via disentangled representations: Application to cross-modality liver segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 255–263. [Google Scholar] [CrossRef] [Green Version]
- Liu, T.; Guo, Q.; Lian, C.; Ren, X.; Liang, S.; Yu, J.; Niu, L.; Sun, W.; Shen, D. Automated detection and classification of thyroid nodules in ultrasound images using clinical-knowledge-guided convolutional neural networks. Med. Image Anal. 2019, 58, 101555. [Google Scholar] [CrossRef]
- Feng, H.; Cao, J.; Wang, H.; Xie, Y.; Yang, D.; Feng, J.; Chen, B. A knowledge-driven feature learning and integration method for breast cancer diagnosis on multi-sequence MRI. Magn. Reson. Imaging 2020, 69, 40–48. [Google Scholar] [CrossRef]
- Chen, S.; Qin, J.; Ji, X.; Lei, B.; Wang, T.; Ni, D.; Cheng, J.-Z. Automatic Scoring of Multiple Semantic Attributes With Multi-Task Feature Leverage: A Study on Pulmonary Nodules in CT Images. IEEE Trans. Med. Imaging 2017, 36, 802–814. [Google Scholar] [CrossRef]
- Murthy, V.; Hou, L.; Samaras, D.; Kurc, T.M.; Saltz, J.H. Center-focusing multi-task CNN with injected features for classification of glioma nuclear images. In Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, 24–31 March 2017; pp. 834–841. [Google Scholar] [CrossRef]
- Yu, S.; Zhou, H.Y.; Ma, K.; Bian, C.; Chu, C.; Liu, H.; Zheng, Y. Difficulty-Aware Glaucoma Classification with Multi-rater Consensus Modeling. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2020; pp. 741–750. [Google Scholar] [CrossRef]
- Wang, W.; Lu, Y.; Wu, B.; Chen, T.; Chen, D.Z.; Wu, J. Deep active self-paced learning for accurate pulmonary nodule segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2018; pp. 723–731. [Google Scholar] [CrossRef]
- Roth, H.R.; Lu, L.; Seff, A.; Cherry, K.M.; Hoffman, J.; Wang, S.; Liu, J.; Turkbey, E.; Summers, R.M. A new 2.5 D representation for lymph node detection using random sets of deep convolutional neural network observations. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2014; pp. 520–527. [Google Scholar]
- Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312. [Google Scholar] [CrossRef] [Green Version]
- Näppi, J.J.; Hironaka, T.; Regge, D.; Yoshida, H. Deep transfer learning of virtual endoluminal views for the detection of polyps in CT colonography. In Medical Imaging 2016: Computer-Aided Diagnosis; International Society for Optics and Photonics: Bellingham, WA, USA, 2016; Volume 9785, p. 97852B. [Google Scholar] [CrossRef]
- Zhang, R.; Zheng, Y.; Mak, T.W.C.; Yu, R.; Wong, S.H.; Lau, J.Y.W.; Poon, C.C.Y. Automatic Detection and Classification of Colorectal Polyps by Transferring Low-Level CNN Features From Nonmedical Domain. IEEE J. Biomed. Health Inform. 2017, 21, 41–47. [Google Scholar] [CrossRef]
- Zhao, J.; Li, D.; Kassam, Z.; Howey, J.; Chong, J.; Chen, B.; Li, S. Tripartite-GAN: Synthesizing liver contrast-enhanced MRI to improve tumor detection. Med. Image Anal. 2020, 63, 101667. [Google Scholar] [CrossRef]
- Zhang, J.; Saha, A.; Zhu, Z.; Mazurowski, M.A. Hierarchical Convolutional Neural Networks for Segmentation of Breast Tumors in MRI With Application to Radiogenomics. IEEE Trans. Med. Imaging 2019, 38, 435–447. [Google Scholar] [CrossRef]
- Setio, A.A.A.; Ciompi, F.; Litjens, G.; Gerke, P.; Jacobs, C.; van Riel, S.J.; Wille, M.M.W.; Naqibullah, M.; Sanchez, C.I.; van Ginneken, B. Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks. IEEE Trans. Med. Imaging 2016, 35, 1160–1169. [Google Scholar] [CrossRef]
- Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
- Liu, J.; Wang, D.; Lu, L.; Wei, Z.; Kim, L.; Turkbey, E.B.; Sahiner, B.; Petrick, N.A.; Summers, R.M. Detection and diagnosis of colitis on computed tomography using deep convolutional neural networks. Med. Phys. 2017, 44, 4630–4642. [Google Scholar] [CrossRef]
- Ruhan, S.; Owens, W.; Wiegand, R.; Studin, M.; Capoferri, D.; Barooha, K.; Greaux, A.; Rattray, R.; Hutton, A.; Cintineo, J.; et al. Intervertebral disc detection in X-ray images using faster R-CNN. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2017, 2017, 564–567. [Google Scholar] [CrossRef]
- Ben-Ari, R.; Akselrod-Ballin, A.; Karlinsky, L.; Hashoul, S. Domain specific convolutional neural nets for detection of architectural distortion in mammograms. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 552–556. [Google Scholar] [CrossRef]
- Platania, R.; Shams, S.; Yang, S.; Zhang, J.; Lee, K.; Park, S.J. Automated breast cancer diagnosis using deep learning and region of interest detection (BC-Droid). In Proceedings of the 8th ACM international conference on bioinformatics, computational biology, and health informatics, Boston, MA, USA, 20–23 August 2017; pp. 536–543. [Google Scholar] [CrossRef]
- Li, N.; Liu, H.; Qiu, B.; Guo, W.; Zhao, S.; Li, K.; He, J. Detection and attention: Diagnosing pulmonary lung cancer from CT by imitating physicians. arXiv 2017, arXiv:1712.05114v1. [Google Scholar]
- Cai, G.; Chen, J.; Wu, Z.; Tang, H.; Liu, Y.; Wang, S.; Su, S. One stage lesion detection based on 3D context convolutional neural networks. Comput. Electr. Eng. 2019, 79, 106449. [Google Scholar] [CrossRef]
- Ni, Q.; Sun, Z.Y.; Qi, L.; Chen, W.; Yang, Y.; Wang, L.; Zhang, X.; Yang, L.; Fang, Y.; Xing, Z.; et al. A deep learning approach to characterize 2019 coronavirus disease (COVID-19) pneumonia in chest CT images. Eur. Radiol. 2020, 30, 6517–6527. [Google Scholar] [CrossRef]
- Lisowska, A.; Beveridge, E.; Muir, K.; Poole, I. Thrombus detection in ct brain scans using a convolutional neural network. In Proceedings of the 10th International Joint Conference on Biomedical Engineering Systems and Technologies—BIOIMAGING, Porto, Portugal, 21–23 February 2017; Volume 3, pp. 24–33. [Google Scholar] [CrossRef]
- Lisowska, A.; O’Neil, A.; Dilys, V.; Daykin, M.; Beveridge, E.; Muir, K.; Mclaughlin, S.; Poole, I. Context-aware convolutional neural networks for stroke sign detection in non-contrast CT scans. In Proceedings of the Annual Conference on Medical Image Understanding and Analysis, Edinburgh, UK, 11–13 July 2017; Springer: Cham, Switzerland, 2017; pp. 494–505. [Google Scholar]
- Li, H.; Liu, X.; Boumaraf, S.; Liu, W.; Gong, X.; Ma, X. A New Three-stage Curriculum Learning Approach for Deep Network Based Liver Tumor Segmentation. In 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Li, X.; Qin, G.; He, Q.; Sun, L.; Zeng, H.; He, Z.; Chen, W.; Zhen, X.; Zhou, L. Digital breast tomosynthesis versus digital mammography: Integration of image modalities enhances deep learning-based breast mass classification. Eur. Radiol. 2020, 30, 778–788. [Google Scholar] [CrossRef]
- Bakalo, R.; Ben-Ari, R.; Goldberger, J. Classification and Detection in Mammograms with Weak Supervision via Dual Branch Deep Neural Net. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1905–1909. [Google Scholar] [CrossRef] [Green Version]
- Liang, G.; Wang, X.; Zhang, Y.; Jacobs, N. Weakly-Supervised Self-Training for Breast Cancer Localization. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1124–1127. [Google Scholar] [CrossRef]
- Fu, L.; Ma, J.; Ren, Y.; Han, Y.S.; Zhao, J. Automatic detection of lung nodules: False positive reduction using convolution neural networks and handcrafted features. In Medical Imaging 2017: Computer-Aided Diagnosis; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 10134, p. 101340A. [Google Scholar] [CrossRef]
- Chao, C.H.; Zhu, Z.; Guo, D.; Yan, K.; Ho, T.Y.; Cai, J.; Harrison, A.P.; Ye, X.; Xiao, J.; Yuille, A.; et al. Lymph Node Gross Tumor Volume Detection in Oncology Imaging via Relationship Learning Using Graph Neural Network. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2020; pp. 772–782. [Google Scholar] [CrossRef]
- Sóñora-Mengan, A.; Gonidakis, P.; Jansen, B.; García-Naranjo, J.; Vandemeulebroucke, J. Evaluating several ways to combine handcrafted features-based system with a deep learning system using the LUNA16 Challenge framework. In Medical Imaging 2020: Computer-Aided Diagnosis; International Society for Optics and Photonics: Bellingham, WA, USA, 2020; Volume 11314. [Google Scholar] [CrossRef]
- Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; Larochelle, H. Brain tumor segmentation with Deep Neural Networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef] [Green Version]
- Christ, P.F.; Ettlinger, F.; Grün, F.; Elshaera, M.E.A.; Lipkova, J.; Schlecht, S.; Ahmaddy, F.; Tatavarty, S.; Bickel, M.; Bilic, P.; et al. Automatic liver and tumor segmentation of CT and MRI volumes using cascaded fully convolutional neural networks. arXiv 2017, arXiv:1702.05970v2. [Google Scholar]
- Roth, H.R.; Farag, A.; Lu, L.; Turkbey, E.B.; Summers, R.M. Deep convolutional networks for pancreas segmentation in CT imaging. In Medical Imaging 2015: Image Processing; International Society for Optics and Photonics: Bellingham, WA, USA, 2015; Volume 9413, p. 94131G. [Google Scholar] [CrossRef] [Green Version]
- Wu, L.; Xin, Y.; Li, S.; Wang, T.; Heng, P.A.; Ni, D. Cascaded fully convolutional networks for automatic prenatal ultrasound image segmentation. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 663–666. [Google Scholar] [CrossRef]
- Zeng, G.; Yang, X.; Li, J.; Yu, L.; Heng, P.A.; Zheng, G. 3D U-net with multi-level deep supervision: Fully automatic segmentation of proximal femur in 3D MR images. In International Workshop on Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2017; pp. 274–282. [Google Scholar] [CrossRef]
- Valverde, S.; Salem, M.; Cabezas, M.; Pareto, D.; Vilanova, J.C.; Ramió-Torrentà, L.; Rovira, À.; Salvi, J.; Oliver, A.; Llado, X. One-shot domain adaptation in multiple sclerosis lesion segmentation using convolutional neural networks. NeuroImage Clin. 2019, 21, 101638. [Google Scholar] [CrossRef]
- Chen, C.; Dou, Q.; Chen, H.; Heng, P.A. Semantic-aware generative adversarial nets for unsupervised domain adaptation in chest x-ray segmentation. In International Workshop on Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2018; pp. 143–151. [Google Scholar] [CrossRef] [Green Version]
- Hu, M.; Maillard, M.; Zhang, Y.; Ciceri, T.; La Barbera, G.; Bloch, I.; Gori, P. Knowledge distillation from multi-modal to mono-modal segmentation networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2020; pp. 772–781. [Google Scholar] [CrossRef]
- Kamnitsas, K.; Baumgartner, C.; Ledig, C.; Newcombe, V.; Simpson, J.; Kane, A.; Menon, D.; Nori, A.; Criminisi, A.; Rueckert, D.; et al. Unsupervised domain adaptation in brain lesion segmentation with adversarial networks. In International Conference on Information Processing in Medical Imaging; Springer: Cham, Switzerland, 2017; pp. 597–609. [Google Scholar] [CrossRef] [Green Version]
- Yu, F.; Zhao, J.; Gong, Y.; Wang, Z.; Li, Y.; Yang, F.; Dong, B.; Li, Q.; Zhang, L. Annotation-free cardiac vessel segmentation via knowledge transfer from retinal images. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 714–722. [Google Scholar] [CrossRef] [Green Version]
- Izadi, S.; Mirikharaji, Z.; Kawahara, J.; Hamarneh, G. Generative adversarial networks to segment skin lesions. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 881–884. [Google Scholar] [CrossRef]
- Lahiri, A.; Ayush, K.; Kumar Biswas, P.; Mitra, P. Generative adversarial learning for reducing manual annotation in semantic segmentation on large scale microscopy images: Automated vessel segmentation in retinal fundus image as test case. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 42–48. [Google Scholar] [CrossRef]
- Schlegl, T.; Seeböck, P.; Waldstein, S.M.; Schmidt-Erfurth, U.; Langs, G. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International Conference on Information Processing in Medical Imaging; Springer: Cham, Switzerland, 2017; pp. 146–157. [Google Scholar] [CrossRef] [Green Version]
- Berger, L.; Eoin, H.; Cardoso, M.J.; Ourselin, S. An adaptive sampling scheme to efficiently train fully convolutional networks for semantic segmentation. In Medical Image Understanding and Analysis; Springer: Cham, Switzerland, 2018; pp. 277–286. [Google Scholar] [CrossRef] [Green Version]
- Kervadec, H.; Dolz, J.; Granger, É.; Ayed, I.B. Curriculum semi-supervised segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 568–575. [Google Scholar] [CrossRef] [Green Version]
- Zhao, Z.; Zhang, X.; Chen, C.; Li, W.; Peng, S.; Wang, J.; Yang, X.; Zhang, L.; Zeng, Z. Semi-supervised self-taught deep learning for finger bones segmentation. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; pp. 1–4. [Google Scholar] [CrossRef] [Green Version]
- Zhang, J.; Wang, G.; Xie, H.; Zhang, S.; Huang, N.; Zhang, S.; Gu, L. Weakly supervised vessel segmentation in X-ray angiograms by self-paced learning from noisy labels with suggestive annotation. Neurocomputing 2020, 417, 114–127. [Google Scholar] [CrossRef]
- Wu, B.; Zhou, Z.; Wang, J.; Wang, Y. Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1109–1113. [Google Scholar] [CrossRef] [Green Version]
- Chen, C.; Biffi, C.; Tarroni, G.; Petersen, S.; Bai, W.; Rueckert, D. Learning Shape Priors for Robust Cardiac MR Segmentation from Multi-view Images. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2019; MICCAI 2019 Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2019; Volume 11765. [Google Scholar] [CrossRef] [Green Version]
- Hatamizadeh, A.; Terzopoulos, D.; Myronenko, A. End-to-end boundary aware networks for medical image segmentation. In International Workshop on Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2019; pp. 187–194. [Google Scholar] [CrossRef] [Green Version]
- Jin, D.; Guo, D.; Ho, T.-Y.; Harrison, A.P.; Xiao, J.; Tseng, C.-K.; Lu, L. DeepTarget: Gross tumor and clinical target volume segmentation in esophageal cancer radiotherapy. Med. Image Anal. 2021, 68, 101909. [Google Scholar] [CrossRef]
- Kushibar, K.; Valverde, S.; González-Villà, S.; Bernal, J.; Cabezas, M.; Oliver, A.; Lladó, X. Automated sub-cortical brain structure segmentation combining spatial and deep convolutional features. Med. Image Anal. 2018, 48, 177–186. [Google Scholar] [CrossRef]
- Khan, H.; Shah, P.M.; Shah, M.A.; Islam, S.U.; Rodrigues, J. Cascading handcrafted features and Convolutional Neural Network for IoT-enabled brain tumor segmentation. Comput. Commun. 2020, 153, 196–207. [Google Scholar] [CrossRef]
- Narotamo, H.; Sanches, J.M.; Silveira, M. Combining Deep Learning with Handcrafted Features for Cell Nuclei Segmentation. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2020, 2020, 1428–1431. [Google Scholar] [CrossRef]
- Huang, K.; Cheng, H.D.; Zhang, Y.; Zhang, B.; Xing, P.; Ning, C. Medical knowledge constrained semantic breast ultrasound image segmentation. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 1193–1198. [Google Scholar] [CrossRef]
- Painchaud, N.; Skandarani, Y.; Judge, T.; Bernard, O.; Lalande, A.; Jodoin, P.-M. Cardiac Segmentation With Strong Anatomical Guarantees. IEEE Trans. Med. Imaging 2020, 39, 3703–3713. [Google Scholar] [CrossRef]
- Painchaud, N.; Skandarani, Y.; Judge, T.; Bernard, O.; Lalande, A.; Jodoin, P.M. Cardiac MRI segmentation with strong anatomical guarantees. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 632–640. [Google Scholar] [CrossRef] [Green Version]
- Yue, Q.; Luo, X.; Ye, Q.; Xu, L.; Zhuang, X. Cardiac segmentation from LGE MRI using deep neural network incorporating shape and spatial priors. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 559–567. [Google Scholar] [CrossRef] [Green Version]
- Mirikharaji, Z.; Hamarneh, G. Star shape prior in fully convolutional networks for skin lesion segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2018; pp. 737–745. [Google Scholar] [CrossRef] [Green Version]
- Ravishankar, H.; Venkataramani, R.; Thiruvenkadam, S.; Sudhakar, P.; Vaidya, V. Learning and incorporating shape models for semantic segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2017; pp. 203–211. [Google Scholar] [CrossRef]
- Zheng, H.; Lin, L.; Hu, H.; Zhang, Q.; Chen, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Tong, R.; Wu, J. Semi-supervised segmentation of liver using adversarial learning with deep atlas prior. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 148–156. [Google Scholar] [CrossRef]
- Oktay, O.; Ferrante, E.; Kamnitsas, K.; Heinrich, M.P.; Bai, W.; Caballero, J.; Cook, S.A.; De Marvao, A.; Dawes, T.; O’Regan, D.P.; et al. Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation. IEEE Trans. Med. Imaging 2018, 37, 384–395. [Google Scholar] [CrossRef] [Green Version]
- Dalca, A.V.; Guttag, J.; Sabuncu, M.R. Anatomical priors in convolutional networks for unsupervised biomedical segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9290–9299. [Google Scholar] [CrossRef] [Green Version]
- He, Y.; Yang, G.; Chen, Y.; Kong, Y.; Wu, J.; Tang, L.; Zhu, X.; Dillenseger, J.L.; Shao, P.; Zhang, S.; et al. Dpa-densebiasnet: Semi-supervised 3d fine renal artery segmentation with dense biased network and deep priori anatomy. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 139–147. [Google Scholar] [CrossRef] [Green Version]
- Song, Y.; Zhu, L.; Lei, B.; Sheng, B.; Dou, Q.; Qin, J.; Choi, K.S. Shape Mask Generator: Learning to Refine Shape Priors for Segmenting Overlapping Cervical Cytoplasms. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2020; pp. 639–649. [Google Scholar] [CrossRef]
- Boutillon, A.; Borotikar, B.; Burdin, V.; Conze, P.H. Combining shape priors with conditional adversarial networks for improved scapula segmentation in MR images. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1164–1167. [Google Scholar] [CrossRef]
- Pham, D.D.; Dovletov, G.; Pauli, J. Liver Segmentation in CT with MRI Data: Zero-Shot Domain Adaptation by Contour Extraction and Shape Priors. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1538–1542. [Google Scholar] [CrossRef]
- Engin, M.; Lange, R.; Nemes, A.; Monajemi, S.; Mohammadzadeh, M.; Goh, C.K.; Tu, T.M.; Tan, B.Y.; Paliwal, P.; Yeo, L.L. Agan: An anatomy corrector conditional generative adversarial network. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2020; pp. 708–717. [Google Scholar] [CrossRef]
- Gao, Y.; Huang, R.; Yang, Y.; Zhang, J.; Shao, K.; Tao, C.; Chen, Y.; Metaxas, D.N.; Li, H.; Chen, M. FocusNetv2: Imbalanced large and small organ segmentation with adversarial shape constraint for head and neck CT images. Med. Image Anal. 2021, 67, 101831. [Google Scholar] [CrossRef]
- Yang, G.; Yu, S.; Dong, H.; Slabaugh, G.G.; Dragotti, P.L.; Ye, X.; Liu, F.; Arridge, S.R.; Keegan, J.; Guo, Y.; et al. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 1310–1321. [Google Scholar] [CrossRef] [Green Version]
- Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Content-Based Brain Tumor Retrieval for MR Images Using Transfer Learning. IEEE Access 2019, 7, 17809–17822. [Google Scholar] [CrossRef]
- Khatami, A.; Babaie, M.; Tizhoosh, H.; Khosravi, A.; Nguyen, T.; Nahavandi, S. A sequential search-space shrinking using CNN transfer learning and a Radon projection pool for medical image retrieval. Expert Syst. Appl. 2018, 100, 224–233. [Google Scholar] [CrossRef]
- Anavi, Y.; Kogan, I.; Gelbart, E.; Geva, O.; Greenspan, H. A comparative study for chest radiograph image retrieval using binary texture and deep learning classification. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2940–2943. [Google Scholar] [CrossRef]
- Anavi, Y.; Kogan, I.; Gelbart, E.; Geva, O.; Greenspan, H. Visualizing and enhancing a deep learning framework using patients age and gender for chest x-ray image retrieval. In Medical Imaging 2016: Computer-Aided Diagnosis; International Society for Optics and Photonics: Bellingham, WA, USA, 2016; Volume 9785, p. 978510. [Google Scholar] [CrossRef]
- Ahmad, J.; Sajjad, M.; Mehmood, I.; Baik, S.W. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs. PLoS ONE 2017, 12, e0181707. [Google Scholar] [CrossRef] [Green Version]
- Ursuleanu, T.F.; Luca, A.R.; Gheorghe, L.; Grigorovici, R.; Iancu, S.; Hlusneac, M.; Preda, C.; Grigorovici, A. The Use of Artificial Intelligence on Segmental Volumes, Constructed from MRI and CT Images, in the Diagnosis and Staging of Cervical Cancers and Thyroid Cancers: A Study Protocol for a Randomized Controlled Trial. J. Biomed. Sci. Eng. 2021, 14, 300–304. [Google Scholar] [CrossRef]
- Luca, A.; Ursuleanu, T.; Gheorghe, L.; Grigorovici, R.; Iancu, S.; Hlusneac, M.; Preda, C.; Grigorovici, A. The Use of Artificial Intelligence on Colposcopy Images, in the Diagnosis and Staging of Cervical Precancers: A Study Protocol for a Randomized Controlled Trial. J. Biomed. Sci. Eng. 2021, 14, 266–270. [Google Scholar] [CrossRef]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Adv. Neural Inf. Process. Syst. 2014, 3. [Google Scholar] [CrossRef]
- Ching, T.; Himmelstein, D.S.; Beaulieu-Jones, B.K.; Kalinin, A.A.; Do, B.T.; Way, G.P.; Ferrero, E.; Agapow, P.-M.; Zietz, M.; Hoffman, M.M.; et al. Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interface 2018, 15, 20170387. [Google Scholar] [CrossRef] [Green Version]
- Long, M.; Zhu, H.; Wang, J.; Jordan, M.I. Unsupervised domain adaptation with residual transfer networks. arXiv 2016, arXiv:1602.04433v2. [Google Scholar]
- Tzeng, E.; Hoffman, J.; Saenko, K.; Darrell, T. Adversarial discriminative domain adaptation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7167–7176. [Google Scholar] [CrossRef] [Green Version]
- Luo, Y.; Zheng, L.; Guan, T.; Yu, J.; Yang, Y. Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 2507–2516. [Google Scholar] [CrossRef] [Green Version]
- Tsai, Y.H.; Hung, W.C.; Schulter, S.; Sohn, K.; Yang, M.H.; Chandraker, M. Learning to adapt structured output space for semantic segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7472–7481. [Google Scholar] [CrossRef] [Green Version]
- Jiang, J.; Hu, Y.-C.; Tyagi, N.; Zhang, P.; Rimner, A.; Mageras, G.S.; Deasy, J.O.; Veeraraghavan, H. Tumor-Aware, Adversarial Domain Adaptation from CT to MRI for Lung Cancer Segmentation. Med. Image Comput. Comput. Assist Interv. 2018, 11071, 777–785. [Google Scholar] [CrossRef]
- Liu, J.; Li, W.; Zhao, N.; Cao, K.; Yin, Y.; Song, Q.; Chen, H.; Gong, X. Integrate domain knowledge in training CNN for ultrasonography breast cancer diagnosis. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2018; pp. 868–875. [Google Scholar] [CrossRef]
- Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge graph and text jointly embedding. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1591–1601. [Google Scholar] [CrossRef] [Green Version]
- Alzubaidi, L.; Fadhel, M.A.; Al-Shamma, O.; Zhang, J.; Santamaría, J.; Duan, Y.; Oleiwi, S.R. Towards a Better Understanding of Transfer Learning for Medical Imaging: A Case Study. Appl. Sci. 2020, 10, 4523. [Google Scholar] [CrossRef]
- Alzubaidi, L.; Al-Amidie, M.; Al-Asadi, A.; Humaidi, A.; Al-Shamma, O.; Fadhel, M.; Zhang, J.; Santamaría, J.; Duan, Y. Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data. Cancers 2021, 13, 1590. [Google Scholar] [CrossRef] [PubMed]
- Wistuba, M.; Rawat, A.; Pedapati, T. A survey on neural architecture search. arXiv 2019, arXiv:1905.01392v2. [Google Scholar]
- Guo, D.; Jin, D.; Zhu, Z.; Ho, T.Y.; Harrison, A.P.; Chao, C.H.; Xiao, J.; Lu, L. Organ at risk segmentation for head and neck cancer using stratified learning and neural architecture search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 4223–4232. [Google Scholar] [CrossRef]
| Task | Contribution | Model |
|---|---|---|
| Classification | Benefit from unlabelled data for lung tumour stratification | DBN [40] |
| | Introduction of a transfer learning approach in rectal cancer prediction | CNN [41] |
| | Identification of bladder tumour sub-types from histopathological images | ResNet [42] |
| | Improvement in breast tumour estimation by considering a large set of risk factors | CNN [43] |
| | Estimation of the cancer grade | CNN [44] |
| | Estimation of the cancer type | CNN [45,46], ResNet [47] |
| | Limitation of overfitting | GAN [48], ResNet [49] |
| | Analysis of the particular characteristics of the heart by using echocardiograms | ResNet [50] |
| | Improvement in bone image quality | U-Net [51] |
| | Analysis of the impact of gender on skeletal muscles | CNN [52] |
| | Automatic estimation of brain disease risk | AlexNet [53], CNN [54] |
| | Improvement of accuracy and efficiency in COP diseases | ResNet [55], VGGNet + CNN [56], DBN [57] |
| | Analysis of interstitial lung diseases | CNN [58] |
| | Estimation of the normal levels of the pancreas | CNN [59,60] |
| | Improvement in image quality | CNN [61], CNN + LSTM [62] |
| | Improvement in accuracy in abdominal ultrasounds | CNN [63] |
| Detection | Optimal localization of lung cancer sub-types | CNN [64] |
| | Low-cost object detection for malaria | YOLO [65] |
| | Improvement in image accuracy in neoplasia analysis | ResNet [66] |
| Segmentation | Analysis of colour contrast and parameter variability issues in pancreatic tumour | U-Net [67] |
| | Impact of dimension variations on DL model performance in thyroid melanomas | U-Net [68] |
| | Limitation of the overfitting problem in bone cancer | CNN [69], GAN + U-Net [70] |
| | Improvement in image accuracy in lung and prostate cancer | U-Net [71,72], GAN [73] |
| | DL model for multi-step integration and registration error reduction in atrial fibrillation analysis | CNN + LSTM [74] |
| | Accuracy in the analysis of irregular pelvic hematoma images | U-Net [75] |
| | Improvement in aortic disease analysis with the introduction of new accuracy measures | U-Net [76] |
| | Introduction of the transfer learning approach in atrium study | U-Net [49] |
| | Analysis of the impact of the image quality in osteoarthritis | U-Net [77], RCNN [78] |
| | Introduction of transfer learning and attention mechanism in the study of the knees | VGGNet + U-Net [79] |
| | Improvement in image accuracy of the cartilage | U-Net [80], HNN [15], U-Net + GAN [81], RCNN |
| | Combination of the region-based approach with U-Net for bone diseases | RCC + U-Net [82] |
| | Limitation of overfitting in white matter analysis | GAN [83] |
| | Colour quality improvement in orbital analysis | U-Net [84] |
| | Segmentation of lung lobes using different types of datasets | U-Net [85] |
| | Analysis of image effects in neoplasia and catheter detection | U-Net [66], RNN [86] |
| Reconstruction | Improvement in the signal-to-noise ratio; multi-data integration | CNN [87] |
| | Improvement in image quality at high levels in the study of coronary diseases | CNN [88] |
| | Application of CNNs to computed tomography for chest digital images | CNN [89] |
| | Introduction of a DAE as an a priori model for noise density in magnetic resonance | DAE [90] |
| | Analysis of perturbation effects | CNN [91] |
| | Introduction of transfer learning into magnetic resonance | CNN [92] |
| | Limitation of overfitting | CNN + GAN [93] |
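U-Net and its variants dominate the segmentation rows of the table above. As a point of reference, the following minimal PyTorch sketch (our own illustrative code, not any of the cited implementations) shows the encoder-decoder structure with skip connections that these models share; the channel counts and depth are arbitrary assumptions.

```python
# Minimal U-Net-style encoder-decoder sketch (illustrative, not a cited architecture).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)      # 32 (skip) + 32 (upsampled)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)      # 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # per-pixel class logits

# Example: a 1-channel 128x128 scan produces a 2-class logit map of the same size.
logits = TinyUNet()(torch.randn(1, 1, 128, 128))
```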
| Modality | Type of Data | Sample | Objective | Model Design | Results | Therapeutic Area | Paper |
|---|---|---|---|---|---|---|---|
| Mammography | Mammography images | 45,000 images | Diagnosis of breast cancer | CNN | AUC of 0.90 | Oncology | [40] |
| | Mammography | 667 benign, 333 malignant | Diagnosis of early breast cancer | Stacked AE | AUC of 0.89 | Oncology | [127] |
| | Mammography images, biopsy result of the lesions | 600 images with biopsy | Differentiation of benign lesions from malignant masses | CNN | AUC of 0.80 | Oncology | [125] |
| | Mammography images | 840 mammogram images | Evaluate the risk of coronary disease using a breast arterial calcification classifier | CNN | Misclassified cases of 6% | Cardiovascular | [71] |
| | Digital mammograms | 661 digital images | Estimation of breast percentage density | CNN | AUC of 0.981 | Oncology | [80] |
| | Mammography images | Mammograms from 604 women | Segment areas in the breast | CNN | AUC of 0.66 | Oncology | [49] |
| | Digital mammogram images | 29,107 mammogram images | Probability of cancer | CNN | AUC of 0.90 | Oncology | [87] |
| Ultrasound | 2D images of the heart | 400 images with five different heart diseases and 80 normal echocardiogram images | Segment left-ventricle images with greater precision | Deep belief networks | Hammoude distance of 0.80 | Cardiovascular | [77] |
| | Ultrasound images | 306 malignant tumor images, 136 benign tumor images | Detect and differentiate breast lesions with ultrasound | CNN, AlexNet, U-Net, LeNet | 0.91 and 0.89 depending on the data | Oncology | [65] |
| | Transesophageal ultrasound volumes and 3D geometry of the aortic valve images | 3795 volumes of the aortic valve from 150 patients | Diagnosis, stratification and treatment planning for patients with aortic valve pathologies | Marginal space deep learning | Position error of 1.66 mm and mean corner distance error of 3.29 mm | Cardiovascular | [45] |
| Radiography | Radiography images | 7821 subjects | CAD for diagnosis of knee osteoarthritis | Deep Siamese | AUC of 0.66 | Traumatology | [141] |
| | Radiography images | 420 radiography images | Osteoarthritis diagnosis | CNN | AUC of 0.92 | Traumatology | [81] |
| | Radiographs | 112,120 frontal-view chest radiographs; 17,202 frontal-view chest radiographs with a binary class label for normal vs. abnormal | Abnormality detection in chest radiographs | CNN | AUROCs of 0.960 and 0.951; AUROCs of 0.900 and 0.893 | Radiology | [78] |
| Slide image | Pathology cancer images (hematoxylin and eosin) | 5202 images of tumor-infiltrating lymphocytes | Study of tumor tissue samples; localize areas of necrosis and lymphocyte infiltration | Two CNNs | AUC of 0.95 | Oncology | [118] |
| | Giemsa-stained thin blood smear slide cell images | 27,558 cell images | Screening system for malaria | CNN | AUC of 0.94 | Infectious Disease | [121] |
| | Microscopy image patches | 249 histologic images | Classification of breast cancer histology microscopy images | CNN and SVM | AUC of 0.77–0.83 for carcinoma/non-carcinoma classification | Oncology | [134] |
| | Microscopy histopathological images | 7909 images of breast cancers | CAD for breast cancer histopathological diagnosis | CNN | AUC of 0.93 | Oncology | [135] |
| | Microscope images | 200 female subjects aged from 22 to 64 | Cervical cancer screening | Multiscale CNN | Mean and standard deviation of 0.95 and 0.18 | Oncology | [88] |
| | Whole-slide prostate histopathology images | 2663 prostate histopathology images | Outline the malignant regions in whole-slide histopathology images | CNN | Dice coefficient of 0.72 | Oncology | [78] |
| Ocular fundus | 2D images | 243 retina images | Diagnose retinal lesions | CNN | Precision-recall curve of 0.86 for microaneurysms and 0.64 for exudates | Ophthalmology | [120] |
| | 2D images | 85,000 images | Diabetic retinopathy detection and stage classification | Bayesian CNN | AUC of 0.99 | Ophthalmology | [42] |
| | Images | 6679 images from Kaggle's Diabetic Retinopathy Detection | Detect retinal hemorrhages | CNN | AUC of 0.894 and 0.972 | Ophthalmology | [47] |
| | Images | 168 images with glaucoma and 428 controls | Detect and evaluate glaucoma | CNN: ResNet and U-Net | AUC of 0.91 and 0.84, respectively | Ophthalmology | [128] |
| | Images | 90,000 images with their diagnoses | Predict the evolution of diabetic retinopathy | CNN | AUC of 0.95 | Ophthalmology | [51] |
| | Images | 7000 colour fundus images | Image quality assessment in diabetic retinopathy | CNN | Accuracy of 100% | Ophthalmology | [52] |
| | AREDS (Age-Related Eye Disease Study) images | 130,000 fundus images | Diagnosis of age-related macular degeneration | CNN | Sensitivity of 94.97% and specificity of 98.32% | Ophthalmology | [156] |
| | Fundus images | 219,302 images from normal participants without hypertension, diabetes mellitus (DM) or any smoking history | Predict age and sex from retinal fundus images | CNN | AUC of 0.96 | Ophthalmology | [157] |
| Dermoscopy | Images | 350 images of melanomas and 374 benign nevi | Acral lentiginous melanoma diagnosis | CNN | AUC of over 0.80 | Oncology | [129] |
| | Clinical images | 49,567 images | Recognize nail onychomycosis lesions | Region-based CNN | AUC of 0.98, 0.95, 0.93 and 0.82 | Dermatology | [130] |
| | Myocardial perfusion images | 1638 patients | Obstructive coronary disease prediction | CNN | Sensitivity of 0.82 and 0.69 for the two use cases | Cardiovascular | [91] |
| Arterial labeling | Arterial spin labeling (ASL) perfusion images | 140 subjects | Monitoring cerebral arterial perfusion | CNN | AUC of 0.94 | Cardiovascular | [44] |
| Frames from endoscopy | Frames from endoscopy videos | 205 normal and 360 abnormal images | Detection and localization of gastrointestinal anomalies | CNN | AUC of over 0.80 | Gastroenterology | [72] |
| Tracking dataset, multi-instrument endo-visceral surgery and multi-instrument in vivo | Single-instrument retinal microsurgery instrument tracking dataset, multi-instrument endo-visceral surgery and multi-instrument in vivo images | 940 frames of the training data (4479 frames) and 910 frames of the test data (4495 frames) | Detect the two-dimensional position of different medical instruments in endoscopy and microscopy surgery | Convolutional detection regression network | AUC of 0.94 | Robotic Surgery | [76] |
| CT/PET-CT/SPECT | Nuclear MRIs 3D | 124 double echography | Diagnose possible soft tissue injuries | Deep Resolve, a 3D-CNN model | MSE of 0.008 | Traumatology | [53] |
| | Retinal 3D images obtained by optical coherence tomography | 269 patients with AMD, 115 control patients | Age-related macular degeneration diagnosis | CNN | AUC of 0 | Ophthalmology | [158] |
| | 123I-fluoropropyl carbomethoxyiodophenyl nortropane single-photon emission computed tomography (FP-CIT SPECT) 2D images | 431 patient cases | Automatic interpretation system in Parkinson's disease | CNN | AUC of 0.96 | Neurology-Psychiatry | [84] |
| | Abdominal CT 3D images | 231 abdominal CTs | Classify tomography and evaluate the malignancy degree in gastrointestinal stromal tumors (GISTs) | Hybrid system between convolutional networks and radiomics | AUC of 0.882 | Oncology | [83] |
| | CT image patches 2D | 14,696 images | Diagnose interstitial lung disease | CNN | AUC of 0.85 | Pneumology | [46] |
| | 3D MRI and PET | 93 Alzheimer's disease, 204 MCI (mild cognitive impairment) converters and normal control subjects | Diagnose early Alzheimer's disease stages | Multimodal DBM | AUC of 0.75–0.95 | Neurology-Psychiatry | [41] |
| MRI | Diffusion-weighted imaging maps using MRI | 222 patients, 187 treated with rtPA (recombinant tissue-type plasminogen activator) | Decide acute ischemic stroke patients' treatment through lesion volume prediction | CNN | AUC of 0.88 | Neurology-Psychiatry | [122] |
| | Magnetic resonance images | 474 patients with schizophrenia and 607 healthy subjects | Schizophrenia detection | Deep discriminant autoencoder network | Accuracy over 0.8 | Neurology-Psychiatry | [124] |
| | Gadoxetic acid-enhanced 2D MRI | 144,180 images from 634 patients | Staging liver fibrosis through MR | CNN | AUC values of 0.84, 0.84 and 0.85 for each stage | Gastroenterology | [64] |
| | Resting-state functional magnetic resonance imaging (rs-fMRI), T1 structural cerebral images and phenotypic information | 505 individuals with autism and 520 matched typical controls | Identify different autism spectrum disorders | Denoising AE | Accuracy of 0.70 | Neurology-Psychiatry | [126] |
| | 3D MRI and PET | 93 Alzheimer's disease, 204 MCI (mild cognitive impairment) converters and normal control subjects | CAD for early Alzheimer's disease stages | Multimodal DBM | Accuracy of 0.95, 0.85 and 0.75 for the three use cases | Neurology-Psychiatry | [41] |
| CT/PET-CT/SPECT | CT images, MRI images and PET images | 6776 images | Classify medical diagnostic images according to the modality in which they were produced and classify illustrations according to their production attributes | CNN and a synergic signal system | AUC of 0.86 | Various | [159] |
| | CT images 2D | 63,890 patients with cancer and 171,345 healthy | Discriminate lung cancer lesions into adenocarcinoma, squamous and small cell carcinoma | CNN | Log-loss error of 0.66 with a sensitivity of 0.87 | Oncology | [160] |
| | CT 2D images | 3059 images from several parts of the human body | Speed up CT image collection and rebuild the data | DenseNet and CNN | RMSE of 0.00048 | Various | [142] |
| | CT images 3D | 6960 lung nodule regions, 3480 of which were positive samples and the rest negative samples (non-nodule) | Diagnose lung cancer in low-dosage CT | Eye-tracking sparse attentional model and CNN | Accuracy of 0.97 | Oncology | [90] |
| | CT images 2D and text (reports) | 9000 training and 1000 testing images | Process text from CT reports in order to classify their respective images | CNN | AUC of 0.58, 0.70–0.95 | Various | [92] |
| | Computed tomography (CT) | Three datasets: 224,316, 112,120 and 15,783 images | Binary classification of posteroanterior chest X-rays | CNN | Accuracy of 92% | Radiology | [161] |
| MRI | Clinical characteristics and MRI 3D | 135 patients with short-, medium- and long-term survival | Predict the survival of patients with amyotrophic lateral sclerosis | CNN | Accuracy of 0.84 | Neurology-Psychiatry | [67] |
| | Optical coherence tomography images | 52,690 AMD patients' images and 48,312 controls | Differentiate age-related macular degeneration lesions in optical coherence tomography | Modification of VGG16 CNN | AUC of 0.92, 0.93 and 0.97 for the different use cases | Ophthalmology | [68] |
| | Lung computed axial tomography 2D images and breast ultrasound lesions | 520 breast sonograms from 520 patients (275 benign and 245 malignant lesions) and lung CT image data from 1010 patients (700 malignant and 700 benign nodules) | CAD system to classify breast ultrasound lesions and lung CT nodules | Stacked denoising AE | AUC of 0.94 | Oncology | [58] |
| | MRI 2D | 444 images from 195 patients with prostate cancer | Prevent errors in diagnosing prostate cancer | CNN | AUC of 0.94 | Oncology | [88] |
| | MRI 2D | MICCAI 2009 left ventricle segmentation challenge database | Determine the limits between the endocardium and epicardium of the left ventricle | RNN with automatic segmentation techniques | AUC of 1.0 in the best case | Cardiovascular | [132] |
| MRI | CT images, MRI images and PET images | 6776 images for training and 4166 for testing | Classify medical diagnostic images according to the modality in which they were produced and classify illustrations according to their production attributes | CNN and a synergic signal system | AUC of 0.86 | Various | [159] |
| | Functional MRI | 68 subjects performing 7 activities and a state of rest | Analyze cerebral cognitive functions | 3D CNN, resting-state networks | AUC of 0.94 | Neurology-Psychiatry | [140] |
| | Liver MRIs | 522 liver MRI cases with and without contrast for known or suspected liver cirrhosis or focal liver lesion | Screening system for undiagnosed hepatic magnetic resonance images | CNN | Reduces negative predictive value and leads to greater precision | Gastroenterology | [50] |
| | MRI images | 1064 brain images of autism patients and healthy controls; MRI data from 110 multiple sclerosis patients | Evaluate the quality of multicenter structural brain MRI images | CNN | AUC of 0.90 and 0.71 | Radiology | [55] |
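Most results in the table above are reported as AUC values. For readers less familiar with the metric, the snippet below shows how such a value could be computed from predicted probabilities with scikit-learn; the labels and scores are toy values, not data from any cited study.

```python
# Hedged example: computing an AUC like those reported in the table (toy data).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                    # 1 = malignant, 0 = benign (illustrative labels)
y_score = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2]   # model-predicted probabilities
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```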
| Dataset Images | Methods of Incorporating Information | Application in Medicine |
|---|---|---|
| **Data doctors focus on** | | |
| Training pattern, high-level medical data, curriculum learning | Training model: images with increasing complexity | |
| Diagnostic pattern, low-level medical data, areas of images, characteristics of diseases | General models of diagnosis of doctors | |
| Area of interest, specific data identified by doctors, attention maps | "Attention maps" model of doctors | |
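The "curriculum learning" entry in the table above schedules training images from easy to hard, mimicking the way clinicians are trained. A minimal sketch of such a scheduler follows; the difficulty scores are assumed to come from an external source (for example annotator agreement), and the staging rule is an illustrative assumption rather than any cited method.

```python
# Minimal curriculum-learning sketch: grow the training pool from easy to hard samples.
import numpy as np

def curriculum_stages(images, difficulty, n_stages=3):
    """Yield progressively larger training subsets, easiest samples first."""
    order = np.argsort(difficulty)                 # sort indices from easy to hard
    for stage in range(1, n_stages + 1):
        cutoff = int(len(order) * stage / n_stages)
        yield [images[i] for i in order[:cutoff]]  # stage 1: easiest third, stage 2: two thirds, ...

# Usage with hypothetical difficulty scores (low disagreement between annotators = easy).
images = ["img_a", "img_b", "img_c", "img_d", "img_e", "img_f"]
difficulty = [0.9, 0.1, 0.5, 0.2, 0.8, 0.4]
for stage, subset in enumerate(curriculum_stages(images, difficulty), start=1):
    print(f"stage {stage}: train on {subset}")
```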
| Dataset Images | Methods of Incorporating Information | Application in Medicine |
|---|---|---|
| **Attention characteristics** | | |
| Hand-made characteristics | Characteristics level fusion + incorporation level fusion | |
| | Incorporation level fusion | |
| | Characteristics level fusion | |
| | Incorporation patch characteristics | |
| | MV-KBC | |
| | DSc-GAN | |
| | As labels of CNNs | |
| **Other types of information** | | |
| Additional category label, BI-RADS label (malignant/benign) | | |
| Additional clinical diagnosis reports (abstract descriptions) | | |
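Several of the hand-crafted-feature entries above rely on feature-level fusion, that is, concatenating hand-crafted descriptors with learned CNN embeddings before a shared classifier. The sketch below illustrates this pattern under generic assumptions (toy dimensions, a stand-in CNN); it is not the MV-KBC or DSc-GAN design.

```python
# Feature-level fusion sketch: concatenate handcrafted descriptors with CNN features.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, deep_dim=512, handcrafted_dim=32, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(            # stand-in deep feature extractor
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(8 * 8 * 8, deep_dim), nn.ReLU(),
        )
        self.head = nn.Linear(deep_dim + handcrafted_dim, n_classes)

    def forward(self, image, handcrafted):
        fused = torch.cat([self.cnn(image), handcrafted], dim=1)  # the fusion step
        return self.head(fused)

# Toy call: one grayscale patch plus a 32-dim handcrafted descriptor (e.g., texture statistics).
model = FusionClassifier()
logits = model(torch.randn(1, 1, 64, 64), torch.randn(1, 32))
```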
| Dataset Images | Methods of Incorporating Information | Applications in Medicine |
|---|---|---|
| **Natural Datasets Images** | | |
| Natural images: ImageNet 1 and COCO 2 | Transfer learning: fixed feature extractor, initialization | |
| **Medical Datasets Images** | | |
| Medical images: PET CT, Mammography, X-ray, Retina-Net | Learning with more tasks (multi-task) | |
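The "transfer learning, fixed feature extractor" row above can be illustrated in a few lines of PyTorch: a backbone pretrained on ImageNet is frozen and only a new classification head is trained. The example assumes a recent torchvision release and a ResNet-18 backbone; the cited works use a variety of backbones and fine-tuning schedules.

```python
# Transfer learning in the "fixed feature extractor" style (assumes torchvision >= 0.13).
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in backbone.parameters():
    param.requires_grad = False                           # freeze the ImageNet features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)       # new trainable head, e.g., benign vs. malignant
```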
| Dataset Images | Methods of Incorporating Information | Applications in Medicine |
|---|---|---|
| **Data doctors focus on** | | |
| Training pattern, high-level medical data, curriculum learning | Training model: images with increasing complexity | |
| Diagnostic pattern, low-level medical data, areas of images, characteristics of diseases | General models of diagnosis of doctors | |
| Area of interest, specific data identified by doctors, "attention maps" | Models that explicitly incorporate "attention maps" | |
| Hand-crafted features | Attention features | |
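The "attention maps" and "attention features" entries above amount to re-weighting feature maps by a spatial map that marks where clinicians look. A minimal gating sketch is shown below; the attention map is assumed to be given (for example derived from annotated regions of interest), and the operation is a generic pattern rather than any cited detector.

```python
# Sketch of gating CNN feature maps with a radiologist-style attention map.
import torch

def apply_attention(features, attention_map):
    """features: (B, C, H, W); attention_map: (B, 1, H, W) with values in [0, 1]."""
    return features * attention_map        # broadcast the spatial weights over all channels

feats = torch.randn(2, 64, 32, 32)         # intermediate CNN features
attn = torch.rand(2, 1, 32, 32)            # e.g., derived from annotated lesion masks
gated = apply_attention(feats, attn)
```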
| Dataset Images | Methods of Incorporating Information | Applications in Medicine |
|---|---|---|
| **Natural Datasets Images** | | |
| Natural images: ImageNet 1, COCO 2, Sports-1M dataset (1.1 million sports videos), PASCAL VOC dataset | Transfer learning: fixed feature extractor, initialization | |
| **Medical Datasets Images** | | |
| Medical images: MRI data, CT angiography, 3DSeg-8 dataset | Learning with more tasks (multi-task) | |
| **Data doctors focus on** | | |
| Deep learning: FCN, U-Net, GAN | | |
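The "learning with more tasks (multi-task)" rows above share one encoder between several output heads so that auxiliary labels regularize the common features. The sketch below shows this shared-encoder pattern with a segmentation head and a classification head; the architecture and dimensions are illustrative assumptions, not a cited network.

```python
# Multi-task sketch: one shared encoder, two task-specific heads.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)              # per-pixel lesion mask logits
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, n_classes))  # image-level label

    def forward(self, x):
        z = self.encoder(x)                              # features shared by both tasks
        return self.seg_head(z), self.cls_head(z)

seg_logits, cls_logits = MultiTaskNet()(torch.randn(1, 1, 64, 64))
```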
| Dataset Images | Methods of Incorporating Information | Applications in Medicine |
|---|---|---|
| **Data doctors focus on** | | |
| Training pattern, high-level medical data, curriculum learning | Training model: images with increasing complexity; self-paced learning (SPL); SPL + active learning | |
| Diagnostic pattern: LIDC-IDRI dataset, BraTS 2018 dataset | General models of diagnosis of doctors | |
| Area of interest: BRATS2015 dataset (ImageNet, video datasets used for 3D image segmentation) | Fusion at the feature level + concatenate | |
| Specific characteristics (shape, location, topology) | In the post-processing stage | |
| | In the loss function | |
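The "in the loss function" entry above covers methods that penalize predictions violating anatomical or shape priors. The sketch below adds a simple atlas-based penalty to a standard segmentation loss; the prior map, weighting and penalty form are assumptions for illustration and do not reproduce the exact formulations of the cited papers.

```python
# Sketch of a shape/anatomical prior added to a segmentation loss (illustrative only).
import torch
import torch.nn.functional as F

def loss_with_shape_prior(logits, target, prior, weight=0.1):
    """logits, target: (B, 1, H, W); prior: (B, 1, H, W) atlas probability of the organ."""
    prob = torch.sigmoid(logits)
    data_term = F.binary_cross_entropy_with_logits(logits, target)
    prior_term = (prob * (1.0 - prior)).mean()   # mass predicted where the atlas says "unlikely"
    return data_term + weight * prior_term

loss = loss_with_shape_prior(torch.randn(1, 1, 32, 32),
                             torch.randint(0, 2, (1, 1, 32, 32)).float(),
                             torch.rand(1, 1, 32, 32))
```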
| Dataset Images | Methods of Incorporating Information | Applications in Medicine |
|---|---|---|
| **Data doctors focus on** | | |
| Series of measurements | | Reconstruction of medical image |
| Content-based image retrieval (CBIR): external medical datasets and natural images | | Recovery of medical image |
| Templates from the radiologist's report, visual characteristics of medical images, IU-RR datasets, text templates | | Generating medical reports |
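The content-based image retrieval (CBIR) row above retrieves archive images whose deep features are closest to those of the query. The sketch below implements this as a cosine-similarity search over precomputed embeddings; the feature dimension and random data are placeholders, and the cited systems differ in feature extraction and indexing.

```python
# CBIR sketch: nearest-neighbour retrieval over precomputed deep features (toy data).
import numpy as np

def retrieve(query_feat, database_feats, k=3):
    """Return indices of the k database images most similar to the query."""
    q = query_feat / np.linalg.norm(query_feat)
    db = database_feats / np.linalg.norm(database_feats, axis=1, keepdims=True)
    sims = db @ q                          # cosine similarity to every stored image
    return np.argsort(-sims)[:k]

database = np.random.rand(100, 512)        # e.g., CNN embeddings of archived radiographs
query = np.random.rand(512)                # embedding of the query image
print(retrieve(query, database))
```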