Editorial

Image Processing and Analysis for Preclinical and Clinical Applications

by Alessandro Stefano 1,*, Federica Vernuccio 2 and Albert Comelli 3
1 Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
2 Department of Radiology, University Hospital of Padova, Via Nicolò Giustiniani 2, 35128 Padova, Italy
3 Ri.MED Foundation, Via Bandiera 11, 90133 Palermo, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(15), 7513; https://doi.org/10.3390/app12157513
Submission received: 19 July 2022 / Accepted: 25 July 2022 / Published: 26 July 2022
Preclinical and clinical imaging aims to characterize and measure biological processes and diseases in animals [1] and humans [2]. In recent years, there has been growing interest in the quantitative analysis of clinical images using techniques such as positron emission tomography (PET) [3], computerized tomography (CT) [4], and magnetic resonance imaging (MRI) [5], mainly applied to texture analysis and radiomics. Various image processing and analysis algorithms based on pattern recognition, artificial intelligence, and computer graphics methods have been proposed to extract features from biomedical images. These quantitative approaches are expected to have a positive clinical impact, enabling images to be analyzed quantitatively in order to reveal biological processes and diseases and to predict response to treatment.
This Special Issue presents a collection of high-quality studies covering state-of-the-art and innovative approaches to image processing and analysis across a variety of imaging modalities, as well as the expected clinical applicability of these approaches for personalized, patient-tailored medicine.
The topics/keywords covered by this Special Issue include the following:
  • In vivo imaging;
  • Therapy response prediction;
  • Medical diagnosis support systems;
  • Detection, segmentation, and classification of tissues;
  • Biomedical image analysis and processing;
  • Personalized medicine;
  • Artificial intelligence;
  • Texture analysis;
  • Radiomics.
In response to the call for papers, nineteen papers were submitted to this Special Issue, of which fourteen were accepted for publication. These papers address several research challenges related to image processing and analysis in both preclinical and clinical applications.
Among the published research papers, five of them focus on segmentation and detection applications, including prostate gland segmentation [6,7], retroperitoneal sarcoma segmentation [8], basal cell carcinoma detection [9], and fracture detection in patients with maxillofacial trauma [10].
In one of these papers, the authors estimated prostate volume using ultrasound imaging, which offers many advantages such as portability, low cost, lack of ionizing radiation, and suitability for real-time operation [6]. Since experts usually consider automatic end-to-end volume-estimation procedures to be non-transparent and uninterpretable systems, the authors proposed a system that directly estimates the diameter parameters of the standard ellipsoid formula to produce the prostate volume in a dataset of 305 patients. The proposed system detects four diameter endpoints from the transverse images and two diameter endpoints from the sagittal images, as defined by the classical procedure. These endpoints are estimated using a new image-patch voting method to address characteristic problems of ultrasound images. Furthermore, the dataset included MRI images for 75 of the 305 patients. The results showed optimal performance, confirming that this system can be used in clinical practice.
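The classical procedure referenced above reduces volume estimation to three orthogonal diameters combined through the standard (prolate) ellipsoid formula. As a minimal illustrative sketch (not the authors' image-patch voting system, and with made-up diameter values), the volume computation is:

```python
import math

def ellipsoid_volume_ml(width_cm: float, height_cm: float, length_cm: float) -> float:
    """Prostate volume (mL) from three orthogonal diameters using the
    standard ellipsoid formula V = (pi / 6) * W * H * L (~0.52 * W * H * L)."""
    return math.pi / 6.0 * width_cm * height_cm * length_cm

# Example: W and H measured on the transverse view, L on the sagittal view.
print(f"{ellipsoid_volume_ml(4.2, 3.1, 3.8):.1f} mL")
```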
Another prostate gland segmentation method, based on T2-weighted MRI, was proposed by Comelli et al. [7]. The authors presented the efficient neural network (ENet) to tackle fully automated, real-time, 3D delineation of the prostate. ENet is mainly applied in self-driving cars to compensate for limited hardware availability while still achieving accurate segmentation. The authors applied this network to a limited set of 85 manual prostate segmentations using the k-fold validation strategy and the Tversky loss function [11], and compared the results with UNet and ERFNet (efficient residual factorized ConvNet). The results showed that ENet and UNet were more accurate than ERFNet, with ENet being much faster than UNet. Specifically, ENet obtained a Dice similarity coefficient of 90.89% and a segmentation time of about 6 s using central processing unit (CPU) hardware, simulating real clinical conditions where a graphics processing unit (GPU) is not always available.
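For readers unfamiliar with the metrics mentioned above, the following NumPy sketch shows the Dice similarity coefficient and the Tversky loss, which generalizes Dice by weighting false positives and false negatives with parameters alpha and beta; the weights shown are illustrative rather than those used in [7].

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def tversky_loss(pred: np.ndarray, target: np.ndarray,
                 alpha: float = 0.3, beta: float = 0.7, eps: float = 1e-7) -> float:
    """Tversky loss: alpha weights false positives, beta false negatives;
    alpha = beta = 0.5 reduces it to the Dice loss."""
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, np.logical_not(target)).sum()
    fn = np.logical_and(np.logical_not(pred), target).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```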
In a similar study, Salvaggio et al. [8] used ENet and ERFNet for the automatic segmentation of retroperitoneal sarcoma (RPS) in 94 CT examinations. The volume estimation of RPS is often difficult due to its large dimensions and irregular shape; thus, it often requires manual segmentation, which is time-consuming and operator-dependent. For this reason, the authors assessed whether significant differences existed between manual segmentation performed by two radiologists and automatic segmentation based on ENet and ERFNet, using analysis of variance (ANOVA). A set of performance indicators for the shape comparison was calculated, namely sensitivity, positive predictive value, Dice similarity coefficient, volume overlap error, and volumetric difference. No significant differences were found between the RPS volumes obtained using manual segmentation and the deep learning methods. Furthermore, all performance indicators were optimal for both ENet and ERFNet. Finally, ENet took around 15 s per segmentation versus 13 s for ERFNet using a GPU; on a CPU, ENet took around 2 min versus 1 min for ERFNet. The manual approach required approximately one hour per segmentation. In conclusion, fully automatic deep learning networks proved to be reliable methods for RPS volume assessment.
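The shape-comparison indicators listed above can all be derived from the voxel-wise overlap of two binary masks. The sketch below uses one common set of definitions; the exact definitions in [8] may differ slightly.

```python
import numpy as np

def overlap_indicators(auto: np.ndarray, manual: np.ndarray) -> dict:
    """Overlap metrics between an automatic and a manual binary segmentation mask."""
    tp = np.logical_and(auto, manual).sum()
    fp = np.logical_and(auto, np.logical_not(manual)).sum()
    fn = np.logical_and(np.logical_not(auto), manual).sum()
    union = np.logical_or(auto, manual).sum()
    return {
        "sensitivity": tp / (tp + fn),
        "positive_predictive_value": tp / (tp + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "volume_overlap_error": 1 - tp / union,                      # 1 - Jaccard index
        "volumetric_difference": (auto.sum() - manual.sum()) / manual.sum(),
    }
```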
Vélez et al. [9] proposed a tool for the detection of basal cell carcinoma (BCC) to support prioritization in tele-dermatology consultations. BCC is the most frequent skin cancer, and its increasing incidence is producing a high overload in dermatology services. The authors analyzed whether pre-segmentation of the lesion improved its classification and then evaluated three deep neural networks to distinguish BCC from nevus and from other skin lesions. The best segmentation results were obtained with SegNet, and accuracies of 98% and 95% were achieved for distinguishing BCC from nevus and from other skin lesions, respectively. This method outperformed the winner of the International Skin Imaging Collaboration (ISIC) 2019 challenge. Furthermore, the authors concluded that when deep neural networks are used for classification, pre-segmentation of the lesion does not improve the classification results.
Finally, a novel maxillofacial fracture detection system (MFDS), based on convolutional neural networks and transfer learning, was proposed by Amodeo et al. [10] to detect traumatic fractures in patients. A convolutional neural network pre-trained on non-medical images was re-trained and fine-tuned using 148 CT images to produce a model that classifies future CTs as fracture or no fracture. The validation and test datasets each comprised 30 patients: 5 without fractures and 25 with fractures. The results showed an accuracy of 80% in classifying maxillofacial fractures. Consequently, the proposed model can be used as a care support, reducing the risk of human error, preventing patient harm by minimizing diagnostic delays, and reducing the unnecessary burden of hospitalization.
Among the other research papers, three of them focus on radiomics applications, including restaging in metastatic colorectal cancer [12], evaluating the robustness of PET radiomics features after MRI co-registration [13], and predicting pathologic complete response after neoadjuvant chemoradiation therapy for rectal cancer [14].
Alongi et al. [12] investigated the application of [18F]FDG PET/CT image-based textural feature analysis for the early prediction of disease progression and survival outcome in 52 metastatic colorectal cancer (MCC) patients after first adjuvant therapy. For this purpose, radiomics features were extracted from PET and low-dose CT images. The hybrid descriptive-inferential method [15] was used for feature selection, while discriminant analysis [16] was used to implement the predictive model. The prediction performance was evaluated for per-lesion, per-patient, and per-liver-lesion analyses. All results showed that the proposed radiomics model was feasible and potentially useful in the predictive evaluation of disease progression in MCC.
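As a rough, hedged illustration of such a per-lesion pipeline (feature selection followed by discriminant analysis, evaluated with cross-validation), the sketch below substitutes a generic univariate selection step for the hybrid descriptive-inferential method [15] and uses synthetic placeholder data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: per-lesion radiomics features (n_lesions x n_features); y: progression label.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 100)), rng.integers(0, 2, size=60)   # placeholder data

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=10),               # stand-in for ref. [15]
                      LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```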
Stefano et al. [13] studied the variability of PET radiomics features under the impact of co-registration with MRI, using the difference percentage coefficient and Spearman's correlation coefficient for three groups of images: (i) original PET, (ii) PET after co-registration with T1-weighted MRI, and (iii) PET after co-registration with FLAIR MRI. For this purpose, 77 patients with brain cancers undergoing [11C]-Methionine PET were considered. Subsequently, PET images were co-registered with the MRI sequences, and 107 features were extracted for each of the mentioned groups of images. The variability analysis revealed that shape features, first-order features, and two subgroups of higher-order features possessed good robustness, unlike the remaining groups of features, which showed large differences in the difference percentage coefficient. Furthermore, using Spearman's correlation coefficient, approximately 40% of the selected features differed across the three mentioned groups of images. This is an important consideration for radiomics studies subject to image co-registration constraints, in order to avoid errors in cancer diagnosis, prognosis, and clinical outcome prediction.
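A minimal sketch of the two robustness measures named above, assuming paired values of a single feature computed from the original and co-registered PET volumes (the example values are arbitrary):

```python
import numpy as np
from scipy.stats import spearmanr

def difference_percentage(original: np.ndarray, coregistered: np.ndarray) -> np.ndarray:
    """Per-patient percentage difference of one radiomics feature between the
    original and the co-registered PET image."""
    return 100.0 * np.abs(original - coregistered) / np.abs(original)

original = np.array([2.1, 3.4, 1.8, 2.9])       # feature values, original PET
coregistered = np.array([2.0, 3.6, 1.7, 3.1])   # same feature after co-registration
print(difference_percentage(original, coregistered).mean())   # mean difference in %
rho, p_value = spearmanr(original, coregistered)               # rank agreement
print(rho, p_value)
```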
Lee et al. [14] evaluated MRI assessment after neoadjuvant chemoradiotherapy (nCRT) in 912 patients with rectal cancer for staging and treatment planning purposes. They proposed a pathologic complete response (pCR) prediction method based on a novel multi-parametric MRI embedding technique. Specifically, multiple MRI sequences were encapsulated into multi-sequence fusion images (MSFI). Subsequently, radiomics features were extracted and used to predict pCR through a random forest classifier. The results demonstrated that using all available MRI sequences was the most effective approach regardless of the dimension-reduction method, and that it outperformed four competing baselines in terms of the area under the receiver operating characteristic curve (AUC) and F1-score.
Among the other research papers, four of them focus on biomedical image quantification, including the early monitoring response to therapy in patients with brain lesions [17], the quantification of cancer cell mass evolution in zebrafish [18], the clinical comparison of the glomerular filtration rate calculated from different renal depths and formulae [19], and the assessment of the left atrial flow stasis in patients undergoing pulmonary vein isolation for paroxysmal atrial fibrillation [20].
Stefano et al. [17] evaluated new PET prognostic indices for the early assessment of response to Gamma Knife (GK) treatment. GK is an alternative to traditional brain surgery and whole-brain radiation therapy for tumors inaccessible through conventional treatments [21]. The semi-quantitative PET parameters currently used in the clinical setting can be affected by statistical fluctuation errors and/or cannot provide information on tumor extent and heterogeneity. To overcome these limitations, the cumulative standardized uptake value histogram (CSH) and its area under the curve (AUC) were considered as additional information on the response to GK treatment. Specifically, the absolute level of [11C]-Methionine (MET) uptake was measured, and its heterogeneity distribution within PET lesions was evaluated by calculating the CSH and AUC. The results showed good agreement with patient outcomes, and since no relevant correlations were found between the CSH-AUC and the indices usually used in PET imaging, these innovative parameters could be a useful tool for assessing patient responses to therapy.
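The CSH expresses, for each relative uptake threshold, the fraction of the lesion volume with uptake above that threshold, and its area under the curve (AUC-CSH) summarizes uptake heterogeneity in a single number. A minimal sketch, assuming a vector of voxel SUVs from the segmented lesion, is shown below.

```python
import numpy as np

def csh_auc(suv_voxels: np.ndarray, n_thresholds: int = 100) -> float:
    """Area under the cumulative SUV histogram: fraction of lesion volume with
    uptake above each threshold, with thresholds normalized to the maximum SUV."""
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    relative_uptake = suv_voxels / suv_voxels.max()
    fractions = np.array([(relative_uptake >= t).mean() for t in thresholds])
    # Trapezoidal integration of the volume-fraction curve over [0, 1].
    return float(np.sum((fractions[1:] + fractions[:-1]) / 2.0 * np.diff(thresholds)))

lesion_suv = np.random.default_rng(2).uniform(1.0, 8.0, size=5000)  # placeholder voxels
print(csh_auc(lesion_suv))
```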
In [18], the authors considered the zebrafish, a model organism for the study of human cancer that, compared with the murine model, has several properties that are ideal for personalized therapies. The transparency of zebrafish embryos and the development of the pigment-deficient “casper” zebrafish line make it possible to directly observe cancer formation and progression in the living animal. Nevertheless, the automatic quantification of cellular proliferation in vivo remains a critical issue. For this reason, the authors proposed a new tool, namely ZFTool, to automatically quantify cancer cell evolution. ZFTool is able to establish a base threshold that eliminates embryo autofluorescence, automatically measure the area and intensity of green fluorescent protein (GFP)-marked cells, and define a proliferation index. As a result, the proliferation index computed on different targets demonstrated the efficiency of ZFTool in providing a good automatic quantification of cancer mass evolution in zebrafish while eliminating the influence of autofluorescence.
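A simplified sketch of this kind of quantification (autofluorescence thresholding, then area and integrated intensity of GFP-positive pixels, combined into a proliferation index relative to the first time point) is given below; the actual ZFTool implementation may differ.

```python
import numpy as np

def gfp_mass(green_channel: np.ndarray, autofluorescence_threshold: float) -> tuple:
    """Area (in pixels) and integrated intensity of GFP-marked cells after removing
    embryo autofluorescence with a fixed base threshold."""
    mask = green_channel > autofluorescence_threshold
    return int(mask.sum()), float(green_channel[mask].sum())

def proliferation_index(intensity_at_t: float, intensity_at_t0: float) -> float:
    """Growth of the fluorescent cancer cell mass relative to the first time point."""
    return intensity_at_t / intensity_at_t0
```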
In the study by Hsu et al. [19], the authors aimed to compare the differences in renal depths in a camera-based method using Technetium-99m diethylenetriaminepentaacetic acid (Tc-99m DTPA). This method is commonly used to calculate the glomerular filtration rate (GFR), as it can easily provide split renal function. Renal depth is the main factor affecting the accuracy of GFR measurement. For this reason, the renal depths obtained from three formulae (Tonnesen's, Itoh's, and Taylor's) and from CT scans were compared and used to calculate the GFR with four methods. For this purpose, 51 patients underwent a laboratory test within one month and a CT scan within two months. The results showed that the renal depths measured using the three formulae were smaller than those measured using the CT scan, and the right renal depth was always larger than the left.
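For context, the camera-based approach corrects the measured renal counts for soft-tissue attenuation using the estimated renal depth. A sketch follows, using the commonly quoted form of Tonnesen's formula and a Tc-99m attenuation coefficient of about 0.153 cm^-1; these values are reported here for illustration and should be checked against the original paper.

```python
import math

def tonnesen_depths_cm(weight_kg: float, height_cm: float) -> tuple:
    """Renal depths from Tonnesen's formula (commonly quoted form:
    right = 13.3 * W/H + 0.7, left = 13.2 * W/H + 0.7, with W in kg and H in cm)."""
    ratio = weight_kg / height_cm
    return 13.3 * ratio + 0.7, 13.2 * ratio + 0.7

def depth_corrected_counts(counts: float, depth_cm: float, mu_per_cm: float = 0.153) -> float:
    """Attenuation-corrected renal counts (Tc-99m in soft tissue), the quantity
    entering a camera-based (Gates-type) GFR calculation."""
    return counts * math.exp(mu_per_cm * depth_cm)

print(tonnesen_depths_cm(70.0, 170.0))
```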
In [20], the authors aimed to demonstrate that left atrial (LA) stasis, derived from 4D-flow MRI, is a useful biomarker of LA recovery in patients with atrial fibrillation (AF). AF is associated with systemic thrombo-embolism and stroke events, which do not appear to be significantly reduced following successful pulmonary vein (PV) ablation. The authors' hypothesis was that LA recovery is associated with a reduction in LA stasis. For this purpose, 148 subjects with paroxysmal AF and 24 controls were recruited and underwent cardiac MRI, inclusive of 4D-flow. The LA was isolated within the 4D-flow dataset to constrain the stasis maps. The results showed that the mean LA stasis in the controls was lower than that in the pre-ablation cohort and that the mean LA stasis was reduced in the post-ablation cohort compared with the pre-ablation cohort. The study demonstrated that 4D flow-derived LA stasis mapping is clinically relevant and reveals stasis changes in the LA body pre- and post-pulmonary vein ablation.
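In 4D-flow analyses of this kind, stasis is typically mapped voxel by voxel as the fraction of the cardiac cycle during which the local blood speed stays below a low-velocity threshold. The sketch below assumes this definition and an illustrative threshold of 0.1 m/s; the exact definition used in [20] may differ.

```python
import numpy as np

def stasis_map(velocity: np.ndarray, threshold_m_s: float = 0.1) -> np.ndarray:
    """Voxel-wise stasis: fraction of cardiac phases with speed below the threshold.
    velocity has shape (n_phases, nx, ny, nz, 3), in m/s."""
    speed = np.linalg.norm(velocity, axis=-1)        # (n_phases, nx, ny, nz)
    return (speed < threshold_m_s).mean(axis=0)      # (nx, ny, nz)

def mean_la_stasis(velocity: np.ndarray, la_mask: np.ndarray) -> float:
    """Mean stasis within the segmented left atrial volume (boolean mask)."""
    return float(stasis_map(velocity)[la_mask].mean())
```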
Finally, the last two published studies concern an image registration technique based on local features of retinal vessels [22] and hardware optimizations of X-ray pre-processing using a field-programmable gate array (FPGA) [23].
In the first of these two studies [22], an innovative method, namely CURVE, is presented to accurately extract feature points on retinal vessels and throughout the fundus image. The performance of CURVE was tested on different datasets and compared with six state-of-the-art feature extraction methods. The results showed that the feature extraction accuracy of CURVE significantly outperformed that of the existing methods. CURVE was then paired with a scale-invariant feature transform (SIFT) descriptor to test its registration capability on the fundus image registration (FIRE) dataset. CURVE-SIFT successfully registered 44% of the image pairs, while existing feature-based techniques registered less than 27% of the image pairs.
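For orientation, a generic SIFT-based fundus registration sketch using OpenCV is shown below; it is not the CURVE detector itself, and the file names, ratio-test threshold, and homography model are illustrative assumptions.

```python
import cv2
import numpy as np

# Load a fixed and a moving fundus image as grayscale (placeholder file names).
fixed = cv2.imread("fundus_fixed.png", cv2.IMREAD_GRAYSCALE)
moving = cv2.imread("fundus_moving.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                               # descriptor used downstream of CURVE
kp_fixed, des_fixed = sift.detectAndCompute(fixed, None)
kp_moving, des_moving = sift.detectAndCompute(moving, None)

# Ratio-test matching followed by a RANSAC homography estimate.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des_fixed, des_moving, k=2)
           if m.distance < 0.75 * n.distance]
pts_fixed = np.float32([kp_fixed[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_moving = np.float32([kp_moving[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(pts_moving, pts_fixed, cv2.RANSAC, 5.0)  # moving -> fixed
registered = cv2.warpPerspective(moving, H, fixed.shape[::-1])
```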
The last study [23] proposed the optimization of X-ray pre-processing in CT imaging to compute total attenuation projections while avoiding the intermediate step of converting detector data to intensity images. Furthermore, a configurable hardware architecture for data acquisition systems on FPGAs was proposed to fulfill real-time requirements, with the aim of achieving “on-the-fly” pre-processing of 2D projections. Finally, this architecture was configured to explore and analyze different arithmetic representations, such as floating-point and fixed-point data formats. In this way, the representation and data format that minimized execution time and hardware costs without affecting image quality were identified. Compared with the state-of-the-art pre-processing algorithm, the latency decreased by 4.125× and the resource utilization decreased by ∼6.5×. Using fixed-point representations at different data precisions further decreased the latency and resource utilization.
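As a software-level illustration of the pre-processing step being optimized (the total attenuation p = -ln(I/I0) computed directly from detector data) and of the effect of a fixed-point data format, a short NumPy sketch follows; it models quantization only and does not reproduce the FPGA architecture of [23].

```python
import numpy as np

def attenuation_projection(raw_counts: np.ndarray, flat_field: np.ndarray) -> np.ndarray:
    """Total attenuation p = -ln(I / I0) computed directly from detector data,
    without an intermediate intensity image."""
    return -np.log(raw_counts / flat_field)

def to_fixed_point(x: np.ndarray, frac_bits: int = 12) -> np.ndarray:
    """Crude software model of a fixed-point format with frac_bits fractional bits,
    used to inspect the quantization error a given precision implies."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

counts = np.random.default_rng(3).uniform(1e3, 6e4, size=(64, 64))   # placeholder detector frame
flat = np.full_like(counts, 6.5e4)                                    # placeholder flat-field (I0)
p = attenuation_projection(counts, flat)
print(np.abs(p - to_fixed_point(p)).max())    # worst-case quantization error for this format
```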
In conclusion, this Special Issue covers recent trends in biomedical imaging applications, such as quantification, detection, radiomics, registration, and optimization, constituting a good sample of the current state-of-the-art results in this field.

Author Contributions

Conceptualization, A.S., F.V. and A.C.; methodology, A.S.; resources, F.V. and A.C.; data curation, A.S.; writing—original draft preparation, A.S.; writing—review and editing, A.S.; supervision, A.S.; project administration, A.S., F.V. and A.C.; funding acquisition, A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Benfante, V.; Stefano, A.; Comelli, A.; Giaccone, P.; Cammarata, F.P.; Richiusa, S.; Scopelliti, F.; Pometti, M.; Ficarra, M.; Cosentino, S.; et al. A New Preclinical Decision Support System Based on PET Radiomics: A Preliminary Study on the Evaluation of an Innovative 64Cu-Labeled Chelator in Mouse Models. J. Imaging 2022, 8, 92. [Google Scholar] [CrossRef] [PubMed]
  2. Stefano, A.; Comelli, A. Customized efficient neural network for covid-19 infected region identification in ct images. J. Imaging 2021, 7, 131. [Google Scholar] [CrossRef] [PubMed]
  3. Banna, G.L.; Anile, G.; Russo, G.; Vigneri, P.; Castaing, M.; Nicolosi, M.; Strano, S.; Gieri, S.; Spina, R.; Patanè, D.; et al. Predictive and Prognostic Value of Early Disease Progression by PET Evaluation in Advanced Non-Small Cell Lung Cancer. Oncology 2017, 92, 39–47. [Google Scholar] [CrossRef] [PubMed]
  4. Stefano, A.; Gioè, M.; Russo, G.; Palmucci, S.; Torrisi, S.E.; Bignardi, S.; Basile, A.; Comelli, A.; Benfante, V.; Sambataro, G.; et al. Performance of Radiomics Features in the Quantification of Idiopathic Pulmonary Fibrosis from HRCT. Diagnostics 2020, 10, 306. [Google Scholar] [CrossRef]
  5. Cutaia, G.; la Tona, G.; Comelli, A.; Vernuccio, F.; Agnello, F.; Gagliardo, C.; Salvaggio, L.; Quartuccio, N.; Sturiale, L.; Stefano, A.; et al. Radiomics and prostate MRI: Current role and future applications. J. Imaging 2021, 7, 34. [Google Scholar] [CrossRef]
  6. Albayrak, N.B.; Akgul, Y.S. Estimation of the Prostate Volume from Abdominal Ultrasound Images by Image-Patch Voting. Appl. Sci. 2022, 12, 1390. [Google Scholar] [CrossRef]
  7. Comelli, A.; Dahiya, N.; Stefano, A.; Vernuccio, F.; Portoghese, M.; Cutaia, G.; Bruno, A.; Salvaggio, G.; Yezzi, A. Deep learning-based methods for prostate segmentation in magnetic resonance imaging. Appl. Sci. 2021, 11, 782. [Google Scholar] [CrossRef]
  8. Salvaggio, G.; Cutaia, G.; Greco, A.; Pace, M.; Salvaggio, L.; Vernuccio, F.; Cannella, R.; Algeri, L.; Incorvaia, L.; Stefano, A.; et al. Deep Learning Networks for Automatic Retroperitoneal Sarcoma Segmentation in Computerized Tomography. Appl. Sci. 2022, 12, 1665. [Google Scholar] [CrossRef]
  9. Vélez, P.; Miranda, M.; Serrano, C.; Acha, B. Does a Previous Segmentation Improve the Automatic Detection of Basal Cell Carcinoma Using Deep Neural Networks? Appl. Sci. 2022, 12, 2092. [Google Scholar] [CrossRef]
  10. Amodeo, M.; Abbate, V.; Arpaia, P.; Cuocolo, R.; Orabona, G.D.; Murero, M.; Parvis, M.; Prevete, R.; Ugga, L. Transfer learning for an automated detection system of fractures in patients with maxillofacial trauma. Appl. Sci. 2021, 11, 6293. [Google Scholar] [CrossRef]
  11. Comelli, A.; Dahiya, N.; Stefano, A.; Benfante, V.; Gentile, G.; Agnese, V.; Raffa, G.M.; Pilato, M.; Yezzi, A.; Petrucci, G.; et al. Deep learning approach for the segmentation of aneurysmal ascending aorta. Biomed. Eng. Lett. 2021, 11, 15–24. [Google Scholar] [CrossRef] [PubMed]
  12. Alongi, P.; Stefano, A.; Comelli, A.; Spataro, A.; Formica, G.; Laudicella, R.; Lanzafame, H.; Panasiti, F.; Longo, C.; Midiri, F.; et al. Artificial Intelligence Applications on Restaging [18F]FDG PET/CT in Metastatic Colorectal Cancer: A Preliminary Report of Morpho-Functional Radiomics Classification for Prediction of Disease Outcome. Appl. Sci. 2022, 12, 2941. [Google Scholar] [CrossRef]
  13. Stefano, A.; Leal, A.; Richiusa, S.; Trang, P.; Comelli, A.; Benfante, V.; Cosentino, S.; Sabini, M.G.; Tuttolomondo, A.; Altieri, R.; et al. Robustness of pet radiomics features: Impact of co-registration with mri. Appl. Sci. 2021, 11, 10170. [Google Scholar] [CrossRef]
  14. Lee, S.; Lim, J.; Shin, J.; Kim, S.; Hwang, H. Pathologic Complete Response Prediction after Neoadjuvant Chemoradiation Therapy for Rectal Cancer Using Radiomics and Deep Embedding Network of MRI. Appl. Sci. 2021, 11, 9494. [Google Scholar] [CrossRef]
  15. Barone, S.; Cannella, R.; Comelli, A.; Pellegrino, A.; Salvaggio, G.; Stefano, A.; Vernuccio, F. Hybrid descriptive-inferential method for key feature selection in prostate cancer radiomics. Appl. Stoch. Model. Bus. Ind. 2021, 37, 961–972. [Google Scholar] [CrossRef]
  16. Stefano, A.; Comelli, A.; Bravatà, V.; Barone, S.; Daskalovski, I.; Savoca, G.; Sabini, M.G.; Ippolito, M.; Russo, G. A preliminary PET radiomics study of brain metastases using a fully automatic segmentation method. BMC Bioinform. 2020, 21, 325. [Google Scholar] [CrossRef] [PubMed]
  17. Stefano, A.; Pisciotta, P.; Pometti, M.; Comelli, A.; Cosentino, S.; Marletta, F.; Cicero, S.; Sabini, M.G.; Ippolito, M.; Russo, G. Early monitoring response to therapy in patients with brain lesions using the cumulative SUV histogram. Appl. Sci. 2021, 11, 2999. [Google Scholar] [CrossRef]
  18. Carreira, M.J.; Vila-Blanco, N.; Cabezas-Sainz, P.; Sánchez, L. Zftool: A software for automatic quantification of cancer cell mass evolution in zebrafish. Appl. Sci. 2021, 11, 7721. [Google Scholar] [CrossRef]
  19. Hsu, W.L.; Chang, S.M.; Chang, C.C. Clinical Comparison of the Glomerular Filtration Rate Calculated from Different Renal Depths and Formulae. Appl. Sci. 2022, 12, 698. [Google Scholar] [CrossRef]
  20. Sheitt, H.; Kim, H.; Wilton, S.; White, J.A.; Garcia, J. Left atrial flow stasis in patients undergoing pulmonary vein isolation for paroxysmal atrial fibrillation using 4d-flow magnetic resonance imaging. Appl. Sci. 2021, 11, 5432. [Google Scholar] [CrossRef]
  21. Stefano, A.; Vitabile, S.; Russo, G.; Ippolito, M.; Marletta, F.; D’Arrigo, C.; D’Urso, D.; Sabini, M.G.; Gambino, O.; Pirrone, R.; et al. An automatic method for metabolic evaluation of gamma knife treatments. In Lecture Notes in Computer Science, Proceedings of the 18th International Conference, Genoa, Italy, 7–11 September 2015; Murino, V., Puppo, E., Eds.; Springer International Publishing: Cham, Switzerland, 2015; Volume 9279, pp. 579–589. [Google Scholar]
  22. Ramli, R.; Hasikin, K.; Idris, M.Y.I.; Karim, N.K.A.; Wahab, A.W.A. Fundus image registration technique based on local feature of retinal vessels. Appl. Sci. 2021, 11, 11201. [Google Scholar]
  23. Passaretti, D.; Ghosh, M.; Abdurahman, S.; Egito, M.L.; Pionteck, T. Hardware Optimizations of the X-ray Pre-Processing for Interventional Computed Tomography Using the FPGA. Appl. Sci. 2022, 12, 5659. [Google Scholar]

Short Biography of Authors

Alessandro Stefano is a Research Scientist at the Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR) of Cefalù. He received his BS degree and his PhD in Engineering in Computer Science from the University of Palermo, Italy, in 2005 and 2016, respectively. Currently, his research interests include medical image processing and analysis, in particular for non-invasive imaging techniques, such as positron emission tomography (PET), computerized tomography (CT), and magnetic resonance (MR); radiomics; and artificial intelligence in clinical health care applications. He is the author of more than 80 scientific papers in peer-reviewed journals and international conference proceedings.
 
Federica Vernuccio obtained her degree in Medicine at the University of Palermo in 2012 and her specialization in Radiology in 2018. In 2019, she obtained the Italian national scientific qualification for Associate Professor in Radiology. She worked as an abdominal radiologist at the University Hospital of Palermo from 2020 to November 2021, and since 15 November 2021 she has been a staff radiologist at the Radiology Department of the University Hospital of Padova. She has won more than 10 awards at international conferences and has authored more than 70 full-length papers in international journals with impact factors, including Radiology, AJR, and European Radiology. Her main clinical and research interests are hepato-biliary tumors. She is a strong supporter of equity, diversity, and equality in academia.
 
Albert Comelli is currently a Researcher/Scientist in Biomedical Image Processing and Analysis at the Ri.MED Foundation, Palermo, Italy. He received a combined BSc/MSc degree in Computer Science at the University of Catania and a PhD in Computer Engineering at the University of Palermo. His research interests include biomedical image processing and analysis; radiomics; and machine and deep learning for developing personalized predictive and/or prognostic models to support the medical decision process in patients undergoing different imaging methods such as magnetic resonance, computed tomography, and positron emission tomography. He is the author of over 45 scientific papers in peer-reviewed journals and international conference proceedings.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
