Biomedical Image Processing and Classification

Introduction
Biomedical image processing is an interdisciplinary field [1] whose foundations spread across a variety of disciplines, including electronic engineering, computer science, physics, mathematics, physiology, and medicine. Several imaging techniques have been developed [2], providing many approaches to the study of the body, including X-rays for computed tomography, ultrasound, magnetic resonance, radioactive pharmaceuticals used in nuclear medicine (for positron emission tomography and single-photon emission computed tomography), elastography, functional near-infrared spectroscopy, endoscopy, photoacoustic imaging, and thermography. Even bioelectric sensors, when used in high-density systems sampling a two-dimensional surface (e.g., in electroencephalography or electromyography [3]), can provide data that can be studied by image processing methods. Biomedical image processing is finding an increasing number of important applications, for example, segmenting an organ to study its internal structure and supporting the diagnosis of a disease or the selection of a treatment [4].
Classification theory is another well-developed field of research [5] connected to machine learning, which is an important branch of artificial intelligence. Different problems have been addressed, from the supervised identification of a map relating input features to a desired output, to the exploration of data by unsupervised learning (cluster analysis, data mining) or online training through experience. The estimation of informative features and their further processing (by feature generation) and selection (either by filtering or with approaches wrapped around the classifier) are important steps, both to improve classification performance (avoiding overfitting) and to investigate the information that candidate features provide about the output of interest. Excellent results have also recently been documented by deep learning approaches [6], in which optimal features are automatically extracted in deep layers on the basis of training examples and then used for classification.
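As an illustration of the filter-based feature selection step mentioned above, the following minimal sketch (a toy example of ours using scikit-learn; the dataset, feature counts, and classifier are illustrative assumptions, not taken from the cited works) ranks candidate features with a univariate test before training a supervised classifier:

```python
# Sketch: filter-based feature selection followed by a supervised classifier.
# All data are synthetic and all parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data: 20 candidate features, of which only 5 are informative.
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Filter approach: rank features by a univariate F-test and keep the best 5,
# reducing the risk of overfitting before the classifier is trained.
selector = SelectKBest(f_classif, k=5).fit(X_tr, y_tr)

# Train a classifier on the selected features only.
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_tr), y_tr)
accuracy = clf.score(selector.transform(X_te), y_te)
```

A wrapper approach would instead evaluate candidate feature subsets by the performance of the classifier itself, at a higher computational cost.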
When classification methods are associated with image processing, computer-aided diagnosis (CAD) systems can be developed, e.g., for the identification of diseased tissue [7] or a specific lesion or malformation [4]. These results indicate interesting future prospects in supporting the diagnosis of diseases [8].

This Special Issue
The present issue consists of six papers on a few topics in the wide range of research fields covered by biomedical image processing and classification.
In [9], the authors proposed a CAD system for the identification and assessment of glomeruli from kidney tissue slides. Their approach is based on deep learning, exploiting convolutional neural network (CNN) architectures tailored for the semantic segmentation task. The obtained results are promising, as also confirmed by expert pathologists. Moreover, the proposed system can easily be integrated into the existing pathologists' workflow thanks to an XML interface with Aperio ImageScope [10].
With recent advances in digital scanning techniques, tissue histopathology slides can be stored in the form of digital images [11]. In recent years, many efforts have been devoted to developing automated classification and segmentation techniques with the aim of improving accuracy and efficiency in digital pathology [12]. In kidney transplantation, pathologists evaluate the architecture of renal structures to assess nephron status. An accurate evaluation of vascular and stromal injury is crucial for determining kidney acceptance, which is currently based on the pathologists' histological evaluation of renal biopsies in addition to clinical data. In this context, automated algorithms may offer crucial support to histopathological image analysis. An example is given in this Special Issue [13].
Although the performance of a machine learning algorithm depends on the amount of available data, few studies have explored the minimal amount of data required to train a CNN in medical deep learning, or the possibility of working with scarce annotations [14]. An innovative contribution is given in this Special Issue [15]. The paper explores the minimum number of patients required to train a U-Net that accurately segments the prostate on T2-weighted MR images. A U-Net was trained on patient numbers ranging from 8 to 320, and its performance was measured. The Dice score significantly increased for training sizes from 8 to 120 patients and then plateaued, with minimal improvement after 160 cases. This study suggests that modest dataset sizes could also be sufficient to segment other organs effectively.
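The Dice score used to measure segmentation performance can be computed directly from binary masks; the following minimal sketch (with toy masks of ours, not data from the cited paper) shows the computation:

```python
# Sketch: Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice_score(pred, truth):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks A and B."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 8x8 masks: a ground-truth square and a prediction shifted by one row.
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True  # 16 pixels
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True    # 16 pixels
score = dice_score(pred, truth)  # overlap = 12 px -> 2*12/(16+16) = 0.75
```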
The correlation between conjunctival pallor (observed on physical examination) and anemia has paved the way for new non-invasive methods for monitoring this important pathology and identifying its potential risks. A critical research challenge for this task is the design of a reliable automated segmentation procedure for the eyelid conjunctiva. A graph partitioning segmentation approach is proposed in [16], exploiting normalized cuts for perceptual grouping and thereby introducing a bias towards the spectrophotometric features of hemoglobin. The segmentation task was further investigated in a subsequent work, which proposed a deep-learning-based approach involving a deconvolutional neural network [17]. The overall pipeline for building a reliable estimator is composed of several smaller tasks, each posing multiple research challenges [18,19]. For instance, starting from the digital image capturing phase, the process is affected by heterogeneous ambient lighting conditions and by the intrinsic color-balancing techniques of the device [20].
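As a rough illustration of graph-partitioning segmentation in the spirit of normalized cuts, the following sketch (a generic example of ours using scikit-learn's spectral clustering on a pixel-affinity graph, not the method of [16]) partitions a synthetic image into two regions:

```python
# Sketch: graph-based image segmentation via spectral clustering, which
# approximates a normalized-cut partition. Image and parameters are toy values.
import numpy as np
from sklearn.feature_extraction import image
from sklearn.cluster import spectral_clustering

# Toy image: a bright square on a darker, slightly noisy background.
rng = np.random.RandomState(0)
img = 0.05 * rng.rand(20, 20)
img[5:15, 5:15] += 1.0

# Build a pixel-adjacency graph whose edge weights come from intensity
# gradients, then convert gradients into affinities (similar neighboring
# pixels get strong connections, edges across boundaries get weak ones).
graph = image.img_to_graph(img)
graph.data = np.exp(-graph.data / graph.data.std())

# Partition the affinity graph into two groups of pixels.
labels = spectral_clustering(graph, n_clusters=2, eigen_solver="arpack",
                             random_state=0).reshape(img.shape)
```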
An efficient framework for enhancing and segmenting brain MRIs to identify a tumor is discussed in [21]. The hybridized fuzzy clustering and distance regularized level set (DRLS) technique effectively extracted the region of interest (ROI) in the brain slices. For identifying the ROI, fuzzy clustering was employed, with the number of clusters k validated using the silhouette metric. In post-processing, the ROI mining techniques of marker-controlled watershed segmentation, seeded region growing, and DRLS were adopted to extract the anomalous section from the segmented objects [22,23]. Tumor volume computation and 3D modeling of the abnormalities in the clinical dataset were performed using the physical spacing metadata available in the headers of the DICOM images considered. This can help physicians locate the tumor and determine other information (e.g., size and shape) during the initial diagnosis, thereby enhancing the process of treating the tumor.
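Volume computation from DICOM spacing metadata amounts to counting the segmented voxels and multiplying by the physical size of one voxel; the following minimal sketch (with a toy mask and spacing values of ours, standing in for the in-plane pixel spacing and slice spacing read from DICOM headers) illustrates the idea:

```python
# Sketch: tumor volume from a binary mask plus physical voxel spacing.
# The mask and the spacing values are illustrative, not clinical data.
import numpy as np

def tumor_volume_mm3(mask, row_mm, col_mm, slice_mm):
    """Volume = (number of segmented voxels) x (volume of one voxel in mm^3)."""
    voxel_mm3 = row_mm * col_mm * slice_mm
    return int(mask.sum()) * voxel_mm3

# Toy volume of 4 slices, 10x10 pixels each (axes: slices, rows, columns).
mask = np.zeros((4, 10, 10), dtype=bool)
mask[1:3, 4:8, 4:8] = True  # 2 slices x 4 rows x 4 cols = 32 voxels
volume = tumor_volume_mm3(mask, row_mm=0.5, col_mm=0.5, slice_mm=3.0)
# 32 voxels x 0.75 mm^3 per voxel = 24.0 mm^3
```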
Finally, one paper in this Special Issue has addressed the problem of identifying the volume status of patients [24]. The method was developed within a long-standing research activity on the automated investigation of the pulsatility of the inferior vena cava (IVC) from ultrasound measurements. The clinical approach is based on the subjective choice of a fixed direction along which to investigate IVC pulsations. However, the vein may have a complicated shape and show respirophasic movements, which introduce uncertainties into the clinical evaluation. Two automated methods have been introduced to delineate the IVC edges along sections either transverse or longitudinal to the blood vessel [25][26][27]. Preliminary results have shown the importance of using these automated methods to obtain more repeatable, reliable, and accurate information on IVC pulsatility than when using subjective clinical methods [28][29][30][31]. In this Special Issue, the two views are used to extract features that, integrated by a classification algorithm, can result in improved performance in diagnosing the volemic status of patients [24].

Future Perspectives
The research fields of biomedical image processing and classification have reached a remarkable level of maturity. Their integration into CAD systems can greatly contribute to supporting medical doctors in refining the clinical picture. Further growth in contributions to this field is expected in the near future; for example, taking advantage of increasing digitalization, deep learning has the potential to provide efficient solutions to many medical problems.
However, the real challenge is to bring an increasing number of systems into the hands of doctors, so that they can be applied to patients. This requires leaving the laboratory, engineering the systems, certifying the products, and identifying the correct target market that can accommodate the new devices and provide adequate support for these activities. To speed up this innovation process, collaboration between researchers, institutions, funders, and entrepreneurs is increasingly important. The "do-it-all-yourself" approach only makes sense in a world of scarce external knowledge, but today knowledge is more widely distributed than ever before. Thus, in order to improve the wellness of the whole community [32], a dynamic environment in which new high-impact solutions can be created will only be able to grow through collaboration among organizations.
Funding: This research was carried out as part of the project "Method and apparatus to characterize non-invasive images containing venous blood vessels", funded through the PoC Instrument initiative, implemented by LINKS, with the support of LIFFT, with funds from Compagnia di San Paolo.