Karpinski Score under Digital Investigation: A Fully Automated Segmentation Algorithm to Identify Vascular and Stromal Injury of Donors’ Kidneys

Abstract: In kidney transplantations, the evaluation of the vascular structures and stromal areas is crucial for determining kidney acceptance, which is currently based on the pathologist's visual evaluation. In this context, an accurate assessment of the vascular and stromal injury is fundamental to assessing the nephron status. In the present paper, the authors present a fully automated algorithm, called RENFAST (Rapid EvaluatioN of Fibrosis And vesselS Thickness), for the segmentation of kidney blood vessels and fibrosis in histopathological images. The proposed method employs a novel strategy based on deep learning to accurately segment blood vessels, while interstitial fibrosis is assessed using an adaptive stain separation method. The RENFAST algorithm is developed and tested on 350 periodic acid–Schiff (PAS) images for blood vessel segmentation and on 300 Massone's trichrome (TRIC) stained images for the detection of renal fibrosis. In the TEST set, the algorithm exhibits excellent segmentation performance in both blood vessels (accuracy: 0.8936) and fibrosis (accuracy: 0.9227) and outperforms all the compared methods. To the best of our knowledge, the RENFAST algorithm is the first fully automated method capable of detecting both blood vessels and fibrosis in digital histological images. Being very fast (average computational time 2.91 s), this algorithm paves the way for automated, quantitative, and real-time kidney graft assessments.


Introduction
Kidney allograft transplant is experiencing a broad revolution, thanks to an increasing understanding of the pathologic mechanisms behind rejection and the introduction of new techniques and procedures for transplants [1]. The primary focus during kidney transplants has always been the identification, assessment, and treatment of allograft rejection. However, recently, a new issue has come to light: a shortage of donor organs. To solve this impasse, selection criteria were revised, leading to the so-called "expanded criteria donor" approach: kidneys that once would have been excluded because of the donors' clinical history or those deriving from deceased patients are nowadays carefully used [2,3].
In this context, the preimplantation evaluation of donors' kidneys has become more and more crucial. The pathologist's challenge is to recognize early signs of degeneration to "predict" the organs' functionality and performance. This analysis, usually based on periodic acid-Schiff (PAS) and trichrome (TRIC) staining, is focused on the glomeruli, tubules, vessels, and cortical parenchyma of the donor kidney, searching for glomerulosclerosis, tubule atrophy, vascular damage, or interstitial fibrotic replacement, respectively (Figure 1). The Karpinski score is then applied to grade the injury of the donor kidney. This score is based on a semiquantitative evaluation of the structures mentioned above. For each of the four compartments (glomeruli, tubules, blood vessels, and cortical parenchyma), the pathologist summarizes the evaluation in a four-grade score, ranging from 0 (absence of injury) to 3 (marked injuries); the total score is expressed out of 12 [4]. Notably, both arteries and arterioles are considered in vascular damage assessment, characterized by progressive thickening of their wall and shrinkage of their lumen. At the same time, the cortical parenchyma could be replaced by fibrous connective tissue [5,6]. The preimplantation kidney evaluation is a delicate, crucial activity for pathology laboratories. It is time-consuming, usually performed with urgency, and has a marked impact on the daily diagnostic routine. Moreover, the evaluation is operator-dependent, with a significant rate of interobserver difference [7]. In this challenging and evolving panorama, the introduction and application of an automated analysis algorithm would be of compelling importance.
In the last few years, several strategies have been proposed for the segmentation of kidney blood vessels and for the quantification of fibrotic tissue in biopsy images. Bevilacqua et al. [8] employed an artificial neural network (ANN) to detect blood vessels in histological kidney images. Lumen regions were firstly detected by applying fixed thresholding and morphological operators. Seeded region growing was then implemented to extract the membrane all around the segmented objects. Finally, a neural network based on Haralick texture features [9] was used to distinguish between blood vessels and tubular structures. Although well structured, this strategy suffers from several limitations. First, blood vessels with small or absent lumen cannot be segmented using the described approach. In addition, stain variability greatly influences the performance of the region growing, causing imprecise recognition of the blood vessel borders. Finally, the high variability in the shapes, dimensions, and textural characteristics of tubules seriously affects the classification provided by the network. Tey et al. [10] proposed an algorithm for the quantification of interstitial fibrosis (IF) based on color image segmentation and tissue structure identification in biopsy samples stained with Massone's trichrome (TRIC). All the renal structures were identified by employing color space transformations and structural feature extraction from the images. Then, the regions of fibrotic tissue were identified by removing all the non-fibrotic structures from the biopsy tissue area. This approach leads to fast identification of renal fibrotic tissue, but it is not free from limitations. First of all, there is a loss of information during the color space transformation and, in the presence of high stain variability, the method is not able to correctly classify all the renal structures. 
Moreover, being based on the identification and subsequent removal of non-fibrotic regions from the tissue, an error in the segmentation of these structures causes inaccurate quantification of interstitial fibrosis. Fu et al. [11] proposed a convolutional neural network (CNN) for fibrotic tissue segmentation in atrial tissue stained with Massone's trichrome. The network, consisting of 11 convolutional layers, was trained on a three-class problem (background vs. fibrosis vs. myocytes), giving the RGB image as input and the corresponding manual mask as the target. This approach provides fast detection of fibrotic areas of the tissue but presents one major limitation: color variability. Stain variations may affect both the training of the network and the correct segmentation of fibrotic tissue, and every mis-segmentation error leads to incorrect detection and quantification of interstitial fibrosis.
In this paper, we present a novel method for the detection of blood vessels and for the quantification of interstitial fibrosis in kidney histological images. To the best of our knowledge, no automated solution has been proposed so far to cope with the issue of stain variability in PAS and TRIC images. Our approach employs a preprocessing stage specifically designed to address the problem of color variability. The proposed algorithm for the segmentation of vascular structures exploits a deep learning approach combined with the detection of cellular structures to accurately segment blood vessels in PAS stained images. Interstitial fibrosis is assessed using an adaptive stain separation method to detect all the fibrotic areas within the histological tissue.

Materials and Methods
Here we present an automated method called RENFAST (Rapid EvaluatioN of Fibrosis And vesselS Thickness). The RENFAST algorithm is a deep-learning-based method for the segmentation of renal blood vessels and fibrosis. A flowchart of the proposed method is sketched in Figure 2. In the following sections, a detailed description of the algorithm is provided.
Figure 2. Flowchart of the RENFAST (Rapid EvaluatioN of Fibrosis And vesselS Thickness) algorithm for vessel and fibrosis segmentation. The first row illustrates the pipeline for blood vessel detection, while the second row shows the workflow of fibrosis segmentation. After PAS (periodic acid-Schiff) color normalization, blood vessels are detected using a deep learning method (CNN) and ad hoc postprocessing. Kidney fibrosis is segmented through TRIC (Massone's trichrome) normalization followed by adaptive stain separation.

Database Description
The whole slide images (WSIs) of kidney biopsy specimens of 65 patients (median age 51 years, range 29-74 years) were used for this work; these were collected at the Division of Pathology, AOU Città della Salute e della Scienza Hospital, Turin, Italy and then anonymized. The pathology laboratory managed the biopsied samples of each kidney according to the kidney transplant biopsy's internal protocol. The tissue was fixed with Serra fixative and then processed in an urgency regimen using a microwave processor or LOGOS J processor (Milestone, Bergamo, Italy). Samples were then paraffin-embedded and serially sectioned (5 µm), mounted onto adhesive slides, and stained with PAS and TRIC. Finally, all the slides produced were scanned with a Hamamatsu NanoZoomer S210 Digital slide scanner (Turin, Italy), providing a magnification of ×100 (conversion factor: 0.934 µm/pixel). For each patient (n = 65), an expert pathologist (A.B.) manually extracted 10 images with dimensions of 512 × 512 pixels, for a total of 650 images. After consensus, manual annotations of blood vessels and fibrosis were generated by two operators (A.G. and L.M.). Table 1 shows the overall dataset composition. The image dataset, along with the annotations, is available at https://data.mendeley.com/datasets/m2t49zf6xr/1.

Stain Normalization
The proposed algorithm employs a specific preprocessing stage, called stain normalization, to reduce the color variability of the histological samples. Previous studies have shown that stain variability significantly affects the performance of automatic algorithms in digital pathology [12,13]. Stain normalization transforms a source image I into a normalized image I_NORM through the operation I_NORM = f(I, I_REF), where I_REF is a reference image and f(·) is the function that applies the color intensities of I_REF to the source image [14]. The reference image is chosen by the pathologist as the image with the best tissue staining and visual appearance. For each image of the dataset, the RENFAST algorithm applies the same stain normalization method that we developed in our previous work [15]. First, the image is converted to the optical density (OD) space, where the relationship between stain concentration and light intensity is linear. The algorithm then estimates the stain color appearance matrix (W) and the stain density map (H) for both the source and reference images. To apply the normalization, the stain density map H_SOURCE of the source image is rescaled to match the reference map H_REF, where (·)_SOURCE and (·)_REF denote the source and reference images, respectively. Finally, the normalized image is converted back from the OD space to RGB. Figure 3 illustrates the color normalization process for sample PAS and TRIC images.
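To make the normalization step concrete, the following Python sketch shows an OD conversion and a density-map rescaling of the kind described above. The function names and the percentile-based robust maximum are illustrative assumptions, not the exact implementation of [15]:

```python
import numpy as np

def rgb_to_od(img, background=255.0):
    """Convert an RGB image to optical density (OD) space, where the
    relationship between stain concentration and intensity is linear."""
    img = np.clip(img.astype(np.float64), 1, 255)   # avoid log(0)
    return -np.log(img / background)

def od_to_rgb(od, background=255.0):
    """Convert an OD image back to 8-bit RGB."""
    return np.clip(np.rint(background * np.exp(-od)), 0, 255).astype(np.uint8)

def normalize_density(H_source, H_ref, pct=99):
    """Rescale each stain-density channel (rows of H) so that its robust
    maximum matches the reference image's. The percentile-based robust
    maximum is an assumption, not necessarily the rule used in [15]."""
    src_max = np.percentile(H_source, pct, axis=1, keepdims=True)
    ref_max = np.percentile(H_ref, pct, axis=1, keepdims=True)
    return H_source * (ref_max / np.maximum(src_max, 1e-8))
```

In a full pipeline, the rescaled density map would be recombined with the reference stain matrix W before converting back to RGB; the estimation of W and H is omitted here.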

Deep Network Architecture
After stain normalization, the first step performed by the RENFAST algorithm is semantic segmentation using a convolutional neural network (CNN). To perform blood vessel segmentation, a UNET architecture with a ResNet34 backbone [16] is employed using the Keras framework. The overall network architecture is shown in Figure 4. This network consists of an encoder structure that downsamples the spatial resolution of the input image through convolutional operations to obtain a low-resolution feature mapping. These features are then resampled by a decoding structure to obtain a pixel-wise prediction of the same size as the input image. The output of the network is a probability map that assigns to each pixel a probability of belonging to a specific class. The entire network is trained on a three-class problem, giving the 512 × 512 RGB images as input and the corresponding labeled masks as the target. In each image of the dataset, pixels are labeled in three classes: (i) background, (ii) blood vessel, and (iii) blood vessel boundaries. To solve the problem of class imbalance, our network's loss function is class-weighted by taking into account how frequently a class occurs in the training set. This means that the least-represented class has a greater contribution than a more represented one during the weight update. The weight of a generic class X is computed from N, the total number of images, and f_classX, the class frequency of class X.
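Since the exact weighting formula is not reproduced here, the sketch below shows one common inverse-frequency scheme consistent with the description (rarer classes receive larger loss weights); the normalization to unit mean weight is an assumption:

```python
import numpy as np

def class_weights(masks, n_classes=3):
    """Inverse-frequency loss weights from labeled masks: rarer classes
    (e.g. vessel boundaries) get larger weights during training.
    The normalization to unit mean weight is an assumed convention."""
    counts = np.zeros(n_classes, dtype=np.float64)
    for m in masks:
        counts += np.bincount(m.ravel(), minlength=n_classes)[:n_classes]
    freq = counts / counts.sum()        # per-class pixel frequency f_classX
    w = 1.0 / np.maximum(freq, 1e-8)    # inverse frequency
    return w / w.sum() * n_classes      # mean weight = 1
```

The resulting vector can be passed to a class-weighted categorical cross-entropy loss.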

The encoding network was pre-trained on ILSVRC 2012 ImageNet [17]. During the training process, only the decoder weights were updated, while the encoder weights were set to non-trainable. This strategy allows for exploiting the knowledge acquired from a previous problem (ImageNet) and using the features learned to solve a new problem (vessel segmentation). This approach is useful both to speed up the training process and to create a robust model even using fewer data.
The training data are real-time augmented while passing through the network, applying the same random transformations (rotation, shifting, flipping) both to the input image and to the corresponding encoded mask. Real-time data augmentation allows us to increase the amount of data available without storing the transformed data in memory. This strategy makes the model more robust to slight variations and prevents the network from overfitting.
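A minimal NumPy stand-in for this paired real-time augmentation, applying identical random transformations to the image and its mask (the transformation set and ranges are illustrative, not the exact Keras configuration):

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply the same random 90-degree rotation, flips, and circular
    shift to an image and its label mask so they stay aligned.
    A simplified stand-in for the real-time Keras augmentation."""
    k = int(rng.integers(0, 4))                   # random 90-degree rotation
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                        # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                        # vertical flip
        image, mask = image[::-1], mask[::-1]
    shift = tuple(rng.integers(-20, 21, size=2))  # random shift in pixels
    image = np.roll(image, shift, axis=(0, 1))
    mask = np.roll(mask, shift, axis=(0, 1))
    return image, mask
```

Because every spatial operation is applied to both arrays, the pixel-to-label correspondence is preserved.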
Our network (Figure 4) was trained on 300 images with a mini-batch size of 32 and categorical cross-entropy as a loss function. The Adam optimization algorithm was employed with an initial learning rate of 0.01. The maximum number of epochs was set to 50, with a validation patience of 10 epochs for early stopping of the training process.
To preserve the information near the boundaries of the image, the RENFAST algorithm applies a specific procedure to build the CNN softmax. Briefly, a mirror border is synthesized in each direction and a sliding window approach is employed to build the probability map. To give the reader the opportunity to observe the entire procedure, we added a detailed description along with a summary figure in Appendix A.
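The border-handling procedure can be sketched as follows: the image is mirror-padded, windows are predicted with a sliding approach, and only each window's central region is written into the full probability map. Window and margin sizes here are illustrative, not the values used in Appendix A:

```python
import numpy as np

def tiled_probability_map(image, predict, n_classes, tile=64, pad=16):
    """Build the CNN probability map with mirrored borders: the image is
    reflect-padded, each window is predicted, and only the central
    `tile` x `tile` region of each prediction is kept, so pixels near
    the image borders are classified with real (mirrored) context.
    `predict` maps an RGB window to a per-pixel softmax."""
    h, w = image.shape[:2]
    H = int(np.ceil(h / tile)) * tile           # pad to a multiple of tile
    W = int(np.ceil(w / tile)) * tile
    padded = np.pad(image, ((pad, H - h + pad), (pad, W - w + pad), (0, 0)),
                    mode="reflect")
    prob = np.zeros((H, W, n_classes))
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            window = padded[y:y + tile + 2 * pad, x:x + tile + 2 * pad]
            p = predict(window)
            prob[y:y + tile, x:x + tile] = p[pad:pad + tile, pad:pad + tile]
    return prob[:h, :w]
```

With a pixel-wise predictor, the map is identical to predicting the whole image at once; with a CNN, the mirrored margin supplies context at the borders.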

Blood Vessel Detection
Starting from the normalized RGB image (Figure 5a), the RENFAST algorithm applies the deep network described in the previous section. Figure 5b shows the probability map obtained from the CNN, in which the red and green areas represent the pixels inside and on the edge of the blood vessels, respectively. Then, our method detects all the white and nuclear regions within the image. All the unstained structures are segmented by thresholding the grayscale image of the PAS sample, while cell nuclei are detected using the object-based thresholding developed in our previous work [15]. Figure 5c illustrates the segmentation of cellular structures performed by the RENFAST algorithm.

To obtain an initial detection of the vascular structures, the probability maps of the regions inside and on the border of the blood vessels are added together and thresholded with a fixed value of 0.35. Then, morphological closing with a disk of 3-pixel radius (equal to 2.80 µm) is carried out to obtain smoother contours. As can be seen from Figure 5d, this strategy leads to accurate detection of the blood vessel boundaries but does not allow the separation of touching structures. To overcome this problem, an additional processing stage is performed to divide clustered blood vessels. The RENFAST algorithm employs a four-step procedure to increase the contrast between each blood vessel's boundary and the background:
1. Inner region mask: thresholding (0.35) and level-set on the probability map of inner regions (red layer);
2. Boundary mask: thresholding (0.35) and level-set on the probability map of boundary regions (green layer);
3. New red layer of the softmax: subtraction of the boundary mask from the inner region mask;
4. New green layer of the softmax: skeleton of the boundary mask.
This procedure generates a softmax with a high SNR (signal-to-noise ratio) where the border of each blood vessel is clearly defined (Figure 5e). Finally, for each connected component of the initial mask (Figure 5d), a simple check is performed: if by subtracting the green layer of the high-SNR softmax (Figure 5e), more than one region is generated, these regions are dilated by 1 pixel and added to the final mask. In this way, the thickness lost during the subtraction is recovered while maintaining the blood vessels' separation. Otherwise, if no additional structure is created with the subtraction, the connected component is inserted directly into the final mask.
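A simplified sketch of this separation check using SciPy connected components; the skeleton subtraction and one-pixel dilation follow the text, while the label bookkeeping is an implementation choice:

```python
import numpy as np
from scipy import ndimage

def split_touching(component, boundary_skeleton):
    """Subtract the boundary skeleton from a connected component; if more
    than one region appears, dilate each piece by one pixel (recovering
    the thickness lost in the subtraction) while keeping the pieces
    separate. Returns a label image (0 = background)."""
    inner = component & ~boundary_skeleton
    labels, n = ndimage.label(inner)
    if n <= 1:                                  # nothing to split: keep as-is
        return component.astype(np.int32)
    out = np.zeros(component.shape, dtype=np.int32)
    for i in range(1, n + 1):
        # 1-pixel dilation, restricted to still-unassigned vessel pixels
        piece = ndimage.binary_dilation(labels == i) & component & (out == 0)
        out[piece] = i
    return out
```

Applied to each connected component of the initial mask, this reproduces the check described above: components that do not split are passed through unchanged.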
The last step of the RENFAST algorithm for vessel segmentation is a structural check on the segmented objects: all the regions with an area of less than 180 µm² are erased, as they are too small to be considered blood vessels. In addition, objects must have at least 2.5% and 5% of their area occupied by lumen and nuclei, respectively. With these structural checks, most of the false positives generated by the CNN are deleted. The final result provided by the proposed algorithm is shown in Figure 5f.

Fibrosis Segmentation
The RENFAST algorithm is also able to quantify interstitial fibrosis in TRIC images. After stain normalization (Section 2.2), our method detects all the uncolored regions so as to process only TRIC stained structures. The normalized TRIC image is first converted to grayscale and Wiener filtered. The resulting image is then thresholded using a value equal to 90% of the image maximum (Figure 6a). Since fibrosis is characterized by a greenish color, the proposed algorithm applies an adaptive stain separation as described in [15]. Thanks to the stain separation (Figure 6b), it is possible to divide the regions that may manifest fibrosis (green channel) from the structural component (red channel). Segmentation of these two channels is performed using an improved version of the MANA (Multiscale Adaptive Nuclei Analysis) algorithm [18]. After min-max scaling, custom object-based thresholding is applied to the green channel (fibrosis) and red channel obtained in the previous step. For each possible threshold point T ∈ [0, 1], the RENFAST algorithm computes an energy function E(T), where p0 is the probability of having intensity values lower than T, p1 is evaluated as 1 − p0, and var0 and var1 represent the variances of the probability functions of the two classes p0 and p1. The threshold T associated with the maximum of the energy function E represents the optimal thresholding point. The result of green and red channel segmentation is illustrated in Figure 6c. All remaining pixels not associated with one of the binary masks (white, green, red) are included in the green or red mask based on where they have the highest intensity in the stain separation channel.
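The threshold search can be illustrated as below. The true energy function of the improved MANA algorithm is defined in [18]; here Otsu's between-class variance is used as a stand-in energy purely to demonstrate the scan over T ∈ [0, 1]:

```python
import numpy as np

def best_threshold(channel, energy, n_points=256):
    """Scan candidate thresholds T in [0, 1] on a min-max scaled channel
    and keep the T that maximizes the given energy function."""
    x = (channel - channel.min()) / (np.ptp(channel) + 1e-12)  # min-max scaling
    best_t, best_e = 0.0, -np.inf
    for t in np.linspace(0.0, 1.0, n_points):
        e = energy(x, t)
        if e > best_e:
            best_t, best_e = t, e
    return best_t

def otsu_energy(x, t):
    """Stand-in energy: Otsu's between-class variance p0*(1-p0)*(mu0-mu1)^2,
    with p0 the fraction of pixels below T (the MANA energy differs)."""
    lo, hi = x[x < t], x[x >= t]
    if lo.size == 0 or hi.size == 0:
        return -np.inf
    p0 = lo.size / x.size
    return p0 * (1.0 - p0) * (lo.mean() - hi.mean()) ** 2
```

Any energy defined in terms of p0, p1, var0, and var1 can be plugged into the same search scheme.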
Finally, the RENFAST algorithm quantifies the interstitial fibrosis as the ratio between the fibrotic area (segmented green channel) and the overall tissue area. Tissue detection is performed using an RGB high-pass filter [19], where the RGB color of each pixel is treated as a 3D vector. The strength of the edge is defined as the magnitude of the maximum gradient. The raw tissue mask is generated by choosing a threshold equal to 5% of the maximum gradient. Morphological opening with a disk of 4-µm radius is then carried out to obtain the tissue contour (Figure 6d).
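A sketch of this tissue-detection step in Python; treating the color-vector gradient magnitude as the edge strength is one plausible reading of [19], and the opening radius is given in pixels here rather than micrometres:

```python
import numpy as np
from scipy import ndimage

def tissue_mask(rgb, thr_frac=0.05, open_radius=4):
    """Detect the tissue region: the per-pixel edge strength is the
    gradient magnitude of the 3D RGB color vector, the raw mask keeps
    pixels above 5% of the maximum gradient, and a morphological
    opening with a disk smooths the contour (radius in pixels here)."""
    g = rgb.astype(np.float64)
    gy = np.gradient(g, axis=0)
    gx = np.gradient(g, axis=1)
    edge = np.sqrt((gy ** 2 + gx ** 2).sum(axis=-1))   # color-vector gradient
    raw = edge > thr_frac * edge.max()
    yy, xx = np.mgrid[-open_radius:open_radius + 1, -open_radius:open_radius + 1]
    disk = (yy ** 2 + xx ** 2) <= open_radius ** 2
    return ndimage.binary_opening(raw, structure=disk)

def fibrosis_percentage(fibrosis_mask, tissue):
    """Fibrosis score: fibrotic pixels over total tissue pixels."""
    return 100.0 * (fibrosis_mask & tissue).sum() / max(int(tissue.sum()), 1)
```

The ratio of the segmented green channel to this tissue mask yields the fibrosis percentage reported by the algorithm.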

Performance Metrics
A comparison between manual and automatic masks was carried out to assess RENFAST's performance in the segmentation of kidney blood vessels and fibrosis. Manual annotations of blood vessels were generated using a custom graphical user interface based on MATLAB. Since fibrosis segmentation can be a long and demanding task, we designed a semi-automatic pipeline to help the pathologist during the generation of the manual mask (Appendix B). Several pixel-based metrics, namely balanced accuracy, precision, recall, and F1 score, were evaluated for both blood vessel and fibrosis segmentation. Balanced accuracy is a common metric used in segmentation problems to deal with imbalanced datasets (i.e., strongly unequal numbers of positive and negative pixels); it is calculated as the average of the correct predictions of each class taken individually. Precision quantifies the false detection of ghost shapes; recall quantifies the missed detection of ground-truth objects; and the F1 score is defined as the harmonic mean of precision and recall.
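From these definitions, the pixel-based metrics follow directly from the confusion counts of the two masks. The sketch below is ours (function name and dictionary keys are illustrative):

```python
import numpy as np

def pixel_metrics(pred, gt):
    """Pixel-based metrics from two boolean masks (automatic vs. manual).
    Balanced accuracy averages the per-class recalls, which makes it
    robust to the strong background/foreground imbalance of histology
    masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)          # correctly detected foreground
    tn = np.sum(~pred & ~gt)        # correctly detected background
    fp = np.sum(pred & ~gt)         # ghost shapes
    fn = np.sum(~pred & gt)         # missed ground-truth pixels
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    bal_acc = (recall + specificity) / 2
    return {"bal_acc": bal_acc, "precision": precision,
            "recall": recall, "f1": f1}
```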
Accurate segmentation of blood vessel borders is fundamental for a correct evaluation of vascular damage. For this reason, we also evaluated the Dice coefficient (DSC) and the Hausdorff distance for all the true-positive vascular structures. Specifically, we computed the 95th percentile Hausdorff distance (HD95), defined as the 95th percentile of the distances from each point of one set (manual boundary) to the nearest point of the other set (automatic boundary). Being based on the 95th percentile rather than the maximum distance, this metric is more robust to a very small subset of outliers. During fibrosis assessment, the pathologist computes the ratio between fibrotic tissue and the whole tissue area. For each image, the absolute error (AE) between manual and automatic estimation was calculated as

AE = |Fibrosis_MANUAL − Fibrosis_RENFAST|

where (·)_MANUAL and (·)_RENFAST denote the manual and the automatic annotations, respectively.
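Under these definitions, HD95 and AE can be sketched as follows (SciPy's `cdist` computes the pairwise boundary distances; the function names are ours):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(boundary_a, boundary_b):
    """Symmetric 95th-percentile Hausdorff distance between two point
    sets (N x 2 arrays of boundary coordinates). Taking the 95th
    percentile instead of the maximum discards the few outlier points
    that dominate the classic Hausdorff distance."""
    d = cdist(boundary_a, boundary_b)   # all pairwise distances
    d_ab = d.min(axis=1)                # each point of A -> nearest point of B
    d_ba = d.min(axis=0)                # each point of B -> nearest point of A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

def absolute_error(fib_manual, fib_auto):
    """AE between manual and automatic fibrosis percentages."""
    return abs(fib_manual - fib_auto)
```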

Results
The automatic results provided by the RENFAST method are compared herein both with manual annotations and with previously published works. For blood vessel segmentation, we compared our algorithm with the one proposed by Bevilacqua et al. [8], while we used the methods published by Tey et al. [10] and Fu et al. [11] as benchmarks for interstitial fibrosis segmentation. As the datasets and manual annotations of these works are not publicly available, all the described methods were applied to the same dataset used in this paper. The processing was performed on a custom workstation (Turin, Italy) with a 3.5 GHz 10-core CPU and 64 GB of RAM.

Blood Vessel Detection
Both pixel-based metrics (balanced accuracy, precision, recall, F1 score) and object-based metrics (DSC, HD95) were calculated to assess the performance of the RENFAST algorithm. To demonstrate the superiority of our strategy, we also evaluated the results obtained using a simple two-class CNN (the same deep network of Figure 2, but trained on background vs. vessel) and a three-class CNN (the same deep network of the RENFAST algorithm, but without the post-processing of Section 2.4). Tables 2 and 3 summarize the metrics calculated for blood vessel detection. Regarding pixel-based metrics, our method achieved the best balanced accuracy, recall, and F1 score for both the TRAIN and TEST sets, with a large margin over the state-of-the-art techniques. Even more interesting, the post-processing adopted for blood vessel segmentation further increased the overall performance of the single deep network (three-class CNN vs. RENFAST). The combination of the CNN probability map and cellular structure segmentation increased the DSC by up to 14.8% with respect to the other methods. The accurate segmentation of blood vessel boundaries is also demonstrated by the lower HD95 value. Figure 7 shows a visual comparison between RENFAST and previously published works. Our approach managed to separate and correctly outline the boundaries of the blood vessels.

Fibrosis Segmentation
The same pixel-based metrics employed in the last section were calculated to evaluate the performance of RENFAST in fibrosis quantification (Table 4). To demonstrate the importance of the stain normalization as a preprocessing step, we also evaluated the performance of our algorithm without normalizing the images ("No norm.").  As shown in Table 4, our strategy outperformed all the previously published methods. In addition, the stain normalization (Section 2.2) allowed a further increase in the overall performance of our method (No norm. vs. RENFAST algorithm). Finally, we evaluated the absolute errors (AEs) between the manual and automatic fibrosis quantification (Table 5). In both the TRAIN and TEST datasets, the RENFAST algorithm achieved the lowest average AEs (2.42% and 2.32%), with maximum AEs of 11.17% and 7.81%, respectively. Specifically, the maximum AE obtained by our method was 3-5 times lower compared to state-of-the-art techniques [10,11]. Figure 8 shows some kidney fibrosis segmentation results.

Whole Slide Analysis
Since arteriosclerosis and fibrosis are generally assessed on whole slide images (WSIs), we extended our strategy to entire biopsies using a sliding window approach. To evaluate the degree of arterial sclerosis and fibrosis, an expert pathologist takes at least 20 min per patient, while the RENFAST algorithm is able to process an entire WSI in about 2 min. Figure 9 illustrates the results obtained using our algorithm on two different kidney biopsies stained with PAS (vessel detection) and TRIC (fibrosis segmentation). The introduction of an automatic algorithm within the clinical workflow can speed up the diagnostic process and provide more accurate data to assess kidney transplantability.

Figure 8. Visual performance comparison between previously published papers for fibrosis detection and the RENFAST algorithm. The fibrosis mask is superimposed on the original image, while the tissue contour is highlighted in orange.


Discussion and Conclusions
Advances in transplant patient management are steadily increasing with improved clinical data and outcomes, requiring proportional development of the technical procedures routinely applied. However, the histopathological evaluation of preimplantation donor kidney biopsies has not varied, despite the increasing demand for pathology reports.
In this study, we present a fast and accurate method for the segmentation of kidney blood vessels and fibrosis in histological images. The detection of vascular structures and interstitial fibrosis is a real challenge due to the stain variability that affects the PAS and TRIC images, combined with high variation in the shape, size, and internal architecture of the renal structures. Thanks to the stain normalization step, our approach is capable of automatically detecting fibrotic areas and blood vessels in images with different staining intensity. The proposed algorithm was developed and tested on 350 PAS images for blood vessel segmentation and on 300 TRIC stained images for the detection of renal fibrosis. The results were compared with both manual annotations and previously published methods [8,10,11].
In blood vessel detection, the RENFAST algorithm achieved the best balanced accuracy, recall, and F1 score compared to the other techniques. More importantly, our strategy obtained the best DSC and HD95 in the segmentation of vessel boundaries (Table 3). This is fundamental, as accurate segmentation of the blood vessel borders is mandatory for the correct evaluation of vascular damage. This high performance is mainly due to the combination of CNN segmentation with ad hoc post-processing specifically designed to detect the contour of each blood vessel. By segmenting lumen regions and cell nuclei, the RENFAST algorithm manages to delete almost all the false-positive shapes detected by the CNN. Our strategy is also capable of segmenting small blood vessels and correctly separating touching structures (Figure 7).
On TRIC stained images, the RENFAST algorithm allows us to quantify the interstitial fibrosis. The proposed approach showed high accuracy in segmenting fibrotic tissue and outperformed all the previously published methods (Table 4). Compared with the current state-of-the-art techniques, our method obtained the lowest absolute error (around 2.4%) in the estimation of fibrosis percentage.

Appendix A

The original image is mirrored around its boundaries to obtain an extended image, which is divided into consecutive windows. The deep network is applied to each 512 × 512 window, and only the center of each prediction is kept for the creation of the initial softmax. This operation yields a heat map of size 768 × 768, which is further center cropped to obtain the final softmax with the same size as the input image. The final softmax can be considered as an RGB image, where the red layer contains the probability for each pixel of belonging to the "blood vessel" class, while the green layer represents the probability for each pixel of belonging to the "blood vessel boundaries" class.

Figure A1. Procedure for the creation of the final CNN softmax. The original image is mirrored around the boundaries to obtain the extended image. Then, a sliding window approach is employed to classify each patch, and only the center of each prediction is kept to build the final softmax.
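The mirrored sliding-window stitching can be sketched as follows for a single probability channel. This is a minimal re-implementation under the stated assumptions (mirror padding, only the center of each prediction kept); `stitched_softmax`, `predict`, and the default sizes are illustrative names, and for a multi-class softmax the same stitching is applied per channel.

```python
import numpy as np

def stitched_softmax(image, predict, win=512, keep=256):
    """Sliding-window inference with mirror padding. Only the central
    keep x keep region of every win x win prediction is retained,
    since the borders of a CNN prediction are the least reliable.
    `predict` is any function mapping a (win, win) patch to a
    (win, win) probability map."""
    pad = (win - keep) // 2
    h, w = image.shape
    ny = int(np.ceil(h / keep))
    nx = int(np.ceil(w / keep))
    # mirror the image around its boundaries so every window is full
    ext = np.pad(image, ((pad, pad + ny * keep - h),
                         (pad, pad + nx * keep - w)), mode="reflect")
    out = np.zeros((ny * keep, nx * keep))
    for i in range(ny):
        for j in range(nx):
            y, x = i * keep, j * keep
            pred = predict(ext[y:y + win, x:x + win])
            # keep only the center of the prediction
            out[y:y + keep, x:x + keep] = pred[pad:pad + keep,
                                               pad:pad + keep]
    return out[:h, :w]  # crop back to the input size
```

With an identity `predict`, the stitched output reproduces the input exactly, which is a convenient sanity check that the padding, cropping, and offsets are consistent.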

Appendix B
The semi-automatic pipeline used to generate the manual annotation of fibrotic areas was developed in Fiji [20]. Fiji is a Java-based software product with several plugins that facilitate medical image analysis. The proposed pipeline consists of seven steps: (i) image loading; (ii) manual definition of a ROI (region of interest) for each of the three colors (white, green, red); (iii) RGB color averaging of each ROI to obtain the three stain vectors; (iv) color deconvolution using the stain vectors previously found; (v) manual thresholding on the green channel; (vi) small particle removal; and (vii) complementation of the binary mask.
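Step (iv) can be sketched with the standard Ruifrok-Johnston optical-density formulation, which is the idea behind Fiji's Colour Deconvolution plugin; the exact formulation used in the pipeline is an assumption on our part, and the function name is ours. Note that a perfectly white stain vector has zero optical density, so the "white" ROI average must be slightly tinted for the unmixing matrix to be invertible.

```python
import numpy as np

def color_deconvolution(rgb, stain_vectors):
    """Ruifrok-Johnston style color deconvolution. `stain_vectors` is a
    3 x 3 matrix whose rows are the average RGB colors of the three
    user-drawn ROIs (white, green, red), as in steps (ii)-(iii) of the
    pipeline. Returns the per-pixel stain concentrations."""
    rgb = np.asarray(rgb, dtype=float)
    # Beer-Lambert: convert intensities to optical density (OD)
    od = -np.log10((rgb + 1.0) / 256.0)
    # express each stain vector in OD space and normalize to unit length
    m = -np.log10((np.asarray(stain_vectors, dtype=float) + 1.0) / 256.0)
    m = m / np.linalg.norm(m, axis=1, keepdims=True)
    # unmix: solve od = c @ m for the concentrations c
    c = od.reshape(-1, 3) @ np.linalg.inv(m)
    return c.reshape(rgb.shape)
```

Thresholding the resulting green-stain concentration channel then corresponds to step (v) of the pipeline.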