Article

High-Level Hessian-Based Image Processing with the Frangi Neuron

AGH University of Krakow, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, Department of Applied Computer Science, al. A. Mickiewicza 30, 30-059 Krakow, Poland
* Author to whom correspondence should be addressed.
Electronics 2023, 12(19), 4159; https://doi.org/10.3390/electronics12194159
Submission received: 4 September 2023 / Revised: 21 September 2023 / Accepted: 5 October 2023 / Published: 7 October 2023
(This article belongs to the Special Issue Recent Advances in Computer Vision: Technologies and Applications)

Abstract

The Frangi neuron proposed in this work is a complex element that allows high-level Hessian-based image processing. Its adaptive parameters (weights) can be trained using a minimal amount of training data. In our experiment, we showed that just one image is enough to optimize the values of the weights. An intuitive application of the Frangi neuron is to use it in the image segmentation process. In order to test the performance of the Frangi neuron, we used diverse medical datasets on which second-order structures are visualized. The Frangi network presented in this paper, trained on a single image, proved to be significantly more effective than the U-net trained on the same dataset. For the datasets tested, the network performed better than the U-net and the Frangi algorithm as measured by the area under the receiver operating characteristic curve (ROC AUC). Moreover, the Frangi network performed several times faster than the non-GPU implementation of the Frangi filter. There is nothing to prevent the Frangi neuron from being used as a component of any other network for processing two-dimensional images, for example, to detect certain second-order features in them.

1. Introduction

Neural networks that utilize convolutional and MaxPooling layers dominate modern multi-scale digital image processing. This is an effect of the high approximation capabilities and relative simplicity of such architectures. To prepare a network to solve a particular problem, a sufficiently large training set must be gathered so that the network parameters can be trained. In some problems, gathering a sufficiently large dataset is difficult and expensive, and sometimes even impossible. This is especially true for medical data, where even a large medical unit may not have abundant data for a certain disease. Moreover, data augmentation may not be sufficient if the collected data are not sufficiently representative. It should not be forgotten that before the era of deep neural networks, there were many mathematical models that could be used effectively for signal processing. They require far fewer adaptive parameters to operate effectively than is the case with convolutional neural networks; the trade-off is the computational complexity and sophistication of these algorithms. On the other hand, with the use of modern computational frameworks, many complex algorithms can be implemented as elements of neural networks and then used as integral parts of network structures that learn along with the overall architecture.
Vessel enhancement filters (also known as vesselness filters) have been part of angiographic image processing for many years [1]. However, the application of these methods is not limited to blood vessels; successful applications of such filters have also been reported in detecting facial wrinkles [2] or saliva ferning prediction [3]. Blood vessel filtering uses a variety of methods. Among the most popular are deep neural networks [4], particularly U-net-based architectures [5], directional convolutional kernels [6], various approaches to region growing [7] and applied mathematical morphology [8]. In recent years, there have been several proposals to improve the response of Hessian-based methods by, for example, using swarm optimization [9], genetic programming and other approaches [10,11,12,13,14,15,16,17].
In this work, we propose so-called Frangi neurons, which allow filtering second-order structures from digital images and whose parameters can be optimized with a minimal training set consisting of even a single image. We show that the use of a Frangi neuron in a properly constructed neural network facilitates better performance in the task of segmenting second-order structures than the state-of-the-art U-net [18] and allows training the model using only one training image. Frangi filtering [19] is a commonly used method, most often in situations where tubular or linear structures such as blood vessels need to be segmented. So-called vesselness filters [20] are used in liver [21], retinal [22], brain [23], finger [24], face wrinkle [25], and fabric defect [26] imaging. Frangi filtering is used effectively in optoacoustic (photoacoustic) imaging [13], computed tomography [27], magnetic resonance imaging [23] and simple digital photography [25]. In the literature, methods have been proposed to optimize the parameters of the Frangi filter; for example, work [28] uses an ordered logit model for this purpose. In our work, these parameters are optimized while training the neural network, and no additional optimization procedure is needed.
The construction of a neural network inspired by Frangi’s algorithm is described in paper [29]. In practice, it is a convolutional network whose layers are initialized with parameters so that they compute the Hessian of the input image; those parameters are then optimized during training. In total, there are nine kernels and six parameters to update during training. In our approach, there is no restriction on the number of parallel Frangi blocks, and the Hessian calculation is fixed. In this way, we obtain a significantly larger filter response to second-order structures with different diameters. Furthermore, in this work, we define a single Frangi neuron that can stand alone as a component of any neural network, not just one particular architecture.
Our proposed method, like one-shot semantic segmentation networks, allows tuning the parameters to adapt to a new class of images. However, unlike the models proposed, for example, in works [30,31], the segmentation network is not based on convolutional layers but on a ready-made Hessian-based model. This results in a significant reduction in network tuning time at the expense of reduced flexibility of the segmentation model.
The Hessian-based second-order structure detection algorithm is, in fact, a well-known approach. An important novelty that we propose in this paper is enabling the use of this technique in the form of a dedicated neural layer. Recent developments in the design of new types of deep learning architectures have made it possible to directly optimize the parameters of such filters in a deep learning framework. Lately, other layers using efficient signal processing algorithms have been defined, such as, for example, Zernike layers [32,33,34] or wavelet transform layers [35]. The theoretical principles of these methods are well known, as in the case of the Frangi filter, and creating neural network layers based on them enables their effective use in deep learning models. We hope that a publication showing the effectiveness and possible applications of the second-order Hessian approach encapsulated in a neural layer will be useful to the scientific community.

2. Materials and Methods

In this section, we present the Frangi neuron mathematical model and the datasets on which we test its effectiveness in practice.

2.1. Frangi Filter

One of the most influential methods for multiscale tubular structure filtering was described by Frangi et al. in [19]. This method uses the analysis of the second partial derivatives of an image (the Hessian) to detect local structures in the image. In the remainder of this paper, we consider the two-dimensional case. The Frangi filter follows Algorithm 1.
Algorithm 1: Frangi filtering algorithm.
Calculation of eigenvalues from a symmetric matrix of size 2 × 2 can be performed according to Equation (1).
$$\lambda_{1} = \frac{H_{xx} + H_{yy} + \sqrt{(H_{xx} - H_{yy})^{2} + 4 H_{xy}^{2}}}{2}, \qquad \lambda_{2} = \frac{H_{xx} + H_{yy} - \sqrt{(H_{xx} - H_{yy})^{2} + 4 H_{xy}^{2}}}{2}.$$
The computations performed in the loop of Algorithm 1, each of which uses a different value of σ from the set Σ, are independent of each other and can be performed in parallel. Only the last operation, R = max(R, VI_σ), aggregates the partial results via an element-wise (pixel-wise) maximum. This fact can be used to build a parallel neural network architecture inspired by the operation of Frangi’s original algorithm.
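To make the per-scale computation concrete, the following minimal NumPy sketch implements one reading of Algorithm 1 together with Equation (1). It is an illustration only: it assumes bright structures on a dark background, approximates the Hessian with repeated finite differences rather than Gaussian-derivative kernels, and uses illustrative default values of β and γ; it is not the implementation released with this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def frangi_2d(image, sigmas, beta=0.5, gamma=15.0):
    """Minimal 2D Frangi vesselness sketch (bright structures on dark background)."""
    response = np.zeros_like(image, dtype=float)
    for sigma in sigmas:
        # Scale-normalized Hessian approximated with finite differences
        smoothed = gaussian_filter(image.astype(float), sigma)
        Hxx = np.gradient(np.gradient(smoothed, axis=1), axis=1) * sigma ** 2
        Hyy = np.gradient(np.gradient(smoothed, axis=0), axis=0) * sigma ** 2
        Hxy = np.gradient(np.gradient(smoothed, axis=1), axis=0) * sigma ** 2
        # Eigenvalues of the symmetric 2x2 Hessian, Equation (1)
        root = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
        lam1 = (Hxx + Hyy + root) / 2.0
        lam2 = (Hxx + Hyy - root) / 2.0
        # Sort so that |lam1| <= |lam2|
        swap = np.abs(lam1) > np.abs(lam2)
        lam1, lam2 = np.where(swap, lam2, lam1), np.where(swap, lam1, lam2)
        Rb = np.abs(lam1) / (np.abs(lam2) + 1e-10)      # blob-ness ratio
        S = np.sqrt(lam1 ** 2 + lam2 ** 2)              # second-order structureness
        V = np.exp(-Rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * gamma ** 2)))
        V[lam2 > 0] = 0.0                               # suppress dark-on-bright responses
        response = np.maximum(response, V)              # pixel-wise aggregation, R = max(R, VI_sigma)
    return response
```

Because each iteration depends only on its own σ, the loop body can be dispatched in parallel, and only the final pixel-wise maximum aggregation is sequential.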

2.2. Frangi Neuron

Let us define a Frangi neuron as an indivisible block of operations that takes a tensor of size (n, m) as its input and returns a tensor of identical size. The operations performed by the Frangi neuron are identical to the operations performed in an individual loop iteration of Algorithm 1. Each loop iteration of Algorithm 1 uses three parameters: σ, β, and γ. In the case of the Frangi algorithm, these are invariant and are defined before the filter is run. However, we can optimize them to achieve the desired effect by minimizing the network’s loss function. Each Frangi neuron can thus independently have three parameters subject to optimization: σ, β, and γ. We now discuss the practicality of optimizing each parameter in the neural network learning process, given a potential implementation of the Frangi neuron.
Parameter σ is a parameter of the Gaussian kernel. Its value affects the diameter of the structures detected by the filter. In practice, the value of σ determines the window size k_dim of the Gaussian filter, for example, according to Equation (2) (see the source code of [36]).
$$k_{dim} = 2 \lfloor 4\sigma + 0.5 \rfloor + 1,$$
where ⌊·⌋ denotes rounding down (the floor operator). For example, σ = 2 gives k_dim = 2⌊8.5⌋ + 1 = 17, while σ = 3 gives k_dim = 25. In the case of processing data in a single-instruction, multiple-data pipeline, this dependency makes the parameter σ troublesome, because the implementation of the convolution filter, e.g., in Tensorflow, requires a predefined filter window size. For this reason, the Frangi neuron architecture proposed in this work has a fixed value of σ that is not subject to optimization. Parameters β and γ are threshold values for blob-like structures and the second-order structureness ratio. They are easy to optimize because changing them does not change the structure inside the neuron. The values of β and γ must be different from zero, and there is no point in considering their negative values. Since they appear in the denominators of the exponents of the exponential functions, there is a certain range of values in which they affect the performance of the filter. For this reason, we impose constraints in the optimization process: 0.05 ≤ β < 10 and 0.05 ≤ γ < 50. Outside this range of values, the Frangi algorithm may become stuck in a local minimum. The Frangi neuron diagram is shown in Figure 1.
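A minimal sketch of such a layer, written with standard Keras components, is shown below. It illustrates how a fixed σ can coexist with trainable, clipped β and γ; the class name, initial values and the finite-difference Hessian are illustrative simplifications relative to the implementation available in the repository linked in Section 3.

```python
import tensorflow as tf
import tensorflow_addons as tfa


class FrangiNeuron(tf.keras.layers.Layer):
    """Simplified Frangi neuron: fixed sigma, trainable beta and gamma."""

    def __init__(self, sigma, **kwargs):
        super().__init__(**kwargs)
        self.sigma = float(sigma)                        # fixed, not optimized
        self.k_dim = 2 * int(4 * self.sigma + 0.5) + 1   # window size, Equation (2)

    def build(self, input_shape):
        # Trainable thresholds, clipped to the ranges discussed above
        self.beta = self.add_weight(
            name="beta", shape=(),
            initializer=tf.keras.initializers.Constant(0.5),
            constraint=lambda w: tf.clip_by_value(w, 0.05, 10.0))
        self.gamma = self.add_weight(
            name="gamma", shape=(),
            initializer=tf.keras.initializers.Constant(15.0),
            constraint=lambda w: tf.clip_by_value(w, 0.05, 50.0))

    def call(self, x):
        # x: (batch, height, width, 1) grayscale images
        smoothed = tfa.image.gaussian_filter2d(
            x, filter_shape=(self.k_dim, self.k_dim), sigma=self.sigma)
        # Scale-normalized Hessian approximated by repeated image gradients
        dy, dx = tf.image.image_gradients(smoothed)
        dyy, _ = tf.image.image_gradients(dy)
        dxy, dxx = tf.image.image_gradients(dx)
        hxx = dxx * self.sigma ** 2
        hyy = dyy * self.sigma ** 2
        hxy = dxy * self.sigma ** 2
        # Eigenvalues of the symmetric 2x2 Hessian, Equation (1)
        root = tf.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
        lam_a, lam_b = (hxx + hyy + root) / 2.0, (hxx + hyy - root) / 2.0
        small = tf.where(tf.abs(lam_a) <= tf.abs(lam_b), lam_a, lam_b)
        large = tf.where(tf.abs(lam_a) <= tf.abs(lam_b), lam_b, lam_a)
        rb = tf.abs(small) / (tf.abs(large) + 1e-10)     # blob-ness ratio
        s = tf.sqrt(small ** 2 + large ** 2)             # structureness
        v = tf.exp(-rb ** 2 / (2.0 * self.beta ** 2)) * \
            (1.0 - tf.exp(-s ** 2 / (2.0 * self.gamma ** 2)))
        # Keep only bright-on-dark responses (assumed polarity)
        return tf.where(large < 0.0, v, tf.zeros_like(v))
```

Because β and γ only enter through differentiable exponential terms, gradients with respect to them flow through any loss attached downstream, which is what makes the neuron trainable.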

2.3. Example Application: Network Using Frangi Neurons for Image Segmentation (Frangi Network)

An intuitive application of the Frangi neuron is to use it in an architecture for the detection of longitudinal structures of varying diameters. We present our proposed network architecture of this type in Figure 2. This network consists of m Frangi neurons, each initialized with a different σ value from the set defined in Equation (3). The range of σ values depends on the range of diameters of the structures we want to detect. In practice, in the remainder of this paper, we used the set of σ values defined according to the equation
Σ = [1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, 2.75, 3, 3.25, 3.5, 3.75, 4, 5, 6, 7, 8, 9, 10],
which proved to be very effective for the datasets tested. Frangi neurons are shown as green circles in Figure 2.
In Algorithm 1, each loop result with a given σ has an identical weight. In our approach, we added a single block after each Frangi neuron to scale the neuron’s response; we call it a scaling neuron. Its operation is to multiply the incoming signal by a value I in the interval [0, 1]. The value of I is subject to optimization during network training. These blocks are shown in Figure 2 as yellow circles. The next step is to aggregate the results of the individual filters by computing the pixel-wise maximum and scaling the results to the interval [0, 1]. Values close to 0 indicate no detection of second-order structures; values close to 1 indicate a high probability that second-order structures are present. The network can be trained using a stochastic gradient descent method, for example, Adam [37]. The Frangi neuron implements complex functionality and hence needs a very small training set to optimize its parameters. In fact, the Frangi network can be trained using only one (!) training image. In our implementation, we used an augmentation of the training set inspired by the augmentation used in the U-net [18]: random cropping, random brightness, contrast and saturation modification, random rotation, shearing, horizontal and vertical flipping, shifting and zooming. We used binary crossentropy as the loss function. DICE, IoU, as well as binary crossentropy [38] are commonly used to train neural networks for segmentation. The modern scientific literature suggests that crossentropy-based loss functions should be used first as a target function, as they are universal for various segmentation problems, especially for medical images [38,39]. A minimal sketch of such a network is given below.
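The sketch below shows one way such a network can be assembled from m parallel Frangi neurons, per-neuron scaling weights and a pixel-wise maximum. It assumes the FrangiNeuron layer sketched in Section 2.2 and uses a clipped Dense weight as the scaling neuron; function names and details are illustrative rather than the exact released architecture.

```python
import tensorflow as tf
# Assumes the FrangiNeuron layer sketched in Section 2.2 is available.


def build_frangi_network(sigmas, input_shape=(None, None, 1)):
    """Sketch of the Frangi network: parallel Frangi neurons, scaling, pixel-wise max."""
    inputs = tf.keras.Input(shape=input_shape)
    branches = []
    for s in sigmas:
        v = FrangiNeuron(sigma=s)(inputs)
        # Scaling neuron: multiply the response by a trainable value I in [0, 1]
        scaled = tf.keras.layers.Dense(
            1, use_bias=False, kernel_initializer="ones",
            kernel_constraint=lambda w: tf.clip_by_value(w, 0.0, 1.0))(v)
        branches.append(scaled)
    # Pixel-wise maximum; with vesselness and scaling weights in [0, 1],
    # the output already lies in [0, 1] and can feed binary crossentropy
    outputs = tf.keras.layers.Maximum()(branches)
    return tf.keras.Model(inputs, outputs)


sigmas = [1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, 2.75, 3,
          3.25, 3.5, 3.75, 4, 5, 6, 7, 8, 9, 10]
model = build_frangi_network(sigmas)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-2), loss="binary_crossentropy")
```

The Adam learning rate and the binary crossentropy loss match the training setup described in Section 3; the image augmentation pipeline is omitted from the sketch for brevity.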

2.4. Datasets

In order to test the performance of the Frangi neuron, we used diverse medical datasets on which second-order structures are visualized. The following datasets were used in other research work, and their licenses allow us to use them in our work. We did not perform any research on humans to gather these data. The data do not contain personally identifiable information or offensive content.

2.4.1. Retinopathy (Retina)

The first dataset that we used is a retinopathy database [40] that was gathered to enable comparative studies on the segmentation of blood vessels in retinal images. The dataset consists of 40 images in total: 20 images are in the training group, while the remaining 20 are in the test group. Both the training and test groups have reference manual segmentations and binary masks that indicate only the region containing the eyeball. Only the masked area is evaluated during segmentation quality testing. The images and their manual segmentations are made publicly available at https://drive.grand-challenge.org/ (Access date: 4 September 2023).

2.4.2. X-ray Coronary Artery Angiography (Coronary Artery)

The second dataset contains four X-ray coronary angiography images [41]. The images show blood vessels of various sizes. A high level of background noise as well as the blood vessels themselves are evident. A manual segmentation is provided for each image. Since there is no predefined split into training and test sets, training on this dataset is performed by selecting a subset of it to be used for training. In terms of image quality as well as diameter variation, this dataset is much more difficult to segment than the retina one.

2.4.3. Brain MRA (Brain)

The last dataset contains five images of brain magnetic resonance angiography (MRA) [42]. Images depict vessels of varying thickness. MRA reconstructions show different axial slices of human brain. Manual segmentation is provided for each image. Since there is no separation between training and test set, training on this set is performed by selecting a subset of it to be used for training.

3. Results

We implemented our solution using Python 3.8, tensorflow 2.8.0, keras 2.8.0 and tensorflow-addons 0.16.1 (for Gaussian smoothing). The source code of the proposed algorithm can be downloaded from the GitHub repository https://github.com/browarsoftware/FrangiNeuron (Access date: 4 September 2023). We tested the performance of the Frangi neuron within the Frangi network (see Section 2.3) by comparing it to the Frangi filter and the U-net. We used the Frangi filter implementation from the scikit-image package 0.19.2. For the Frangi filter, we set the σ values according to Equation (4).
Σ = [1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, 2.75, 3, 3.25, 3.5, 3.75, 4, 5, 6, 7, 8, 9, 10].
We obtained the U-net implementation from the GitHub repository https://github.com/jocicmarko/ultrasound-nerve-segmentation (Access date: 4 September 2023). This source code is available under the MIT licence. Both the Frangi network and the U-net were trained on a single image from the training set for 200 epochs. We repeated the training for every single image in the set, and then the remaining images were used to test the results. For training purposes, we used the Adam optimizer [37,43] with a learning rate equal to $10^{-2}$ and a batch size equal to 16. In all cases, only one image was used for training, and the rest were used for validation (one-shot learning). We applied cross-validation, so after the Frangi network and the U-net were trained and validated, the network weights were randomly initialized, another image was selected for training, and the rest of the dataset was selected for validation. We performed 20 trainings and validations for the retina dataset, 4 for the coronary artery dataset and 5 for the brain dataset. The quality of segmentation was evaluated using the Area Under the Receiver Operating Characteristic Curve (ROC AUC) measure, which is an appropriate metric for evaluating this type of task. We used the scikit-learn 1.0.2 package for this. Our implementation and the data used are available in the https://github.com/browarsoftware/FrangiNeuron repository (accessed on 4 September 2023). The calculations were performed on an Intel Core i7 3.00 GHz CPU, 64 GB RAM, NVIDIA GeForce RTX 2060 GPU, Windows 10 OS.
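The one-shot cross-validation protocol described above can be summarized with the following sketch. Function and variable names are illustrative, the augmentation pipeline is omitted for brevity, and the full experiment script is available in the repository.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score


def one_shot_cross_validation(images, masks, build_model, epochs=200, batch_size=16):
    """Train on one image, validate on the rest, repeat for every image.

    images, masks: numpy arrays of shape (N, H, W, 1) with masks in {0, 1}.
    build_model: callable returning a freshly (randomly) initialized Keras model.
    """
    scores = []
    for i in range(len(images)):
        model = build_model()                       # fresh random initialization
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-2),
                      loss="binary_crossentropy")
        # One-shot training: a single image (augmentation omitted in this sketch)
        model.fit(images[i:i + 1], masks[i:i + 1],
                  epochs=epochs, batch_size=batch_size, verbose=0)
        # Validation on every remaining image of the dataset
        val_idx = [j for j in range(len(images)) if j != i]
        preds = model.predict(images[val_idx])
        scores.append(roc_auc_score(masks[val_idx].ravel(), preds.ravel()))
    return float(np.mean(scores))
```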
The results of the segmentation performance of each method can be seen in Figure 3, Figure 4 and Figure 5 and in Table 1, Table 2, Table 3 and Table 4. Values in the tables are averaged. The last row of Table 1 contains the average execution time of each algorithm on the retina set, given in seconds. We calculated Sensitivity, Specificity, Intersection over Union (IoU/Jaccard), Volumetric Overlap Error (VOE) [44] and F1-score/Dice to make the results more comprehensive. We found these metrics the most appropriate. Other popular scores, such as the Average Symmetric Surface Distance (ASSD), which measures the average of all distances for every point from one object to the other and vice versa, are suitable for region or volume segmentation rather than for tubular structure segmentation [45,46].
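For reference, the coefficients reported in Tables 2–4 can be computed from a binarized prediction as in the short sketch below; the definitions are the standard ones, and the 0.5 binarization threshold is an illustrative choice rather than a value stated in this paper.

```python
import numpy as np


def segmentation_metrics(pred, truth, threshold=0.5):
    """Standard binary segmentation metrics (Sensitivity, Specificity, IoU, VOE, Dice)."""
    p = np.asarray(pred).ravel() >= threshold
    t = np.asarray(truth).ravel() >= 0.5
    tp = np.sum(p & t)
    tn = np.sum(~p & ~t)
    fp = np.sum(p & ~t)
    fn = np.sum(~p & t)
    iou = tp / (tp + fp + fn)                 # Jaccard index
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "iou": iou,
        "voe": 1.0 - iou,                     # Volumetric Overlap Error
        "dice": 2 * tp / (2 * tp + fp + fn),  # F1-score
    }
```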

4. Discussion

Both the results in Table 1, Table 2, Table 3 and Table 4 and the visualizations in Figure 3, Figure 4 and Figure 5 prove that the Frangi neuron fulfills its role as a detector of second-order structures. Both ROC AUC and the other evaluated metrics consistently show that the proposed network based on Frangi neurons performs better than, and in borderline cases no worse than, the stationary Frangi filter. On the other hand, the qualitative advantage of the obtained segmentations over the U-net architecture is strongly evident. This is not due to any weakness of the U-net, a universal and efficient architecture that is a state-of-the-art solution for segmentation tasks; it is simply due to the fact that model parameter fitting is performed on the basis of a one-element learning set. It can therefore be said that the proposed network based on Frangi neurons belongs to the one-shot segmentation approaches. This reveals the great flexibility of the proposed architecture and may be its advantage over universal models such as the U-net in some specific applications. The results on the retina set are much worse than, for example, in [47], because the U-net was trained on only one image (one-shot learning) and not on the full 20-element training set. The limitation on the size of the training set was intended to show the limitations of models based purely on convolutional filters (U-net-like architectures). The performance of the Frangi filter on all datasets is not as poor as might be expected. As can be seen in Table 2 and Table 3, the specificity, IoU and F1-score values are quite similar to those of the Frangi network. In the case of the Frangi filter, however, the sensitivity value is lower, which means that the filter without the optimization procedure performs worse at identifying true positives. On the other hand, the Frangi filter on the tested datasets performs better than the U-net if this deep neural network is not trained on a sufficiently large dataset. A single image in the training set is enough for the Frangi network using the Frangi neuron to optimize its weights to detect the relevant image fragments. Compared to the Frangi filter alone, the results returned by the network have much stronger contrast. Frangi filtering has the disadvantage of highlighting the boundaries of an area of solid colour as the location of second-order structures. This is clearly visible in the retina set (see Figure 3). This phenomenon cannot be eliminated in a network architecture using only Frangi neurons because it is a feature of the Hessian-based approach. If the filtered area does not have such clear boundaries, this effect does not occur, as can be seen, for example, in the coronary artery dataset (see Figure 5). In Figure 3 and Figure 4, the proposed Frangi neuron model shows a bias towards segmenting the contour. This is typical behaviour of any method that uses edge detectors. In practice, if an image is known to have a globally definable edge separating the area of interest from the background, it is eliminated from the segmentation process by applying an image mask. Such an edge removal procedure was also used in [40]. Thus, this area does not affect the physician’s or clinician’s decision, since its removal is a well-known practice.
For the datasets we considered, a single image was insufficient for the U-net to generalize the segmentation process, despite the fact that it uses image augmentation analogous to that of the Frangi network. Trained on a single image, the U-net either removes too many parts of the image or leaves fractions of the image that do not contain the desired structures. The ROC AUC of the Frangi network has the largest (best) values of all tested methods for all datasets. The execution time of the U-net is about 10% shorter than that of the Frangi network. The Frangi network performs more than seven times faster than the Frangi filter using the same Σ set. We did not optimize the speed of the Frangi neuron because we used the available standard Tensorflow and Keras functions in its implementation, so we assume that its performance can be improved. In summary, the Frangi neuron met our expectations and is an effective component of a network performing segmentation of various types of second-order structures. In most advanced deep neural one-shot network models for image segmentation, the authors use models pretrained on large and comprehensive image datasets [48,49]. This is an important limitation because the most popular image databases, such as ImageNet [48] and COCO [50], do not contain medical modalities. In contrast, solutions based on Hessian matrix analysis are immediately ready to detect tubular structures. In medical practice, we often have a very limited number of medical images presenting the modality of interest to physicians, on which a particular biological structure can be seen. In such cases, even transfer learning is problematic due to the high complexity of the deep learning model and the shortage of training data. The remedy might be the use of image augmentation, which we also applied in this work for the U-net.

5. Conclusions

The neuron proposed in this work is a complex element that allows high-level Hessian-based image processing. Its adaptive parameters (weights) can be trained using a minimal amount of training data. In our experiment, we showed that just one image is enough to optimize the values of the weights. An intuitive application of the Frangi neuron is to use it in the image segmentation process. The Frangi network presented in this paper, trained on a single image, proved to be much more effective than the U-net trained on the same dataset. For the tested datasets, the network achieved the best results of all evaluated methods as measured by ROC AUC, while running several times faster than the non-GPU implementation of the Frangi algorithm. The Frangi network implementation prepared for this work uses a massively GPU-parallelized algorithm; for this reason, it runs several times faster than the non-GPU implementation of the Frangi filter. The U-net, on the other hand, runs on the GPU at a speed comparable to that of the proposed Frangi network. Ultimately, the goal should be to implement every image processing algorithm in a multithreaded fashion if it can be accelerated in this way. There is nothing preventing the Frangi neuron from being used as a part of any other network as a processing element for two-dimensional images, for example, to detect certain second-order features in them. In summary, we proposed a method for the detection of second-order features.
Our proposed method has some limitations caused by the implementation. It would be beneficial to implement the ability to optimize the values in the Σ set. This would allow the use of a much smaller Σ set, which would speed up the computation. We could overcome this problem manually by re-implementing Tensorflow’s two-dimensional Gaussian filtering. Tensorflow’s built-in Gaussian filtering requires a float value of sigma, which cannot be a Tensorflow variable; for this reason, sigma is not directly optimized in the GPU processing pipeline. However, we decided that Tensorflow’s current built-in implementation of Gaussian filtering is sufficient to demonstrate the effectiveness of the proposed method. The second limitation is that the Frangi neuron is less flexible than a typical convolutional architecture. The Frangi neuron is designed to detect certain second-order structures and cannot cope with other types of scenarios. On the other hand, it is an excellent alternative to convolutional networks when we have a small amount of training data, which is very common for medical data, where gathering and manually segmenting the training set is a difficult (due to the small number of available cases) and expensive task. The Frangi neuron is an image processing method dedicated to the segmentation of tubular structures. Its most intuitive application is the detection of various types of biological structures. In practice, a wide variety of blood vessels are often analyzed [51,52,53], including those that form tree-like structures (for example, in the lungs) [54]. The intestine [55,56] and trachea [57,58] also form tubular structures. We hope that the Frangi neuron will find its application in the above-mentioned areas.

Author Contributions

Conceptualization: T.H. and M.P.; methodology: T.H.; software: T.H. and M.P.; validation: T.H. and M.P.; formal analysis: T.H.; investigation: T.H. and M.P.; data curation: T.H.; writing—original draft preparation: T.H. and M.P.; writing—review and editing: T.H. and M.P.; visualization: T.H.; funding acquisition: T.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Source codes can be downloaded from: https://github.com/browarsoftware/FrangiNeuron (accessed on 4 September 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lamy, J.; Merveille, O.; Kerautret, B.; Passat, N. A Benchmark Framework for Multi-Region Analysis of Vesselness Filters. IEEE Trans. Med. Imaging 2022, 41, 3649–3662. [Google Scholar] [CrossRef] [PubMed]
  2. Kim, S.; Yoon, H.; Lee, J.; Yoo, S. Semi-automatic Labeling and Training Strategy for Deep Learning-based Facial Wrinkle Detection. In Proceedings of the 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS), Shenzhen, China, 21–23 July 2022; pp. 383–388. [Google Scholar] [CrossRef]
  3. Covaciu, A.G.; Florea, C.; Szolga, L.A. Microscopic Images Analysis for Saliva Ferning Prediction. In Proceedings of the 2020 International Symposium on Fundamentals of Electrical Engineering (ISFEE), Bucharest, Romania, 5–7 November 2020; pp. 1–5. [Google Scholar] [CrossRef]
  4. Yin, P.; Cai, H.; Wu, Q. DF-Net: Deep fusion network for multi-source vessel segmentation. Inf. Fusion 2022, 78, 199–208. [Google Scholar] [CrossRef]
  5. Ali, O.; Muhammad, N.; Jadoon, Z.; Kazmi, B.M.; Muzamil, N.; Mahmood, Z. A Comparative Study of Automatic Vessel Segmentation Algorithms. In Proceedings of the 2020 3rd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 29–30 January 2020; pp. 1–6. [Google Scholar] [CrossRef]
  6. Zamani, M.; Salkim, E.; Saeed, S.R.; Demosthenous, A. A Fast and Reliable Three-Dimensional Centerline Tracing: Application to Virtual Cochlear Implant Surgery. IEEE Access 2020, 8, 167757–167766. [Google Scholar] [CrossRef]
  7. Rodrigues, E.O.; Conci, A.; Liatsis, P. ELEMENT: Multi-Modal Retinal Vessel Segmentation Based on a Coupled Region Growing and Machine Learning Approach. IEEE J. Biomed. Health Inform. 2020, 24, 3507–3519. [Google Scholar] [CrossRef] [PubMed]
  8. Chakour, E.; Mrad, Y.; Mansouri, A.; Elloumi, Y.; Bedoui, M.; Andaloussi, I.; Ahaitouf, A. Blood vessel segmentation of retinal fundus images using dynamic preprocessing and mathematical morphology. In Proceedings of the 2022 8th International Conference on Control, Decision and Information Technologies (CoDIT), Istanbul, Turkey, 17–20 May 2022; Volume 1, pp. 1473–1478. [Google Scholar] [CrossRef]
  9. Mahapatra, S.; Agrawal, S.; Mishro, P.K.; Pachori, R.B. A novel framework for retinal vessel segmentation using optimal improved frangi filter and adaptive weighted spatial FCM. Comput. Biol. Med. 2022, 147, 105770. [Google Scholar] [CrossRef]
  10. Usman, I.; Almejalli, K.A. Intelligent Automated Detection of Microaneurysms in Fundus Images Using Feature-Set Tuning. IEEE Access 2020, 8, 65187–65196. [Google Scholar] [CrossRef]
  11. Rodrigues, E.O.; Rodrigues, L.O.; Machado, J.H.P.; Casanova, D.; Teixeira, M.; Oliva, J.T.; Bernardes, G.; Liatsis, P. Local-Sensitive Connectivity Filter (LS-CF): A Post-Processing Unsupervised Improvement of the Frangi, Hessian and Vesselness Filters for Multimodal Vessel Segmentation. J. Imaging 2022, 8, 291. [Google Scholar] [CrossRef]
  12. Tian, F.; Li, Y.; Wang, J.; Chen, W. Blood Vessel Segmentation of Fundus Retinal Images Based on Improved Frangi and Mathematical Morphology. Comput. Math. Methods Med. 2021, 2021, 4761517. [Google Scholar] [CrossRef]
  13. Longo, A.; Morscher, S.; Najafababdi, J.M.; Jüstel, D.; Zakian, C.; Ntziachristos, V. Assessment of hessian-based Frangi vesselness filter in optoacoustic imaging. Photoacoustics 2020, 20, 100200. [Google Scholar] [CrossRef]
  14. Yang, J.; Huang, M.; Fu, J.; Lou, C.; Feng, C. Frangi based multi-scale level sets for retinal vascular segmentation. Comput. Methods Programs Biomed. 2020, 197, 105752. [Google Scholar] [CrossRef]
  15. Challoob, M.; Gao, Y.; Busch, A.; Nikzad, M. Separable Paravector Orientation Tensors for Enhancing Retinal Vessels. IEEE Trans. Med. Imaging 2022, 42, 880–893. [Google Scholar] [CrossRef] [PubMed]
  16. Yang, B.; Chen, W.; Luo, H.; Tan, Y.; Liu, M.; Wang, Y. Neuron image segmentation via learning deep features and enhancing weak neuronal structures. IEEE J. Biomed. Health Inform. 2020, 25, 1634–1645. [Google Scholar] [CrossRef] [PubMed]
  17. Ghavami, S.; Bayat, M.; Fatemi, M.; Alizad, A. Quantification of morphological features in non-contrast-enhanced ultrasound microvasculature imaging. IEEE Access 2020, 8, 18925–18937. [Google Scholar] [CrossRef] [PubMed]
  18. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  19. Frangi, A.F.; Niessen, W.J.; Vincken, K.L.; Viergever, M.A. Multiscale vessel enhancement filtering. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI’98, Cambridge, MA, USA, 11–13 October 1998; Wells, W.M., Colchester, A., Delp, S., Eds.; Springer: Berlin/Heidelberg, Germany, 1998; pp. 130–137. [Google Scholar]
  20. Bukenya, F.; Bai, L.; Kiweewa, A. A Review of Blood Vessel Segmentation Techniques. In Proceedings of the 2018 1st International Conference on Computer Applications Information Security (ICCAIS), Riyadh, Saudi Arabia, 4–6 April 2018; pp. 1–10. [Google Scholar] [CrossRef]
  21. Lamy, J.; Merveille, O.; Kerautret, B.; Passat, N.; Vacavant, A. Vesselness Filters: A Survey with Benchmarks Applied to Liver Imaging. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 3528–3535. [Google Scholar] [CrossRef]
  22. Khattak, D.; Khaliq, A.; Jalil, A.; Shahid, M. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising. PLoS ONE 2018, 13, e0203418. [Google Scholar] [CrossRef]
  23. Jiang, J.; Wang, D.; Song, Y.; Sachdev, P.S.; Wen, W. Computer-aided extraction of select MRI markers of cerebral small vessel disease: A systematic review. NeuroImage 2022, 261, 119528. [Google Scholar] [CrossRef]
  24. Tran, N.C.; Wang, J.C. A Survey of Finger Vein Recognition. In Proceedings of the 2021 9th International Conference on Orange Technology (ICOT), Tainan, Taiwan, 16–17 December 2021; pp. 1–5. [Google Scholar] [CrossRef]
  25. Ng, C.C.; Yap, M.H.; Costen, N.; Li, B. Automatic Wrinkle Detection Using Hybrid Hessian Filter. In Proceedings of the Computer Vision—ACCV 2014, Singapore, 1–5 November 2014; Cremers, D., Reid, I., Saito, H., Yang, M.H., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 609–622. [Google Scholar]
  26. Kang, X.; Zhang, E. A universal defect detection approach for various types of fabrics based on the Elo-rating algorithm of the integral image. Text. Res. J. 2019, 89, 004051751984063. [Google Scholar] [CrossRef]
  27. Survarachakan, S.; Pelanis, E.; Khan, Z.A.; Kumar, R.P.; Edwin, B.; Lindseth, F. Effects of Enhancement on Deep Learning Based Hepatic Vessel Segmentation. Electronics 2021, 10, 1165. [Google Scholar] [CrossRef]
  28. Ballerini, L.; Lovreglio, R.; del C. Valdés Hernández, M.; Gonzalez-Castro, V.; Maniega, S.M.; Pellegrini, E.; Bastin, M.E.; Deary, I.J.; Wardlaw, J.M. Application of the Ordered Logit Model to Optimising Frangi Filter Parameters for Segmentation of Perivascular Spaces. Procedia Comput. Sci. 2016, 90, 61–67. [Google Scholar] [CrossRef]
  29. Fu, W.; Breininger, K.; Schaffert, R.; Ravikumar, N.; Würfl, T.; Fujimoto, J.; Moult, E.; Maier, A. Frangi-Net. In Proceedings of the Bildverarbeitung für die Medizin 2018; Maier, A., Deserno, T.M., Handels, H., Maier-Hein, K.H., Palm, C., Tolxdorff, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2018; pp. 341–346. [Google Scholar]
  30. Zhao, G.; Zhao, H. One-Shot Image Segmentation with U-Net. J. Phys. Conf. Ser. 2021, 1848, 012113. [Google Scholar] [CrossRef]
  31. Shaban, A.; Bansal, S.; Liu, Z.; Essa, I.; Boots, B. One-shot learning for semantic segmentation. arXiv 2017, arXiv:1709.03410. [Google Scholar]
  32. Sun, Z.; Rooke, E.; Charton, J.; He, Y.; Lu, J.; Baek, S. ZerNet: Convolutional Neural Networks on Arbitrary Surfaces Via Zernike Local Tangent Space Estimation. Comput. Graph. Forum 2020, 39, 204–216. [Google Scholar] [CrossRef]
  33. Theodoridis, T.; Loumponias, K.; Vretos, N.; Daras, P. Zernike Pooling: Generalizing Average Pooling Using Zernike Moments. IEEE Access 2021, 9, 121128–121136. [Google Scholar] [CrossRef]
  34. Wang, S.; Lv, L.T.; Yang, H.C.; Lu, D. Zernike-CNNs for image preprocessing and classification in printed register detection. Multimed. Tools Appl. 2021, 80, 32409–32421. [Google Scholar] [CrossRef]
  35. Liu, J.; Li, P.; Tang, X.; Li, J.; Chen, J. Research on improved convolutional wavelet neural network. Sci. Rep. 2021, 11, 17941. [Google Scholar] [CrossRef]
  36. Virtanen, P.; Gommers, R.; Oliphant, T.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef]
  37. Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the ICLR 2015. Ithaca, NY. arXiv 2015, arXiv:1412.6980. [Google Scholar]
  38. Ma, J.; Chen, J.; Ng, M.; Huang, R.; Li, Y.; Li, C.; Yang, X.; Martel, A.L. Loss odyssey in medical image segmentation. Med. Image Anal. 2021, 71, 102035. [Google Scholar] [CrossRef]
  39. Murugesan, B.; Liu, B.; Galdran, A.; Ayed, I.B.; Dolz, J. Calibrating segmentation networks with margin-based label smoothing. Med Image Anal. 2023, 87, 102826. [Google Scholar] [CrossRef]
  40. Staal, J.; Abramoff, M.; Niemeijer, M.; Viergever, M.; van Ginneken, B. Ridge based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef]
  41. Sharma, S.; Bhattacharya, M.; Sharma, G. Optimized Coronary Artery Segmentation Using Frangi Filter and Anisotropic Diffusion Filtering. In Proceedings of the 2013 International Symposium on Computational and Business Intelligence, New Delhi, India, 24–26 August 2013; pp. 261–264. [Google Scholar] [CrossRef]
  42. Avadiappan, S.; Payabvash, S.; Morrison, M.A.; Jakary, A.; Hess, C.P.; Lupo, J.M. A Fully Automated Method for Segmenting Arteries and Quantifying Vessel Radii on Magnetic Resonance Angiography Images of Varying Projection Thickness. Front. Neurosci. 2020, 14, 537. [Google Scholar] [CrossRef]
  43. Bock, S.; Weiß, M. A proof of local convergence for the Adam optimizer. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  44. Jalalian, A.; Mashohor, S.; Mahmud, R.; Karasfi, B.; Iqbal Saripan, M.; Ramli, A.R. Computer-assisted diagnosis system for breast cancer in computed tomography laser mammography (CTLM). J. Digit. Imaging 2017, 30, 796–811. [Google Scholar] [CrossRef] [PubMed]
  45. Van Ginneken, B.; Heimann, T.; Styner, M. 3D segmentation in the clinic: A grand challenge. In Proceedings of the MICCAI Workshop on 3D Segmentation in the Clinic: A Grand Challenge, Brisbane, Australia, 29 October–2 November 2007; Volume 1, pp. 7–15. [Google Scholar]
  46. Yeghiazaryan, V.; Voiculescu, I. Family of boundary overlap metrics for the evaluation of medical image segmentation. J. Med. Imaging 2018, 5, 015006. [Google Scholar] [CrossRef] [PubMed]
  47. Meng, C.; Sun, K.; Guan, S.; Wang, Q.; Zong, R.; Liu, L. Multiscale dense convolutional neural network for DSA cerebrovascular segmentation. Neurocomputing 2020, 373, 123–134. [Google Scholar] [CrossRef]
  48. Lu, Y.; Zheng, K.; Li, W.; Wang, Y.; Harrison, A.P.; Lin, C.; Wang, S.; Xiao, J.; Lu, L.; Kuo, C.F.; et al. Contour transformer network for one-shot segmentation of anatomical structures. IEEE Trans. Med. Imaging 2020, 40, 2672–2684. [Google Scholar] [CrossRef] [PubMed]
  49. Zhao, A.; Balakrishnan, G.; Durand, F.; Guttag, J.V.; Dalca, A.V. Data augmentation using learned transformations for one-shot medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 8543–8553. [Google Scholar]
  50. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P. Microsoft COCO: Common Objects in Context. In Computer Vision—ECCV 2014. ECCV 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2014; Volume 8693. [Google Scholar] [CrossRef]
  51. Kirbas, C.; Quek, F. Vessel extraction techniques and algorithms: A survey. In Proceedings of the Third IEEE Symposium on Bioinformatics and Bioengineering, 2003, Bethesda, MD, USA, 12 March 2003; pp. 238–245. [Google Scholar] [CrossRef]
  52. Ciecholewski, M.; Kassjański, M. Computational Methods for Liver Vessel Segmentation in Medical Imaging: A Review. Sensors 2021, 21, 2027. [Google Scholar] [CrossRef]
  53. Kirbas, C.; Quek, F. A review of vessel extraction techniques and algorithms. ACM Comput. Surv. (CSUR) 2004, 36, 81–121. [Google Scholar] [CrossRef]
  54. Li, H.; Tang, Z.; Nan, Y.; Yang, G. Human treelike tubular structure segmentation: A comprehensive review and future perspectives. Comput. Biol. Med. 2022, 151, 106241. [Google Scholar] [CrossRef]
  55. Spuhler, C.; Harders, M.; Székely, G. Fast and robust extraction of centerlines in 3D tubular structures using a scattered-snakelet approach. In Proceedings of the Medical Imaging 2006 Conference, San Diego, CA, USA, 13–16 February 2006; Volume 6144, pp. 1295–1302. [Google Scholar]
  56. Wu, X.; Cui, M.; Gao, Y.; Sun, D.; Ma, H.; Zhang, E.; Xie, Y.; Zaki, N.; Qin, W. Tubular Structure-Aware Convolutional Neural Networks for Organ at Risks Segmentation in Cervical Cancer Radiotherapy. In Proceedings of the International Workshop on Computational Mathematics Modeling in Cancer Analysis, Singapore, 18 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 131–140. [Google Scholar]
  57. Smistad, E.; Elster, A.C.; Lindseth, F. GPU accelerated segmentation and centerline extraction of tubular structures from medical images. Int. J. Comput. Assist. Radiol. Surg. 2014, 9, 561–575. [Google Scholar] [CrossRef]
  58. Bauer, C.; Pock, T.; Bischof, H.; Beichel, R. Airway tree reconstruction based on tube detection. In Proceedings of the Second International Workshop on Pulmonary Image Analysis, London, UK, 20 September 2009; pp. 203–213. [Google Scholar]
Figure 1. Frangi neuron diagram.
Figure 2. Frangi network for image segmentation.
Figure 3. Segmentation results on the retina dataset.
Figure 4. Segmentation results on the brain MRA dataset.
Figure 5. Segmentation results on the X-ray coronary artery angiography dataset.
Table 1. ROC AUC values on the test datasets for the evaluated segmentation algorithms. The bottom row contains the average execution time of each algorithm on the retina set, given in seconds.

| | Frangi Network | Frangi | U-Net |
|---|---|---|---|
| Retina | 0.8684 | 0.7883 | 0.7769 |
| Coronary artery | 0.9117 | 0.9110 | 0.7264 |
| Brain | 0.9397 | 0.9297 | 0.5720 |
| Exec. time [s] | 0.160 | 1.187 | 0.143 |
Table 2. Coefficients assessing the quality of segmentation of the test datasets, calculated for the Frangi network.

| | Sensitivity | Specificity | IoU | VOE | F1-Score/Dice |
|---|---|---|---|---|---|
| Retina | 0.6648 | 0.9557 | 0.4251 | 0.5749 | 0.5966 |
| Coronary artery | 0.9891 | 0.9143 | 0.4842 | 0.5158 | 0.6526 |
| Brain | 0.9242 | 0.9133 | 0.4768 | 0.5232 | 0.6451 |
Table 3. Coefficients assessing the quality of segmentation of the test datasets, calculated for the Frangi filter.

| | Sensitivity | Specificity | IoU | VOE | F1-Score/Dice |
|---|---|---|---|---|---|
| Retina | 0.6244 | 0.9441 | 0.3648 | 0.6352 | 0.5348 |
| Coronary artery | 0.9551 | 0.9144 | 0.4677 | 0.5323 | 0.6377 |
| Brain | 0.9139 | 0.9166 | 0.4804 | 0.5196 | 0.6487 |
Table 4. Coefficients assessing the quality of segmentation of the test datasets, calculated for the U-net network.

| | Sensitivity | Specificity | IoU | VOE | F1-Score/Dice |
|---|---|---|---|---|---|
| Retina | 0.6506 | 0.9152 | 0.3130 | 0.6870 | 0.4767 |
| Coronary artery | 0.8088 | 0.8242 | 0.2577 | 0.7423 | 0.4095 |
| Brain | 0.6457 | 0.6751 | 0.1430 | 0.8570 | 0.2506 |
