Improving Computer-Aided Cervical Cells Classification Using Transfer Learning Based Snapshot Ensemble

Abstract: Cervical cells classification is a crucial component of computer-aided cervical cancer detection. Fine-grained classification is of great clinical importance when guiding clinical decisions on diagnosis and treatment, but remains very challenging. Recently, convolutional neural networks (CNN) have provided a novel way to classify cervical cells using automatically learned features. Although an ensemble of CNN models can increase model diversity and potentially boost classification accuracy, it is a multi-step process, as several CNN models need to be trained separately and then selected for the ensemble. On the other hand, due to the small number of training samples, the advantages of powerful CNN models may not be effectively leveraged. In order to address these challenges, this paper proposes a transfer learning based snapshot ensemble (TLSE) method that integrates snapshot ensemble learning with transfer learning in a unified and coordinated way. Snapshot ensemble provides ensemble benefits within a single model training procedure, while transfer learning addresses the small sample problem in cervical cells classification. Furthermore, a new training strategy is proposed for guaranteeing the combination. The TLSE method is evaluated on a pap-smear dataset called the Herlev dataset and is shown to have some superiorities over existing methods. It demonstrates that TLSE can improve the accuracy in an ensemble manner with only one single training process for small-sample, fine-grained cervical cells classification.


Introduction
Cervical cancer continues to be one of the most prevalent cancers affecting women worldwide [1]. The disease is the most common cancer among women in 39 countries, and is the leading cause of cancer death in women in 45 countries [2]. It affects predominantly women in lower-resource countries: almost 70% of the global burden occurs in areas with low or medium levels of human development [2]. Manually screening abnormal cells from pap-smear images has been a widely accepted method for prevention management as well as early detection of cervical cancer, especially in developing countries [3]. However, manual assessment has many drawbacks in terms of being labor-intensive, tedious, and time-consuming [3]. More importantly, the complex nature of cervical cell images presents significant challenges for manual screening, and the diagnosis results heavily rely on the experience of the technicians. Hence, the automation of cervical cells classification is essential for the development of a computer-aided classification system with low cost, adequate speed, and high accuracy [4], so that researchers and doctors can be relieved of tedious and repetitive routine work. In addition, a computer-aided system can reduce bias and provide robust results. Snapshot ensemble has been used in some application areas, such as aerial scene classification [37] and fault diagnosis [38], but has not been tested or used for cervical cells classification. Furthermore, a disadvantage of snapshot ensemble is that the CNN model needs to be trained from scratch, which means that adequate amounts of raw samples are needed to ensure the training procedure. As for the cervical cells classification task, well-labeled data are limited, because collecting high-quality labeled data is very time-consuming and costly. The small sample of data may not provide enough information for training the whole model from scratch.
On the other hand, transfer learning provides a solution to this issue, as it eliminates the need to train the whole network from scratch. Transfer learning is widely adopted in several medical application areas, such as breast cancer diagnosis [39], epithelium-stroma classification [40], recognition of metastatic tissue in lymph node sections [41], lung cancer recognition [42], and cervical cells classification [4,32]. The proposed TLSE makes the training of snapshot ensemble feasible without the constraint of training the network from scratch on sufficient data. Furthermore, the advantages of deep CNN models can also be leveraged for the cervical cells classification task, as the transfer learning phase makes the training easier.
The rest of the paper is organized as follows: a detailed description of the proposed transfer learning based snapshot ensemble (TLSE) method is given in Section 2. The experimental results that evaluate the proposed method are reported in Section 3. The discussion and future work are presented in Section 4. The conclusions are summarized in Section 5.

Dataset and Pre-Processing
The Herlev dataset [6] was collected at Herlev University Hospital by the Department of Pathology, together with the Department of Automation at the Technical University of Denmark. These cervical cell images were collected and manually annotated into 7 classes by skilled cyto-technicians. The Herlev dataset consists of 917 cell images, each containing one single nucleus. The distribution of the Herlev pap-smear dataset is illustrated in Table 1, and some cell examples are provided in Figure 1. The data pre-processing method presented in [4] is adopted in this paper for creating the training samples. To simplify the procedure, we only followed these steps: several 128 × 128 image patches are extracted by translating the centroid of the ground-truth nucleus mask. After that, in order to build a balanced training set, different rotation rates are applied to abnormal and normal cell images, respectively: a rotation step of 36 degrees is used for each abnormal cell image, and a step of 18 degrees for each normal cell image. These image patches are then resized to 224 × 224 × 3 to facilitate the transfer learning phase. Zero padding is applied to regions that lie outside of the image boundary. The data augmentation step is crucial, as it improves the accuracy and convergence when training deep CNN models. Figure 2 shows the two sets of cervical cell image patches produced by cropping and rotation.
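The cropping and rotation steps above can be sketched as follows (a minimal NumPy illustration; the function names and the grayscale toy input are our own assumptions, not the authors' code):

```python
import numpy as np

def crop_patch(image, center, size=128):
    """Crop a size x size patch centered on the nucleus centroid,
    zero-padding any region that falls outside the image boundary."""
    h, w = image.shape[:2]
    half = size // 2
    patch = np.zeros((size, size) + image.shape[2:], dtype=image.dtype)
    y0, x0 = center[0] - half, center[1] - half
    ys, xs = max(y0, 0), max(x0, 0)
    ye, xe = min(y0 + size, h), min(x0 + size, w)
    patch[ys - y0:ye - y0, xs - x0:xe - x0] = image[ys:ye, xs:xe]
    return patch

def rotation_angles(step_degrees):
    """Rotation angles used for augmentation: a 36-degree step yields
    10 rotated copies per image, an 18-degree step yields 20 copies."""
    return list(range(0, 360, step_degrees))
```

With this scheme, each abnormal image contributes 10 augmented patches (360/36) and each normal image 20 (360/18), which compensates for the normal cells being the minority in the Herlev dataset and yields a more balanced training set.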

Transfer Learning Based Snapshot Ensemble Method (TLSE)
In this paper, a snapshot ensemble combined with a transfer learning approach (TLSE) is proposed for the 7-class cervical cells classification task. Different from traditional ensemble methods that combine several CNN models to obtain ensemble benefits, the proposed method can obtain a comparable ensemble result within a single model training procedure. Furthermore, in order to address the small sample issue as well as exploit the full capacity of deep CNN models, transfer learning is conducted ahead of snapshot ensemble. This is a new attempt to obtain ensemble benefits from deep convolutional neural networks in medical application areas. This method will expand the application field of snapshot ensemble to cervical cells classification, as well as bring some new exploration to transfer learning by integrating it with an ensemble method. Furthermore, a new training strategy is proposed in order to guarantee the integration of these two methods. To sum up, the proposed approach integrates transfer learning with snapshot ensemble based on a deep CNN for the fine-grained cervical cells classification task.

This section describes the details of the transfer learning based snapshot ensemble (TLSE) method. An overview is reported in Figure 3. The transfer learning phase is conducted to fine-tune the pre-trained model towards the target dataset, while the snapshot ensemble phase is designed for getting several snapshot models within one single model training process. During the testing process, these saved snapshot models are combined together by averaging their soft-max outputs.

Transfer Learning Phase for TLSE
In our study, a CNN model pre-trained on the ImageNet dataset (ILSVRC2012) [43] is adopted as the base model. The last fully connected layers of the downloaded model are excluded. Instead of relying only on data augmentation techniques to reduce overfitting, the model architecture is modified by adding regularization. On top of the base model, three fully connected layers with 512, 128, and 7 nodes are added. Dropout [44] is adopted in the first two fully connected layers, setting the output of each hidden neuron to zero with a probability of 0.5; this technique forces the CNN to learn robust features. The ReLU [45] activation is applied in the first two fully connected layers. The weights are initialized with He-style initialization [46], which allows extremely deep CNN models to converge, especially when ReLU is used. Batch Normalization [47] is placed before the activation layer to provide the activations with a stable distribution, which can accelerate deep network training by reducing internal covariate shift. These modifications allow the network to generalize to the new data distribution more easily.
The training technique for the transfer learning phase in TLSE is as follows: firstly, freeze the pre-trained ImageNet weights and remove the fully connected layers; secondly, add regularization to the model by making the specific refinements described above; finally, fine-tune the whole CNN model on the target dataset using a small learning rate.
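The refinements above can be sketched on the Keras platform used in this work (a minimal illustration; the `GlobalAveragePooling2D` bridge, the function name `build_tlse_head`, and the `he_normal` initializer string are our assumptions, not the authors' released code):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_tlse_head(base_model, num_classes=7):
    """Attach the modified classifier head to a pre-trained base model:
    two Dense blocks (512 and 128 nodes) with He-style initialization,
    Batch Normalization placed before the ReLU activation, and dropout
    with probability 0.5, followed by a 7-way softmax output."""
    base_model.trainable = False  # step 1: freeze the pre-trained weights
    x = layers.GlobalAveragePooling2D()(base_model.output)
    for units in (512, 128):
        x = layers.Dense(units, kernel_initializer="he_normal")(x)
        x = layers.BatchNormalization()(x)  # BN before the activation
        x = layers.Activation("relu")(x)
        x = layers.Dropout(0.5)(x)          # zero each hidden unit with p = 0.5
    outputs = layers.Dense(num_classes, activation="softmax",
                           kernel_initializer="he_normal")(x)
    return keras.Model(base_model.input, outputs)
```

After the head is attached, the whole model is unfrozen and fine-tuned with a small learning rate, as described in the training technique above.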

Snapshot Ensemble Phase for TLSE
Current practices indicate that combining the outputs of different models can achieve better accuracy [48]. The outputs of several CNN models are combined by averaging their soft-max class posteriors, which improves accuracy due to the complementarity and diversity of the different CNN models. However, training multiple deep networks with different architectures or initial weights for model averaging is computationally expensive and time-consuming, since several CNNs need to be trained separately. To remedy this problem, snapshot ensemble was proposed, which achieves a similar goal of combining networks without additional training cost. The snapshot ensemble phase of TLSE is basically the same as the original snapshot ensemble: it develops an ensemble of accurate and diverse models within a single model training process. By using a cyclic learning rate schedule [49], snapshot ensemble enables the network to visit several local minima during the optimization process, and a model snapshot is saved at each of those local minima.
The snapshot ensemble divides the whole training run into M cycles, and within each cycle the learning rate declines at a very fast pace so that the model reaches a local minimum. At the end of each cycle, a snapshot of the model weights is taken and saved; the learning rate is then raised for the start of the next training cycle. After M training cycles, M model snapshots are obtained for the final ensemble. It is important to emphasize that the total training time is the same as with a standard schedule. The ensemble prediction at test time is the average of the last two models' soft-max outputs.
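The cyclic schedule of [49] sets the learning rate at iteration t to alpha(t) = (alpha_0 / 2) * (cos(pi * mod(t, ceil(T/M)) / ceil(T/M)) + 1), where T is the total number of iterations and M the number of cycles. A minimal sketch (0-indexed t; the function name is ours):

```python
import math

def snapshot_lr(t, total_iters, num_cycles, lr_max):
    """Cyclic cosine annealing [49]: the learning rate restarts at lr_max
    at the beginning of each cycle and is annealed quickly toward zero,
    so the model reaches a local minimum at every cycle end, where a
    snapshot of the weights is saved."""
    cycle_len = math.ceil(total_iters / num_cycles)
    frac = (t % cycle_len) / cycle_len   # position within the current cycle
    return lr_max / 2.0 * (math.cos(math.pi * frac) + 1.0)
```

The cosine term equals 1 at a cycle start (full rate) and approaches -1 at the cycle end (rate near zero), which is what drives the model into a local minimum before each snapshot.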

The Training Strategy for TLSE
A novel training strategy is proposed for TLSE, because transfer learning and snapshot ensemble cannot be combined directly. For transfer learning, the learning rate needs to be small to fine-tune the model towards the target dataset; for snapshot ensemble, however, a large learning rate is required to help the model escape from the current local minimum at the start of the next cycle. This small-or-large learning rate conflict makes the model hard to train. In order to eliminate this obstacle, a new training strategy is introduced. Figure 4 illustrates the details.
The main differences between the original training strategy of snapshot ensemble and our proposed method are the following. Firstly, transfer learning is conducted before snapshot ensemble: a pre-trained model with several modifications is taken as the base model for snapshot ensemble. Secondly, two different learning rates are used within one single optimization process: the initial learning rate and the restart learning rate. The initial learning rate is used for fine-tuning the pre-trained model towards the target dataset, while the restart learning rate is adopted to perturb the model and dislodge it from the current minimum, with the intention of restarting training from a better initialization. The initial learning rate is smaller than the restart learning rate: the smaller initial rate helps the pre-trained model adapt to the target dataset, while the larger restart rate provides the energy for the model to escape from a critical point. Both learning rates decline at a very fast pace following the strategy described in [49], which ensures that the model converges after only a few epochs. The very first cycle adopts the initial learning rate, while the remaining cycles use the restart learning rate.
When the M training cycles end, the model has converged multiple times, yielding a well-behaved local minimum at the end of every training cycle. At last, the outputs of the last two saved snapshot models are averaged as the final prediction.
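Under our reading of Figure 4, this strategy can be sketched as a two-rate variant of the snapshot schedule (the function name is ours; the 0.0001 and 0.2 defaults are the rates given in the training protocol below):

```python
import math

def tlse_lr(t, total_iters, num_cycles, lr_init=1e-4, lr_restart=0.2):
    """TLSE schedule: the first cycle anneals from the small initial
    learning rate (fine-tuning the pre-trained model), while every later
    cycle restarts from the larger restart rate to dislodge the model
    from the current local minimum before annealing again."""
    cycle_len = math.ceil(total_iters / num_cycles)
    cycle = t // cycle_len
    peak = lr_init if cycle == 0 else lr_restart  # two peak rates, one schedule
    frac = (t % cycle_len) / cycle_len
    return peak / 2.0 * (math.cos(math.pi * frac) + 1.0)
```

Apart from the two peak rates, the per-cycle cosine annealing is identical to the original snapshot schedule, so the total training time is unchanged.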

Training and Testing Protocols
A pre-trained model is downloaded and used as the base model to train the neural network for cervical cells classification on the Herlev dataset. The proposed TLSE is used to fine-tune the downloaded model towards the Herlev dataset and to obtain several snapshot models within a single model training process. During testing, the last two snapshots are integrated by averaging their soft-max outputs.
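The averaging step at test time can be sketched with NumPy (the function name and the toy probability arrays are hypothetical):

```python
import numpy as np

def ensemble_predict(snapshot_probs):
    """Average the soft-max class posteriors of the saved snapshot models
    (TLSE uses only the last two snapshots) and take the arg-max class."""
    avg = np.mean(snapshot_probs[-2:], axis=0)  # last two snapshots only
    return avg.argmax(axis=-1)
```

Averaging posteriors rather than hard labels lets a confident snapshot outweigh an uncertain one, which is where the ensemble benefit of the diverse snapshots comes from.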
In the setting of the proposed approach, Stochastic Gradient Descent and cross-entropy are used as the optimizer and loss function. The initial learning rate is set to 0.0001, and the restart learning rate is set to 0.2. The cyclic cosine annealing schedule is adopted to make the model converge to multiple local minima, and the learning rate is updated at every iteration. The batch size is set to 32. The implementation is based on the Keras platform, and the experiments are conducted on Ubuntu 16.04 with a single NVIDIA TITAN Xp GPU.

Results
In this study, several experiments were designed in order to show the effectiveness of the proposed approach.

The Classification Results of TLSE Method and Transfer Learning Method Based On Different CNN Models
The first experiment examined the impact of different CNN models on the TLSE method by comparing the classification results. Three different CNN architectures were chosen as base models: the VGG model [50], ResNet-18 [51], and the Inception-ResNet model [52]. The architectures of these three pre-trained CNN models are shown in Figures 5-7, respectively. The aim of this experiment was to show the adaptability of TLSE, i.e., whether it can be adopted in the training of different CNN models. The classification results of the TLSE method based on these three CNN models are summarized in Table 2.
As shown in Table 2, the Inception-ResNet model achieves the highest classification rate among the three models, while the VGG and ResNet models obtain comparable results.
Secondly, to demonstrate the ensemble efficiency of TLSE, a comparison experiment between the TLSE method and the transfer learning method was designed.
The TLSE method utilizes transfer learning method and snapshot ensemble learning method together towards the Herlev dataset, while the transfer learning is adopted solely on the target dataset. The data augmentation techniques, model architecture refinements and the modification of last fully connected layers are the same for both TLSE based and transfer learning based methods. The data augmentation techniques taken in this experiment are crop and rotation. The VGG model, Inception-ResNet model and ResNet-18 model were adopted as the pre-trained models for both TLSE and transfer learning.
The comparison results of the TLSE method and the transfer learning method are reported in Table 2. There is a significant difference between the two groups: the TLSE-based method achieves higher accuracies than the transfer learning method, suggesting that snapshot ensemble is crucial for achieving a better representation. Table 2 also shows that accuracy gains are obtained for all three models, which illustrates the robustness of TLSE. The most striking result to emerge from the data is that the TLSE method based on the Inception-ResNet model yields an accuracy of 65.56% on the pap-smear Herlev dataset, outperforming the single transfer learning method using the same base model.

The Classification Results of Different Architecture Refinements
The aim of this experiment was to analyze the effect of architecture refinements in the last two fully connected layers when the proposed TLSE method is adopted. We constructed some different architectures to show the effect of the dropout rate and batch normalization regularization of the CNN model. We tested the classification rate with or without some model architecture refinements, which are the dropout layer and the batch normalization layer. Table 3 summarizes the comparison results.

Comparison with other Methods
In order to show the effectiveness of TLSE, we selected several previous deep learning based methods for comparison. Two previous methods [6,32] reported the seven-class accuracy, three studies reported the overall error, and the methods in [4,32] are based on convolutional neural networks as well as transfer learning. The results are provided in Table 4. From the table, it can be seen that TLSE achieves an accuracy of 65.56% and outperforms the other two methods (61.1% and 64.8%). The overall error of TLSE is higher than that of DeepPap (1.6%) [4], but lower than the benchmark (7.9%) [6], the fine-grained CNN (7.7%) [32], and Gen-wknn (3.9%) [13]. These results suggest that the proposed TLSE method has some advantages in gaining better representation power, which results in a higher classification rate.

Discussion
The first set of analyses examined the robustness of the TLSE method on different CNN models by comparing the classification results. As shown in Table 2, all three models obtain comparable results, and it is evident that the TLSE method works well with different CNN architectures on the Herlev dataset. The classification rate of the Inception-ResNet model is higher than that of the other two models. This improvement is due to its deeper CNN architecture, which helps the model learn patterns with more diversity and better representation power. Table 2 also presents the transfer learning method with the same CNN models; there is a significant difference between the two methods, in that the TLSE-based method achieves higher accuracies. This experimental evidence shows that the addition of snapshot ensemble in TLSE has a positive influence on the fine-grained cervical cells classification task. Furthermore, these results provide important insights into the robustness of the proposed TLSE method, as Table 2 illustrates that accuracy gains are obtained for all three models. The most striking result to emerge from the data is that the TLSE method based on the Inception-ResNet model yields an accuracy of 65.56% on the pap-smear Herlev dataset, outperforming the single transfer learning method using the same base model.
The positive influence of TLSE on the final classification rate is twofold: it obtains an ensemble result through a single optimization process, and it adds diversity and generalization to the CNN model.
The original snapshot ensemble is trained on the benchmark dataset CIFAR-10, where several data augmentation techniques are adopted to increase the amount of data, providing sufficient information for training deep CNN models. Nevertheless, in the medical image application area, the well-labelled data are not sufficient for training a CNN model from scratch, and the training of CNN models is highly likely to fail because of the over-fitting problem.
Fine-tuning the pre-trained CNN towards the target dataset can provide the model with strong robustness and generalization. The pre-trained model with the fine-tuning phase can solve the problems of bad training approximation and poor generalization. The experiment results show that the TLSE method is effective and that the established model offers better accuracy and a good capability of generalization.
The results in Table 3 reveal that the model architecture refinements based on Inception-ResNet also improve the classification accuracy. These minor refinements have a positive impact on the final classification accuracy. As expected, the combination of these techniques achieves a better accuracy compared to the unmodified architecture; the refinements help in capturing high-level structural information, yielding better discriminability between the different kinds of cervical cells. Further analyses reveal that the modification of the model architecture is effective and offers the model a better capability of generalization.
As reported in Table 4, the TLSE method reaches the highest accuracy on the Herlev dataset for the fine-grained classification task. On one hand, the snapshot ensemble phase in TLSE acquires an ensemble benefit by averaging the last two snapshot models' soft-max class posteriors; the ensemble result is beneficial in the same way that consulting several experts before taking a final decision is. On the other hand, the transfer learning phase in TLSE provides better initialization weights than training from scratch, which gives the model a good starting point and makes the loss converge quickly. In addition, transfer learning plays a major role in reducing over-fitting, especially for small samples, as in the Herlev dataset case.
Despite the accuracy improvement from TLSE, the proposed approach has a few limitations. Firstly, the size of the input samples varies, so the images need to be cropped to a fixed 128 × 128 patch for data processing, with zero padding applied during cropping. However, the cropped region may not contain the whole cell, and a fixed scale may not be appropriate when the cell image scales vary. Our ongoing study shows that this may be solved by using a spatial pyramid pooling layer, which accepts input images of arbitrary scales. Secondly, the final accuracy of the seven-class cervical cells classification is still not satisfactory. The large gap between the two-class accuracy and the seven-class accuracy is, to our knowledge, mainly due to the small sample. For the two-class problem, the amount of data after augmentation is sufficient; for the seven-class problem, traditional data augmentation may not be enough. For rare data, especially in medical image application areas, data augmentation methods such as mix-up and GANs can be considered for increasing the number of samples. Thirdly, our approach is not based on microscopy slide images but rather on database images; thus, an important task related to image acquisition from the microscope is missing. The detection of cervical cells in microscopy slide images is another challenging task in this area. We anticipate that an end-to-end, highly accurate, and real-time cervical cells detection and classification system of this type is promising for the development of automation-assisted reading systems for cervical screening.

Conclusions
In this paper, a transfer learning based snapshot ensemble method called TLSE is proposed for the fine-grained cervical cells classification task. The proposed TLSE approach integrates transfer learning with snapshot ensemble in a coordinated way, and a new training strategy is proposed for this combination, so that the advantages of both snapshot ensemble and transfer learning can be acquired. The proposed approach is able to gain better classification results in an ensemble manner using only one single training procedure, as well as to address the small sample issue and prevent over-fitting during training. Since there is no need to train snapshot ensemble models from scratch, it can broaden the application areas of the snapshot ensemble method in image classification tasks, especially when it comes to small samples. Furthermore, the representation power of deep CNN models can be leveraged efficiently for the cervical cell classification task. Finally, some modifications are made to the architecture of the last fully connected layers of the pre-trained model in order to add regularization to the CNN model. The TLSE method yields a comparably higher accuracy than existing methods. However, more effort should be devoted to exploring this fine-grained classification task for the clinical usage of automation-assisted reading systems for cervical screening.
Author Contributions: W.C. designed the algorithm, performed the experiments, and wrote the paper. X.L., L.G. and W.S. supervised the research. W.S. guided the paper writing. All authors have read and agreed to the published version of the manuscript.