Deep Learning-Based Layer Identification of 2D Nanomaterials

Abstract: Two-dimensional (2D) nanomaterials exhibit unique properties due to their low dimensionality, which has led to great potential for applications in biopharmaceuticals, aerospace, energy storage, mobile communications and other fields. Today, 2D nanomaterials are often prepared and exfoliated by a combination of mechanical and manual methods, which makes their production inefficient and prevents standardized, industrialized manufacturing. Recent breakthroughs in deep learning-based semantic segmentation have enabled the accurate identification and segmentation of atomic layers of 2D nanomaterials using optical microscopy. In this study, we analyzed in detail sixteen semantic segmentation models that perform well on public datasets and applied them to the layer identification and segmentation of graphene and molybdenum disulfide. Furthermore, we improved the U2-Net† model to obtain the best overall performance, namely 2DU2-Net†. The accuracy of the 2DU2-Net† model was 99.03%, the Kappa coefficient was 95.72%, the Dice coefficient was 96.97%, and the mean intersection over union (MIoU) was 94.18%. Meanwhile, it also performed well in terms of computation, number of parameters, inference speed and generalization ability. The results show that deep learning-based semantic segmentation methods can greatly improve efficiency and replace most manual operations, and that different types of semantic segmentation methods can be adapted to the different properties of 2D nanomaterials, thus promoting the research and application of 2D nanomaterials.

The physical, chemical, thermal, and electrical properties of 2D nanomaterials vary greatly with atomic layer thickness, making both single-layer and few-layer 2D nanomaterials highly valuable for research, and the optical properties of 2D nanomaterials with different layer numbers often differ markedly [2]. Traditionally, the number of layers in 2D nanomaterials is measured using optical microscopy (OM) [20], atomic force microscopy (AFM) [21], or spectroscopic methods such as Raman spectroscopy [22]. However, these methods require substantial manual effort and are not only inefficient but also wasteful of resources. For many chemically unstable 2D nanomaterials, such as Bi2Sr2CaCu2O8+δ [23], which degrades in a short time, conventional methods require a large investment in personnel and equipment to succeed. Therefore, it is important to find a more general and efficient method for the layer identification of 2D nanomaterials.
With the rapid development of deep learning, many semantic segmentation network models have emerged [24,25] that show excellent performance and powerful generalization ability and are capable of handling most semantic segmentation tasks. OM images of 2D nanomaterials are rich in physical and chemical information, so it is advantageous to use deep learning-based semantic segmentation models to identify the number of atomic layers in such images. At the same time, many methods have emerged in materials science that explore the properties of materials through deep learning [26], playing a crucial role in the development of the discipline. To this end, we carefully selected 16 semantic segmentation models that perform well on public datasets for identifying and segmenting the atomic layers of 2D nanomaterials, and trained them on graphene and molybdenum disulfide (MoS2) OM image data with pixel-level labeling. We found that the U2-Net† [27] model performs better in recognizing the layers of 2D nanomaterials and has the best performance in terms of computation, number of parameters, compatibility and training deployment. Therefore, building on these experiments, the network structure of the U2-Net† [27] model was adapted through multi-scale connectivity and a pyramid pooling module to obtain a new model, denoted 2DU2-Net†. Given the characteristics of nanomaterials, this model is better able to identify fine, discrete and edge regions of 2D nanomaterials.
Using a deep learning-based semantic segmentation model to identify the number of layers of 2D nanomaterials can save considerable human and material resources, which is of great importance for 2D nanomaterials research. The contributions of this paper are summarized as follows: (1) Sixteen different types of semantic segmentation models were used to analyze their specific effects on 2D nanomaterial OM images. (2) The U2-Net† [27] model, based on an encoder-decoder architecture, was found to have good performance and environmental adaptability without a backbone network and is suitable for various applications detecting 2D nanomaterials. (3) We improved the model structure of U2-Net† [27] by means of multi-scale connectivity and pyramid pooling to obtain the 2DU2-Net† model, which is better adapted to 2D nanomaterial layer identification and segmentation.

Related Work
In recent years, many 2D nanomaterial layer recognition methods have been proposed, and deep learning-based semantic segmentation methods have shown greater advantages than traditional methods.
In the early days, when deep learning could not be used effectively due to hardware limitations, machine learning was commonly used to tackle such problems, and the field of 2D nanomaterials was no exception. Dezhen Xue et al. [28] accelerated the search for new materials with target properties through a self-designed machine learning framework. Mathew J. Cherukara et al. [29] accurately predicted the physical, chemical and mechanical properties of nanomaterials through machine learning. Yuhao Li et al. [30] used a machine learning approach to assist optical microscopy in identifying 2D nanomaterials. Yu Mao et al. [31] used machine learning to analyze information related to single-layer continuous films in MoS2 Raman spectra. Wenbo Sun et al. [32] used machine learning to assist in the design and prediction of high-performance organic photovoltaic materials. Ya Zhuo et al. [33] efficiently predicted inorganic phosphors using a machine learning approach. These machine learning approaches often require complex feature engineering to obtain experimental data [34], which is labor-intensive and not conducive to efficient studies of 2D nanomaterial layers.
With the continuous development of deep learning techniques, using deep learning-based semantic segmentation models to identify the layers of 2D nanomaterials has become increasingly advanced. Bingnan Han et al. [35] identified and characterized 2D nanomaterials by designing a 2DMOINet model based on an encoder-decoder structure. Bin Wu et al. [36], on the other hand, identified 2D nanomaterials by improving the SegNet model [37]. Satoru Masubuchi et al. [38] used a Mask R-CNN-based neural network model [39] combined with optical microscopy to automatically search for 2D nanomaterials. Li Zhu et al. [40] used an artificial neural network (ANN) [41] to identify and characterize 2D nanomaterials and van der Waals heterostructures. Jaimyun Jung et al. [42] used a ResNet approach [43] for structural and mechanical analysis of 2D nanomaterials by applying super-resolution (SR) imaging to them. Yashar Kiarashinejad et al. [44] used a dimensionality-reduction-based deep learning approach to design electromagnetic nanostructures. Sicheng Wu et al. [45] accelerated the discovery of 2D catalysts for hydrogen evolution reactions by combining a crystal graph convolutional neural network (CGCNN) [46] with deep learning algorithms. These deep learning-based models avoid the complex feature-engineering process of machine learning algorithms and have more outstanding capabilities in high-level abstract feature extraction. Deep learning-based semantic segmentation models can also be trained end-to-end, providing better adaptability and deployment capabilities.
With the development of deep learning, most problems previously solved by machine learning can now be solved by deep learning methods. Deep learning-based semantic segmentation techniques have made great progress in performance, generalization ability, and ease of deployment. We will continue to work on more advanced image segmentation algorithms for 2D nanomaterial layer recognition and segmentation.

Materials and Methods
To better identify and segment the 2D nanomaterials presented in OM images, the U2-Net† [27] network model was restructured to focus on identifying more discrete, small and edge-region 2D nanomaterials by combining contextual information. The next section focuses on the U2-Net† [27] model and the 2DU2-Net† model obtained by adapting its network structure and adding a pyramid pooling module. Finally, the workflow for layer identification and segmentation of 2D nanomaterials is explained.

Network Module Design
With the emergence of the U-Net [47] and SegNet [37] network models, the encoder-decoder structure has become the mainstream of semantic segmentation models. Network models based on encoder-decoder structures are structurally stable, do not require a pretrained backbone network, are highly adaptable to complex environments, and have been widely used in the medical imaging field [63]. U2-Net† [27] uses a two-level nested U-shaped structure based on an encoder-decoder structure, and its top level is a large U-shaped structure consisting of 11 stages, as shown in Figure 1. Each stage is populated by a residual U-block (RSU) [27] modified from the residual block [43], as shown in Figures 2a,b and 3a. Inspired by U-Net3+ [64], a multilayer concatenation operation was performed on top of the RSU block in order to fuse multi-scale and contextual information, as shown in Figure 3b.
As can be seen in Figure 3a, in the decoder stage, each layer simply fuses the information from the encoder and the underlying decoder that is symmetrical to it. With the improved multi-layer connection, as shown in Figure 3b, each layer in the encoder stage needs to connect to all layers above the corresponding encoder, except the first and last layers. This facilitates the acquisition of the low-resolution feature map information located at the bottom layers.
Inspired by PSPNet [48], a pyramid pooling module was added to the encoder output of the first nested RSU block to fuse multi-scale and contextual information, as shown in Figures 4 and 5.
As can be seen from Figure 4, the final output of the encoder of the RSU block is pooled using 1 × 1, 2 × 2, 3 × 3 and 6 × 6 2D adaptive maximum pooling and then up-sampled back to the size of the input feature map. Finally, feature fusion is achieved by concatenation and convolution operations. By inserting the pyramid pooling module, global information can be better supplemented, and the model's detailed segmentation capability can be improved with a small number of samples.
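The pool-at-several-scales-then-upsample-and-concatenate idea above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's PaddleSeg implementation; the function names (`adaptive_max_pool2d`, `pyramid_pooling`) and the nearest-neighbour upsampling are our assumptions, and the 1 × 1 fusion convolution that would normally follow the concatenation is omitted.

```python
import numpy as np

def adaptive_max_pool2d(x, out_size):
    """Adaptive max pooling: reduce a (C, H, W) map to (C, out_size, out_size)."""
    c, h, w = x.shape
    out = np.empty((c, out_size, out_size), dtype=x.dtype)
    for i in range(out_size):
        h0 = i * h // out_size
        h1 = max((i + 1) * h // out_size, h0 + 1)  # guard against empty bins
        for j in range(out_size):
            w0 = j * w // out_size
            w1 = max((j + 1) * w // out_size, w0 + 1)
            out[:, i, j] = x[:, h0:h1, w0:w1].max(axis=(1, 2))
    return out

def upsample_nearest(x, h, w):
    """Nearest-neighbour upsampling of a (C, ph, pw) map back to (C, h, w)."""
    c, ph, pw = x.shape
    rows = np.arange(h) * ph // h
    cols = np.arange(w) * pw // w
    return x[:, rows][:, :, cols]

def pyramid_pooling(x, bin_sizes=(1, 2, 3, 6)):
    """Concatenate the input with pooled-and-upsampled global context branches."""
    c, h, w = x.shape
    branches = [x]
    for s in bin_sizes:
        branches.append(upsample_nearest(adaptive_max_pool2d(x, s), h, w))
    # (C * 5, H, W); a 1x1 convolution would fuse channels in the real module
    return np.concatenate(branches, axis=0)
```

The 1 × 1 branch reduces each channel to its global maximum, which is how the module injects image-level context into every spatial position.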
Figure 5 shows the 2DU2-Net† model obtained by adjusting the network structure. Based on the design ideas of U-Net3+ [64] and PSPNet [48], multi-scale connectivity and pyramid pooling modules are used in the encoder and decoder stages. In the encoder stages En_1, En_2, En_3 and En_4 and the corresponding decoder stages, the purple blocks in the middle part are connected in multiple layers, and the output of the first purple block passes through the pyramid pooling module represented by the yellow block. In the En_5, En_6 and De_5 stages, only the pyramid pooling module represented by the yellow block is used, due to the small size of the output feature map.

Loss Functions
The dataset used in the experiment is divided into three categories and labeled at the pixel level, where the single layer is red, the double layer is green, and the background is black, as shown in Figure 6. We therefore chose the cross-entropy loss function [65] to improve the segmentation accuracy across categories, expressed as follows:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\log(p_{ic}) \quad (1)$$

In Equation (1), p_ic is the predicted probability that the observed sample i belongs to category c, and y_ic is the indicator function, which takes the value 1 if the true category of sample i equals c and 0 otherwise. M is the number of label types, and N is the total number of pixel points.
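A minimal NumPy sketch of the pixel-wise cross-entropy of Equation (1). The function name and array layout are our choices for illustration; a training framework would apply this to softmax outputs.

```python
import numpy as np

def pixel_cross_entropy(probs, labels):
    """Mean pixel-wise cross-entropy, Equation (1).

    probs:  (N, M) predicted class probabilities per pixel (rows sum to 1)
    labels: (N,)   true class index per pixel
    """
    n = probs.shape[0]
    # y_ic is 1 only for the true class, so the inner sum over c
    # picks out log(p) at each pixel's true class
    picked = probs[np.arange(n), labels]
    return -np.mean(np.log(np.clip(picked, 1e-12, None)))
```

Perfect predictions (probability 1 on the true class everywhere) give a loss of zero; the clip guards against log(0).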
Overall Flow of the Experiment
In the data collection and processing phase, a deep learning-based target detection model [67] can be used to assist OM and AFM devices in identifying and detecting 2D nanomaterials. After acquiring the data, different types of data tags can be created with data tagging tools according to the type and characteristics of the 2D nanomaterials. For 2D nanomaterials where data acquisition and training are difficult, the data can be expanded by generative adversarial networks [68], data augmentation [69], and semi-supervised [70] or weakly supervised [71] training. After collecting and processing the data, a suitable network model needs to be selected according to the properties of the 2D nanomaterials. For chemically stable 2D nanomaterials, a network model with a backbone network and a large number of convolutional layers can be chosen. For discrete, fragmented and low-contrast 2D nanomaterials, a network model based on an attention mechanism can be chosen [72]. For 2D nanomaterials with unstable chemistry and high demands on the experimental environment, a lightweight network model with real-time performance can be chosen [25]. In summary, for the wide variety of 2D nanomaterials with complex structures and different properties, different types of deep learning network models can be chosen to solve the problem. After training, the models need to be deployed; for different types of devices, suitable network models can be deployed through model compression [73], transfer learning [74], etc.

Data Sets
The data used in this paper were obtained from the open-source project of Yu Saito et al. [66]. The acquired open-source data were redistributed as needed to accommodate subsequent model training and improvement. According to Yu Saito et al. [66], graphene and MoS2 were mechanically exfoliated onto SiO2/Si substrates, graphene and MoS2 images were acquired by OM from different angles, and the thickness and number of layers were determined and labeled at the pixel level using AFM and comparison methods. In this paper, 68 images were collected, including 33 of MoS2 and 35 of graphene, as shown in Figure 6. The single layer is labeled in red, the double layer in green, and the background in black. To improve the learning accuracy of the network model and prevent overfitting, data augmentation was performed by randomly cropping, flipping, rotating and distorting the original images. Finally, the labeled dataset was randomly divided into training and testing sets in a ratio of 8:2, with 2000 training and 500 testing images.
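The augmentation described above can be sketched as follows. The crop ratio (80%) and the restriction to 90° rotations are illustrative assumptions, not the paper's actual settings, and the image-distortion step is omitted; the essential point is that every geometric transform is applied identically to the image and its pixel-level label.

```python
import random
import numpy as np

def augment(image, label):
    """Randomly crop, flip and rotate an OM image with its pixel-level label.

    image: (H, W, 3) array; label: (H, W) array of class indices.
    The same geometric transform is applied to both so labels stay aligned.
    """
    h, w = label.shape
    # random crop to 80% of the original size (illustrative ratio)
    ch, cw = int(h * 0.8), int(w * 0.8)
    top, left = random.randint(0, h - ch), random.randint(0, w - cw)
    image = image[top:top + ch, left:left + cw]
    label = label[top:top + ch, left:left + cw]
    # random horizontal flip
    if random.random() < 0.5:
        image, label = image[:, ::-1], label[:, ::-1]
    # random rotation by a multiple of 90 degrees
    k = random.randint(0, 3)
    image, label = np.rot90(image, k), np.rot90(label, k)
    return image, label
```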

Evaluation Indicators
The variety of 2D nanomaterials, the complexity of their preparation, the difficulty of their preservation, and the differences between their properties make it essential to select a comprehensive and reliable network model. At the same time, a network model for practical applications should be judged not only on accuracy, but also on robustness, scalability and resource requirements. To evaluate the quality of these realistic OM images and network models, six evaluation metrics were used [24,25,75–77].
(1) GFLOPs measures the number of floating-point operations the model performs in one forward pass, which reflects its computational cost.
(2) Params [76] refers to the number of parameters the model contains, which directly determines the size of the model and affects memory usage during inference.
(3) Accuracy [24,25,75] is a metric used to evaluate classification models, i.e., the proportion of model predictions that are correct, and is given by the following formula:

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$

In the confusion matrix, TP is the true positive count, TN the true negative count, FP the false positive count and FN the false negative count.
(4) Kappa [77] is a metric used for consistency testing. Consistency refers to whether the model predictions agree with the actual classifications, and it can be used to measure the effectiveness of the classification:

$$Kappa = \frac{p_o - p_e}{1 - p_e}$$

where p_o is the number of correctly classified samples summed over all categories divided by the total number of samples, i.e., the overall classification accuracy, and p_e is the sum over categories of the product of the actual and predicted sample counts, divided by the square of the total number of samples.
(5) Dice [24] is a set-similarity measure used to calculate the similarity of two samples, and it is often used to evaluate the quality of segmentation algorithms. Its formula is as follows:

$$Dice = \frac{2|A \cap B|}{|A| + |B|}$$

where |A ∩ B| is the intersection of A and B, and |A| and |B| denote the number of elements of A and B. The factor of 2 in the numerator accounts for the double counting of the elements common to A and B in the denominator.
(6) MIoU [24,25,75] is a standard measure for semantic segmentation that averages, over all classes, the ratio of the intersection to the union of prediction and ground truth. Its formula is as follows:

$$MIoU = \frac{1}{n_c}\sum_{i=1}^{n_c} \frac{n_{ii}}{\sum_{j} n_{ij} + \sum_{j} n_{ji} - n_{ii}}$$

where n_ij indicates the number of pixels that are actually in category i but are predicted to be in category j, n_ii indicates the number of pixels in category i that are also predicted as category i, and n_c indicates the total number of categories.
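The four accuracy-style metrics above all derive from a single confusion matrix, which the following NumPy sketch makes explicit (function name ours; it assumes every class appears at least once so no denominator is zero):

```python
import numpy as np

def segmentation_metrics(pred, true, n_classes):
    """Accuracy, Kappa, mean Dice and MIoU from flattened per-pixel labels."""
    # confusion matrix: cm[i, j] = pixels of true class i predicted as class j
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (true, pred), 1)
    total = cm.sum()
    accuracy = np.trace(cm) / total
    # Kappa: observed agreement p_o vs chance agreement p_e
    p_o = accuracy
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (p_o - p_e) / (1 - p_e)
    # per-class Dice and IoU from the diagonal, then averaged over classes
    tp = np.diag(cm)
    dice = (2 * tp / (cm.sum(axis=0) + cm.sum(axis=1))).mean()
    miou = (tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)).mean()
    return accuracy, kappa, dice, miou
```

For a perfect prediction all four metrics equal 1; Kappa drops to 0 when the agreement is no better than chance.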

Training Setup
In the training process, each OM image was resized to 512 × 512. ResNet [43], HRNet [59] and STDC [56] were used as the backbone networks for the backbone-based models. During pre-training it was found that the loss converged at about 30 iterations, so the number of training iterations was set to 50; the data are shown in Figure 7. The initial learning rate was set to 0.01 and decayed using a learning rate decay schedule. The optimization method was SGD with a batch size of 16 and random initialization, and the whole experiment took about 100 h. A 64-bit Windows 11 operating system was used. The network was built, trained and tested with PaddleSeg [61]. The configuration details are as follows: Anaconda3, PaddlePaddle 2.3.1, PaddleSeg 2.6.0, OpenCV 4.1.1, CUDA 10.2 and cuDNN 7.6.
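The decay method is not named above; PaddleSeg's common default for segmentation is polynomial ("poly") decay, so the following sketch shows that schedule under the stated initial rate of 0.01. Both the choice of poly decay and the power of 0.9 are assumptions for illustration.

```python
def poly_lr(base_lr, step, total_steps, power=0.9):
    """Polynomial learning-rate decay, a common semantic-segmentation schedule.

    Decays from base_lr toward 0 as training progresses. power=0.9 is the
    value typically used in segmentation work (an assumption here, since the
    text only says the rate was decayed).
    """
    return base_lr * (1 - step / total_steps) ** power
```

For example, with base_lr = 0.01 and 50 iterations, the rate starts at 0.01 and falls smoothly toward 0 by the final iteration.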

Discussion and Analysis of Results
The performance metrics GFLOPs, Params, MIoU, Accuracy, Kappa and Dice for U2-Net† [27], 2DU2-Net† and 15 other network models are shown in Table 1. The GFLOPs of U2-Net† [27] were 51.32, Params 4.36, MIoU 93.74%, Accuracy 99.03%, Kappa 95.72%, and Dice 96.73%. U2-Net† [27] has the lowest computational and parameter counts, and its performance is very competitive with most models in terms of MIoU, Accuracy, Kappa and Dice. The 2DU2-Net† model outperforms the U2-Net† [27] model, and outperforms most other models in terms of image detail and edge processing, as shown in Figure 8. Models using backbone networks tend to have large GFLOPs and Params, especially models such as PSPNet [48] and DNLNet [52], which use ResNet [43] as the backbone. Lightweight networks such as BiseNetv2 [57] and STDC-Seg [56], as well as U2-Net† [27] and 2DU2-Net†, are more outstanding in terms of performance, reaching a lightweight level in inference speed, GFLOPs and Params.
In Figure 8, from top to bottom, the first row shows the input image, the second row shows the labeled image, and the third row shows the predicted image of 2DU2-Net†, followed by the predicted images of the other network models; the yellow boxes indicate segmentation of detail, the blue boxes indicate misclassification, and the purple boxes indicate unsegmented regions. Comparison with the labeled and input images shows that all network models could accurately extract the color and location of large exfoliated, high-contrast areas. 2DU2-Net† had the best segmentation results in distinguishing detailed, scattered and edge regions of 2D nanomaterials, with finer contour lines than the other models and the most detailed segmentation of scattered regions, without segmentation errors. The segmentation of U2-Net† [27] has shortcomings and errors at the edges, but is competitive with the other models in performance and results. In terms of segmentation refinement and correctness, the U-Net [47], PSPNet [48], PFPNNet [49], DNLNet [52], OCRNet [55], BiseNetv2 [57] and SFNet [60] models suffer from classification errors on very small details. In discrete regions with low contrast, the PSPNet [48], DeepLabV3+ [51], DNLNet [52], DANNet [53], ISANet [54], BiseNetv2 [57], HRNet [59], SFNet [60] and ANN [41] models leave some fine areas unsegmented. The DeepLabV3 [50], STDC-Seg [56] and FCN [58] models have minor shortcomings in edge detail compared with 2DU2-Net†. Overall, the segmentation results of these models are far superior to those of earlier deep learning-based segmentation models, with 2DU2-Net† outperforming the others in performance, practical results and detail. It can be seen that 2DU2-Net† has a better ability to identify the number of layers in 2D nanomaterials. The experimental results show that the 2DU2-Net† and U2-Net† [27] models, based on the encoder-decoder structure and without a backbone network, have greater advantages in computation, number of parameters, model performance, inference speed, robustness, and generalization ability. They are also effective in segmenting 2D nanomaterial images in practical tests and deployment. It is worth noting that U-shaped-structure-based models are widely used in medical devices [63,78], a setting similar to that of 2D nanomaterials experiments. At the same time, the other types of network models show a clear improvement in segmentation results compared with machine learning-based and earlier neural network models, and their efficiency and refinement far exceed those of manual methods.

Conclusions
We carefully selected 16 recently proposed deep learning-based semantic segmentation models that have achieved good results on public datasets. These models were carefully tuned for optimal performance prior to experimentation, and we trained them on graphene and MoS2 datasets. Quantitative and qualitative analysis found that these models achieved results well above the manual level, with the 2DU2-Net† and U2-Net† [27] models achieving the best overall performance. In the tests, the 2DU2-Net† model achieved 99.03% Accuracy, 95.72% Kappa, 96.97% Dice and 94.18% MIoU. The 2DU2-Net† and U2-Net† [27] models also performed better than other models in terms of computation, number of parameters, inference speed and generalization ability. 2DU2-Net† is designed to better segment 2D nanomaterials at edges and in discrete regions. It is based on the U2-Net† [27] model and adjusts its network structure through multi-scale connectivity and pyramid pooling modules, which improve the segmentation performance over U2-Net† [27] and show a more outstanding capability for processing detail compared with other models.
The results show that deep learning-based semantic segmentation models are novel tools for the fast identification of the layers of 2D nanomaterials, and that these trained models can efficiently identify 2D nanomaterials other than those used for training, with good generalization ability and high accuracy. Secondly, transfer learning and model optimization methods can be used to better distinguish single-layer, double-layer and multi-layer regions for different types of 2D nanomaterials. Finally, the inference process of a deep learning-based model can be adapted to various devices and can be run on a remote server.
Next, we will evaluate more semantic segmentation models and train them on a wider range of 2D nanomaterials. At the same time, we will develop a segmentation toolkit applicable to a wider range of 2D nanomaterials research. Finally, this study will help to optimize the research process of 2D nanomaterials and open up new avenues for layer identification in 2D nanomaterials.


Figure 2.
Figure 2. Basic structure of the residual module: (a) residual module; (b) residual U-block.

Figure 3.
Figure 3. Internal structure of two residual U-blocks: (a) residual U-block; (b) multiscale connected residual U-block.

Figure 6.
Figure 6. Experimental image [66]: (a) original image; (b) label image (single-layer regions marked in red, double-layer in green, background in black).

Figure 7 .
Figure 7. Loss curve when the network is pre-trained.


Table 1.
Network model test results.