Article

Automated Classification of Ultrasonic Signal via a Convolutional Neural Network

Yakun Shi, Wanli Xu, Jun Zhang and Xiaohong Li
1 School of Power and Mechanical Engineering, Wuhan University, Wuhan 430072, China
2 School of Materials and Engineering, Southeast University, Nanjing 211189, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(9), 4179; https://doi.org/10.3390/app12094179
Submission received: 18 March 2022 / Revised: 13 April 2022 / Accepted: 19 April 2022 / Published: 21 April 2022
(This article belongs to the Special Issue Recent Advances of Ultrasonic Testing in Materials)

Abstract

Ultrasonic signal classification in nondestructive testing is of great significance for the detection of defects. Current methods mainly rely on low-level handcrafted features based on traditional signal processing approaches, such as the Fourier transform and the wavelet transform, to interpret the information carried by the signals. This paper proposes an automatic classification method based on a convolutional neural network (CNN) that extracts features directly from raw data to classify ultrasonic signals collected from a circumferential weld, composed of austenitic and martensitic stainless steel, containing internal slots. Experiments demonstrate that our method outperforms the traditional classifier trained on manually extracted features, achieving a classification accuracy of up to 0.982. Furthermore, we visualize the shape, location and orientation of the defects with a C-scan imaging process based on the classification results, validating their effectiveness.

1. Introduction

Ultrasonic testing is a versatile nondestructive testing (NDT) technique for quality assessment and the identification of defects inside a wide variety of materials, including metals, plastics, ceramics and composites, since ultrasonic waves can propagate through these materials as a form of mechanical vibration [1]. Ultrasonic inspection offers many advantages [2], such as accurate flaw location and sizing and high sensitivity to damage. The most commonly used ultrasonic testing approach is the pulse-echo method because of its simplicity, accuracy and efficiency. In this method, a piezoelectric transducer generates ultrasonic waves that propagate through the inspected object. When a defect is encountered, part of the wave is reflected back to the transducer, where it is converted into an electrical signal, also called an A-scan signal, containing information about the location, type, size and orientation of the defect [3]. At present, the interpretation of ultrasonic signals is usually performed manually, so the results are strongly influenced by human factors and rely heavily on the inspector's experience and knowledge. In addition, the process becomes laborious and time-consuming as the volume of collected data increases. An automatic classification system that interprets ultrasonic signals accurately and consistently has therefore become an urgent industrial need for minimizing errors caused by inspectors' subjectivity [4].
In recent decades, the significant progress of artificial intelligence techniques, including ultrasonic pattern recognition and neural networks, has made it possible to classify ultrasonic signals automatically. Neural networks in particular have attracted much attention for this task. Masnata and Sunseri [5] developed an automatic recognition system for weld defects consisting of a three-layered neural network whose inputs are a selection of shape parameters obtained from the pulse-echo by Fisher analysis. Drai et al. [6] extracted features for the discrimination of detected echoes in the time domain, the spectral domain and a discrete wavelet representation; the resulting compact feature vector was then classified by different methods: the K-nearest-neighbor algorithm, a statistical Bayesian algorithm and artificial neural networks. Back-propagation neural networks have been trained on characteristic parameters extracted from ultrasonic signals to determine the type, location and length of cracks in a medium [7]. Cau et al. [8] developed a feed-forward neural network combined with wavelet blind separation to classify the position, width and depth of defects in non-accessible pipes. Sahoo et al. [9] chose the peak amplitude, the signal energy and the time of flight of ultrasonic echo signals as feature indicators and developed a cascade feed-forward back-propagation neural network model to estimate crack size and crack location simultaneously. Two-dimensional information about different types of defects has been collected via the wavelet transform and then used to improve automatic ultrasonic flaw detection and classification accuracy with a neural network [3]. Yang et al. [10] presented automatic ultrasonic flaw signal classification models based on ANNs and support vector machines (SVMs) using wavelet-transform-based strategies for feature extraction. Chen et al. [11] applied the wavelet packet transform to extract feature vectors representing defect qualities and then used these data to achieve accurate classification of welding defects with an SVM-based radial basis function neural network.
Although the above-mentioned approaches achieve good performance for automatic signal classification, they all require choosing and extracting either time- or frequency-domain features from the ultrasonic signals to train the neural network. This feature extraction is highly subjective and empirical, and in most cases it is difficult to extract reliable features that are truly effective for, or relevant to, the defects to be detected. Moreover, it is impractical for industrial applications, since the workload of feature extraction for training neural networks increases dramatically with large amounts of original data. It is therefore necessary to develop an automatic signal classification system that identifies the most relevant features of ultrasonic signals on its own and eliminates subjective feature selection.
Recently, owing to great strides in computing power, deep learning, which can extract and identify effective features automatically, has become a potential solution to this issue. Within deep learning, the convolutional neural network (CNN) has proved highly capable in visual recognition tasks and is widely applied [12]. A deep CNN with a linear SVM top layer has been used to automatically extract features for each signal from wavelet coefficients and perform signal classification for composite materials [13]. Guo et al. [14] converted laser ultrasonic signals into scalograms (images) via the wavelet transform, which were then fed to a pretrained CNN to extract defect features automatically and quantify defect width.
All of the above works rely on the wavelet transform to convert time-domain signals into images (scalograms), since traditional CNNs require high-resolution images as input. This process still involves the manual selection of wavelet-transform parameters and becomes time-consuming for large volumes of raw data. In this paper, a CNN that takes raw time-domain data as input, without the wavelet transform, is designed to automatically extract features for the classification of ultrasonic signals. The approach is validated on a mockup with internal slots. The results demonstrate that our neural network architecture with automatically learned features performs considerably better than classifiers based on handcrafted features from individual domains. Finally, we visualize the shape, location and orientation of the defects with a C-scan image based on the classification results to validate their effectiveness. Figure 1 shows the outline of the paper.

2. Experimental Setup

The sample used for the experiments was a circumferential weld composed of austenitic and martensitic stainless steel. The width and thickness of the weld were 10 mm and 13.6 mm, respectively, as shown in Figure 2a. Six slots were made by electrosparking every 60 degrees on the inner wall of the weld, as shown in Figure 2b. A 45-degree angle-beam probe with a 2 MHz center frequency was used to collect the ultrasonic signals. The scanning area covered 30 mm on both sides of the weld center line, as shown in Figure 2a. After each scanning cycle along the scan axis, the probe stepped 1 mm along the direction perpendicular to the weld line (index axis) until the whole area was covered, while the ultrasonic beam remained parallel to the circumferential direction.
The ultrasonic detection experimental setup used for data collection is shown in Figure 3. It included three systems: a 35 MHz broadband ultrasonic transmitter-receiver system (UTRS) for generating and receiving the ultrasonic pulses from the transducer, a computer system based on a LabVIEW program for storing the ultrasonic data from the inspected specimen, and a probe motion control system (PMCS) with a precision of 1 mm. Ultrasonic waves were transmitted and received by the same transducer, excited by the UTRS. The ultrasonic echoes collected from the specimen were converted into electrical signals, which were then stored in the computer system. The data acquisition card was an NI PCI-5153 manufactured by National Instruments (NI), which can digitize ultrasonic signals with a sampling frequency of 2 GHz and 8-bit resolution.
As a result, a database of 22,631 A-scan signals was acquired, consisting of two classes: slot signals and nonslot signals. The signals were normalized to the range [0, 1]. Six thousand ultrasonic signals with equal numbers of slot and nonslot signals were selected from the original database and randomly divided into a training data set and a testing data set, used to train and test the neural networks, with 5000 and 1000 samples, respectively.
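The data preparation just described can be summarized in a short sketch. The snippet below is illustrative only: it assumes the A-scans are already available as a NumPy array of shape (22631, L) with matching 0/1 labels (nonslot/slot); the function name, random seed and array names are our own choices, not part of the original pipeline.

```python
import numpy as np

def normalize_and_split(signals, labels, n_train=5000, n_test=1000, seed=0):
    """Min-max normalize each A-scan to [0, 1] and split into train/test sets."""
    signals = np.asarray(signals, dtype=np.float32)
    labels = np.asarray(labels)
    mins = signals.min(axis=1, keepdims=True)
    maxs = signals.max(axis=1, keepdims=True)
    normalized = (signals - mins) / (maxs - mins + 1e-12)  # per-signal [0, 1] scaling

    rng = np.random.default_rng(seed)
    order = rng.permutation(len(normalized))
    train_idx = order[:n_train]
    test_idx = order[n_train:n_train + n_test]
    return (normalized[train_idx], labels[train_idx],
            normalized[test_idx], labels[test_idx])
```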

3. Method

The convolutional neural network is a type of deep neural network that contains convolutional layers as well as fully connected layers. Convolution is a mathematical operation widely used in signal processing; in a CNN, the convolutional layers actually compute a cross-correlation, which is technically very similar to a convolution [15].
A CNN comprises two kinds of layers: feature-extraction layers and classification layers. Convolutional and pooling layers extract features, while the fully connected layers perform classification. A convolutional layer is not connected to every node of its input but only to local regions defined by its filters (convolutional kernels). This architecture lets the network concentrate on low-level features, which are then assembled into high-level features. Because the filters share parameters, a CNN can learn a pattern at one location and recognize it at another. Pooling layers subsample their input, which reduces the computational load and helps avoid overfitting [15].
Accuracy was used to evaluate the performance of the proposed CNN model on the testing data set. Accuracy is defined as the number of correctly predicted observations divided by the total number of predictions made; Equation (1) gives its mathematical form:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (1)$$
where TP, TN, FP and FN stand for true positive, true negative, false positive and false negative, respectively.
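As a quick illustration of Equation (1), the following minimal Python sketch computes accuracy from the four confusion-matrix counts; the example counts are hypothetical and are not taken from the paper's results.

```python
def accuracy(tp, tn, fp, fn):
    """Equation (1): correct predictions over all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical split of 1000 test signals into 982 correct and 18 wrong predictions.
print(accuracy(tp=491, tn=491, fp=9, fn=9))  # -> 0.982
```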

3.1. Architecture of the Proposed CNN

The convolutional neural network adopted in this paper was built with the TensorFlow framework. The hardware platform was a desktop computer with an Intel Core i5-9400 CPU, 16 GB of RAM and a 256 GB SSD. Figure 4 shows the structure of the proposed CNN model, made up of two convolutional layers, two pooling layers, two fully connected layers and one output layer.
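A minimal Keras sketch of this architecture is given below for illustration. The layer counts follow Figure 4 and the 500-neuron middle fully connected layer anticipates the choice made in Section 4.1, but the filter counts, kernel sizes, pooling sizes, input length and the size of the second fully connected layer are assumptions, since these hyperparameters are not reported in the text.

```python
import tensorflow as tf

def build_cnn(signal_length=2500, n_classes=2, middle_neurons=500):
    """Two conv + two pooling + two fully connected layers and an output layer,
    mirroring Figure 4; all sizes other than the 500-neuron middle fully
    connected layer are illustrative assumptions."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(signal_length, 1)),                   # raw A-scan, one channel
        tf.keras.layers.Conv1D(16, 16, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(pool_size=4),
        tf.keras.layers.Conv1D(32, 8, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(pool_size=4),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(middle_neurons, activation="relu"),   # middle fully connected layer
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),     # output layer
    ])
```

Calling `build_cnn().summary()` prints the layer-by-layer output shapes, which can be checked against Figure 4.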
The mean square error (MSE) was employed as the loss function of the CNN; it is defined as
$$MSE = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2 \quad (2)$$
where $\hat{y}_i$ is the target value and $y_i$ is the prediction of the network for the $i$-th input. To enhance generalization and reduce the risk of overfitting, a regularized loss function $MSE_{reg}$ was used to train the CNN after random initialization; it is defined as
$$MSE_{reg} = \gamma \cdot MSE + \frac{1-\gamma}{n}\sum_{j=1}^{n} w_j^{2} \quad (3)$$
where $w_j$ are the network weights and $\gamma$ is a manually set hyperparameter.
In addition, a decaying learning rate with an initial value of 0.01 was adopted to avoid the gradient disappearance caused by an overly large learning rate. The rectified linear unit, defined as $\mathrm{ReLU}(x) = \max(0, x)$, was used as the activation function of the CNN. To make the CNN more robust, a moving average was applied to all trainable variables of the network.
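The training configuration of this subsection can be sketched as follows. The value of $\gamma$, the exponential form of the learning-rate decay and the moving-average decay rate are assumptions, since only the initial learning rate of 0.01 is stated in the text.

```python
import tensorflow as tf

def regularized_mse(y_true, y_pred, weights, gamma=0.9):
    """Equation (3): gamma * MSE + (1 - gamma)/n * sum_j w_j^2 over all weights.
    gamma = 0.9 is an illustrative choice."""
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    n = tf.add_n([tf.cast(tf.size(w), tf.float32) for w in weights])
    l2 = tf.add_n([tf.reduce_sum(tf.square(w)) for w in weights]) / n
    return gamma * mse + (1.0 - gamma) * l2

# Decaying learning rate starting from 0.01 (decay schedule assumed exponential).
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=100, decay_rate=0.96)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)

# A moving average of the trainable variables can be maintained with
# tf.train.ExponentialMovingAverage(decay=0.99) and applied after each step.
```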

3.2. Manually Selected Features

To quantify how much better the proposed method performs than a handcrafted-features method, a typical handcrafted-features approach was implemented for comparison.
In this approach, the original signals were first processed by the Fourier transform and the wavelet transform to extract features manually in the spectral domain and the time-frequency domain, respectively. All extracted features were then used as input to train a neural network. An eighteen-dimensional feature vector $v$ was obtained for each signal as follows:
(1) Maximum amplitude in the time domain: $Max_T = \max(T_i)$
(2) Minimum amplitude in the time domain: $Min_T = \min(T_i)$
(3) Maximum difference in the time domain: $\Delta T = Max_T - Min_T$
(4) Standard deviation in the time domain: $Std_T = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(T_i - \bar{T})^2}$
(5) Envelope area in the time domain: $S_T = \sum_{i=1}^{n} T_i$
(6) Maximum amplitude in the spectral domain: $Max_F = \max(F_i)$
(7) Minimum amplitude in the spectral domain: $Min_F = \min(F_i)$
(8) Maximum difference in the spectral domain: $\Delta F = Max_F - Min_F$
(9) Standard deviation in the spectral domain: $Std_F = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(F_i - \bar{F})^2}$
(10) Envelope area in the spectral domain: $S_F = \sum_{i=1}^{n} F_i$
where $T_i$ and $F_i$ are the amplitude values at the $i$-th sampling point in the time domain and the spectral domain, respectively, and $\bar{T}$ and $\bar{F}$ are the corresponding mean amplitudes. The remaining eight features are the ratios of the energy of each frequency band to the total energy after a third-level wavelet decomposition of the signal.
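To make this baseline concrete, the sketch below assembles such an eighteen-dimensional vector with NumPy and PyWavelets. The choice of the 'db4' mother wavelet, the use of the FFT magnitude spectrum as the spectral domain and the wavelet-packet implementation of the third-level band energies are assumptions; the paper does not specify these details.

```python
import numpy as np
import pywt  # PyWavelets

def handcrafted_features(signal, wavelet="db4"):
    """18-D feature vector of Section 3.2: five time-domain statistics,
    five spectral statistics and eight band-energy ratios from a
    three-level wavelet packet decomposition (mother wavelet assumed)."""
    t = np.asarray(signal, dtype=np.float64)
    f = np.abs(np.fft.rfft(t))  # magnitude spectrum as the "spectral domain"

    def stats(x):
        # max, min, max-min, sample std (1/(n-1)) and summed amplitude: features (1)-(5) / (6)-(10)
        return [x.max(), x.min(), x.max() - x.min(), x.std(ddof=1), x.sum()]

    wp = pywt.WaveletPacket(t, wavelet=wavelet, maxlevel=3)
    band_energy = np.array([np.sum(node.data ** 2)
                            for node in wp.get_level(3, order="freq")])
    energy_ratios = band_energy / band_energy.sum()  # eight ratios, features (11)-(18)

    return np.array(stats(t) + stats(f) + energy_ratios.tolist())
```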

4. Results and Discussion

4.1. Effect of Number of Middle Fully Connected Layer Neurons on Accuracy

The number of neurons in the middle fully connected layer is a key parameter that affects the final accuracy of the trained CNN, since it also determines the number of features the CNN extracts automatically. Changes in the number of these features therefore have a significant impact on the recognition rate of the proposed network. The performance of networks with different numbers of middle-layer neurons was evaluated on the testing data set, as shown in Figure 5. The accuracy values are all above 0.95 and show a jagged rise, eventually approaching a maximum of 0.972, first reached with 500 middle-layer neurons. Detailed accuracy values for different numbers of middle-layer neurons are listed in Table 1.
Based on these results, a network with 500 neurons in the middle fully connected layer was employed for signal identification in this task, which also implies that a 500-dimensional feature vector characterizes the signals well enough. Figure 6 shows the convergence curve of the loss function during training; the network converged rapidly after only 80 epochs of 100 iterations each.
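For completeness, a hedged training sketch is shown below. It reuses `build_cnn`, `optimizer` and the `x_train`/`y_train`/`x_test`/`y_test` arrays from the earlier sketches; the batch size and the plain MSE loss (without the regularization term wired into Keras) are simplifications of the procedure described above.

```python
import numpy as np
import tensorflow as tf

# build_cnn, optimizer and the data arrays come from the previous sketches.
model = build_cnn(signal_length=x_train.shape[1], middle_neurons=500)
model.compile(optimizer=optimizer, loss="mse",
              metrics=[tf.keras.metrics.CategoricalAccuracy(name="accuracy")])

history = model.fit(x_train[..., np.newaxis], tf.one_hot(y_train, depth=2),
                    epochs=100, batch_size=64, verbose=0)  # 100 epochs as in Figure 6
test_loss, test_acc = model.evaluate(x_test[..., np.newaxis],
                                     tf.one_hot(y_test, depth=2), verbose=0)
print(f"test accuracy: {test_acc:.3f}")
```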

4.2. Comparison between the Proposed Approach and the Handcrafted Features Approach

To validate the effectiveness of the proposed method, the handcrafted-features approach was used for comparison. The eighteen-dimensional feature vector of Section 3.2 was used in place of the original signals to train a classifier. The comparison results are shown in Table 2, where the recognition rates in the time domain, the frequency domain, the time-frequency domain and the fusion of all three are listed under 'Handcrafted approach'. The testing accuracy in each individual domain is lower than that of the proposed method, although the fusion of all handcrafted features slightly outperforms it. However, the handcrafted approach faces a severe workload burden as the amount of collected data increases; balancing efficiency and accuracy, the proposed method is better suited to industrial applications. Moreover, the accuracy of the manual feature-extraction method fluctuates over a wide range, from 0.796 to 0.981, depending on which features are employed. This problem is avoided by the proposed method, which shows good robustness.

4.3. Visualization of Classification Results

Ultrasonic signals can be displayed in different ways for defect determination. Although the A-scan is the most common representation in nondestructive testing, a two-dimensional representation obtained through a C-scan imaging process is often more useful. The C-scan shows the top view of the sample, parallel to the scanning surface. By integrating the amplitudes of the A-scan signals within a time range over the surface, a C-scan image can confirm the presence of a defect in the sample from changes in amplitude. However, the traditional C-scan imaging process is very sensitive to random noise introduced during the scanning procedure.
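For reference, the traditional amplitude-based C-scan criticized here can be sketched in a few lines; the array layout and the gate bounds are assumptions, not values from the paper.

```python
import numpy as np

def amplitude_cscan(volume, gate=(200, 800)):
    """Naive C-scan: peak rectified amplitude of each A-scan within a time gate.
    `volume` is assumed to be an (index, scan, time) array of A-scans;
    the gate sample bounds are illustrative."""
    gated = np.abs(volume[:, :, gate[0]:gate[1]])
    return gated.max(axis=-1)  # one pixel per probe position
```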
As an alternative to the traditional C-scan imaging process, our C-scan image is produced according to the following steps (a sketch of the procedure follows the list):
  • With the proposed classifier, each A-scan signal is assigned to one of the known defect classes, and these classification results serve as input for the C-scan imaging of the sample.
  • For the signal points in each defect class, connected component labeling is performed on the C-scan image to determine the defect areas on the sample.
  • Points whose neighbors belong to the same class are retained on the image; the others are eliminated from the C-scan image as random noise.
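The listed steps can be sketched with SciPy's connected-component labeling as follows. The 2-D label-map layout and the minimum component size of two points (i.e., a point must have at least one same-class neighbor to survive) are our interpretation of the procedure, not parameters stated in the paper.

```python
import numpy as np
from scipy import ndimage

def classified_cscan(class_map, min_size=2):
    """Build a defect mask from per-position classification results and
    suppress isolated points via connected-component labeling.

    `class_map` is a 2-D array (index axis x scan axis) of predicted labels,
    1 for slot signals and 0 for nonslot signals (layout assumed)."""
    defect_mask = (class_map == 1)
    labeled, n_components = ndimage.label(defect_mask)
    cleaned = np.zeros_like(defect_mask)
    for comp in range(1, n_components + 1):
        component = (labeled == comp)
        if component.sum() >= min_size:   # keep points that have same-class neighbors
            cleaned |= component
    return cleaned
```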
Figure 7 is a naive C-scan image generated from the amplitude of the original signals. Noise appears across the whole weld area because this way of generating the image is too sensitive to amplitude variations in the original signals. The above procedure is then used to produce Figure 8, which consists of processed C-scan images based on the classification results of the (a) fusion method, (b) automatic method, (c) time-domain method, (d) frequency-domain method and (e) time-frequency-domain method, corresponding to Table 2.
From Figure 8, the following conclusions can be drawn:
(1) The distribution of defects in each C-scan image conforms well to the real defects made in the mockup.
(2) The image quality of the defects gradually decreases from top to bottom in Figure 8, which is consistent with the classification results.
(3) In terms of image quality, Figure 8a–e are superior to Figure 7; the proposed method provides a C-scan visualization with higher credibility for defect determination.
(4) The proposed method still cannot completely remove noise, and the next step is to develop effective denoising algorithms for more accurate defect identification.

5. Conclusions

Artificial intelligence, especially neural networks, has been increasingly applied to ultrasonic signal classification. Conventional approaches require the manual selection and extraction of either time- or frequency-domain features from the original ultrasonic signals for training the neural network model. However, this procedure is subjective, empirical and time-consuming, and in most cases it is difficult to extract the features that are most relevant to the defects to be detected. In this paper, an automatic classification method using a convolutional neural network is proposed. The novelty of this work is the automatic extraction of features from the original ultrasonic signals, with raw time-domain data as input, avoiding the effort and inaccuracy caused by manual feature selection.
An ultrasonic detection experimental setup was first used to collect data from a specimen with internal slots. Through the optimization of the network structure, a network with 500 neurons in the middle fully connected layer was adopted for signal classification. To further validate the effectiveness of the proposed method, handcrafted time- and frequency-domain features were used for comparison. The results demonstrate that the recognition rate in each individual domain is lower than that of the proposed method, although the fusion of all handcrafted features slightly outperforms it. The proposed method shows good robustness and avoids the massive workload of manual feature extraction. Finally, we visualized the classification results to present detailed information about the defects. Compared with the traditional C-scan based on the amplitude of the collected signals, our method provides a C-scan visualization with higher credibility for defect determination.
The main objective of this work is to demonstrate the feasibility and effectiveness of the proposed method; therefore, only a specimen with internal slots was investigated. In real industrial applications, as long as sufficient data covering the various defect types (dimensions, orientations, locations and shapes) can be acquired, the developed method can achieve accurate classification of the corresponding defects.

Author Contributions

Conceptualization, Y.S.; Data curation, W.X.; Methodology, Y.S.; Validation, W.X.; Writing—original draft, Y.S.; Writing—review & editing, W.X., J.Z. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (Grant No. 2018YFB1106100).

Data Availability Statement

The data presented in this article are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schmerr, L.W., Jr. Fundamentals of Ultrasonic Nondestructive Evaluation: A Modeling Approach; Springer Series in Measurement Science and Technology; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  2. Veiga, J.L.; De Carvalho, A.A.; Da Silva, I.C.; Rebello, J.M. The Use of Artificial Neural Network in the Classification of Pulse-Echo and TOFD Ultrasonic Signals. J. Braz. Soc. Mech. Sci. 2005, 27, 394–398. [Google Scholar]
  3. Sambath, S.; Nagaraj, P.; Selvakumar, N. Automatic Defect Classification in Ultrasonic NDT Using Artificial Intelligence. J. Nondestruct. Eval. 2011, 30, 20–28. [Google Scholar] [CrossRef]
  4. Iyer, S.; Sinha, S.K.; Tittmann, B.R.; Pedrick, M.K. Ultrasonic signal processing methods for detection of defects in concrete pipes. Autom. Constr. 2012, 22, 135–148. [Google Scholar] [CrossRef]
  5. Masnata, A.; Sunseri, M. Neural network classification of flaws detected by ultrasonic means. NDT E Int. 1996, 29, 87–93. [Google Scholar] [CrossRef]
  6. Drai, R.; Khelil, M.; Benchaala, A. Time frequency and wavelet transform applied to selected problems in ultrasonics NDE. NDT E Int. 2002, 35, 567–572. [Google Scholar] [CrossRef]
  7. Liu, S.W.; Huang, J.H.; Sung, J.C.; Lee, C.C. Detection of cracks using neural networks and computational mechanics. Comput. Methods Appl. Mech. Eng. 2002, 191, 2831–2845. [Google Scholar] [CrossRef]
  8. Cau, F.; Fanni, A.; Montisci, A.; Testoni, P.; Usai, M. Artificial neural networks for non-destructive evaluation with ultrasonic waves in not accessible pipes. In Proceedings of the 2005 IEEE Industry Applications Conference, Hong Kong, China, 2–6 October 2005; pp. 685–692. [Google Scholar]
  9. Sahoo, A.K.; Zhang, Y.; Zuo, M.J. Estimating crack size and location in a steel plate using ultrasonic signals and CFBP neural networks. In Proceedings of the 2008 Canadian Conference on Electrical and Computer Engineering, Niagara Falls, ON, Canada, 4–7 May 2008. [Google Scholar]
  10. Yang, P.; Li, Q.F. Wavelet transform-based feature extraction for ultrasonic flaw signal classification. Neural Comput. Appl. 2014, 24, 817–826. [Google Scholar] [CrossRef]
  11. Chen, Y.; Ma, H.W.; Dong, M. Automatic classification of welding defects from ultrasonic signals using an SVM-based RBF neural network approach. Insight 2018, 60, 194–199. [Google Scholar] [CrossRef]
  12. Case, T.J.; Waag, R.C. Flaw identification from time and frequency features of ultrasonic waveforms. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 1996, 43, 592–600. [Google Scholar] [CrossRef]
  13. Meng, M.; Chua, Y.J.; Wouterson, E.; Ong, C.P. Ultrasonic signal classification and imaging system for composite materials via deep convolutional neural networks. Neurocomputing 2017, 257, 128–135. [Google Scholar] [CrossRef]
  14. Guo, S.; Feng, H.; Feng, W.; Lv, G.; Chen, D.; Liu, Y.; Wu, X. Automatic Quantification of Subsurface Defects by Analyzing Laser Ultrasonic Signals Using Convolutional Neural Networks and Wavelet Transform. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2021, 68, 3216–3225. [Google Scholar] [CrossRef] [PubMed]
  15. Douglass, M.J.J. Hands-on Machine Learning with Scikit-Learn, Keras, and Tensorflow, 2nd ed.; O’Reilly Media, Inc.: Newton, MA, USA, 2020; Volume 43, pp. 1135–1136. [Google Scholar]
Figure 1. The workflow of our study on classifications of ultrasonic signals.
Figure 2. The detected circumferential weld: (a) the side view of the mockup; (b) the cross-section view of the mockup.
Figure 3. The ultrasonic detection experimental setup for data acquisition.
Figure 4. The structure of the proposed CNN.
Figure 5. The accuracy variation of CNNs with the number of middle-layer neurons.
Figure 6. The convergence curve of the loss function during training of the final determined CNN.
Figure 7. The C-scan image generated based on the amplitude of original A-scan signals obtained by detection.
Figure 8. Processed C-scan images based on different classification results: (a) fusion; (b) automatic method; (c) time domain; (d) time-frequency domain; (e) frequency domain.
Table 1. The accuracy variation with the number of middle-layer neurons.

No. of Middle-Layer Neurons | Accuracy
10  | 0.958
50  | 0.962
100 | 0.960
150 | 0.965
200 | 0.968
250 | 0.965
300 | 0.970
350 | 0.967
400 | 0.968
450 | 0.970
500 | 0.972
550 | 0.970
600 | 0.972
650 | 0.972
Table 2. Performance comparison of the handcrafted features approach and the automatic approach.

Method                              | Accuracy
Handcrafted (time domain)           | 0.961
Handcrafted (frequency domain)      | 0.903
Handcrafted (time-frequency domain) | 0.948
Handcrafted (fusion)                | 0.988
Automatic approach                  | 0.982
