Article

Recognition Method for Broiler Sound Signals Based on Multi-Domain Sound Features and Classification Model

1 School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou 213001, China
2 Electronic Engineering College, Heilongjiang University, Harbin 150080, China
3 Reliability Institute for Electric Apparatus and Electronics, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(20), 7935; https://doi.org/10.3390/s22207935
Submission received: 30 August 2022 / Revised: 2 October 2022 / Accepted: 12 October 2022 / Published: 18 October 2022
(This article belongs to the Section Intelligent Sensors)

Abstract:
In view of the limited number of extracted sound features, the lack of in-depth analysis of applicable sound features, and the lack of in-depth study of the selection basis and optimization process of classification models in the existing broiler sound classification or recognition research, the author proposes a recognition method for broiler sound signals based on multi-domain sound features and classification models. The implementation process is divided into the training stage and the testing stage. In the training stage, the experimental area is built, and multiple segments of broiler sound signals are collected and filtered. Through sub-frame processing and endpoint detection, the combinations of start frames and end frames of multiple sound types in the broiler sound signals are obtained. A total of sixty sound features from the four aspects of the time domain, frequency domain, Mel-Frequency Cepstral Coefficients (MFCC), and sparse representation are extracted from each frame signal to form multiple feature vectors. These feature vectors are labeled manually to build the data set. The min–max standardization method is used to process the data set, and the random forest is used to calculate the importance of the sound features. Then, the thirty sound features that contribute the most to the classification effect of the classification model are retained. On this basis, classification models based on seven classification algorithms are trained, the best-performing classification model, based on k-Nearest Neighbor (kNN), is obtained, and its inherent parameters are optimized to obtain the optimal classification model. The test results show that the average classification accuracy achieved by the decision-tree-based classifier (abbreviated as DT classifier) on the data set improved by 0.6% after min–max standardization processing, the average classification accuracy achieved by the DT classifier improved by 3.1% after feature selection, the average classification accuracy achieved by the kNN-based classification model improved by 1.2% after parameter optimization, and the highest classification accuracy is 94.16%. In the testing stage, for a segment of the broiler sound signal collected in the broiler captivity area, the combinations of the start frames and end frames of the multiple sound types in the broiler sound signal are obtained through signal filtering, sub-frame processing, endpoint detection, and other steps. The thirty sound features are extracted from each frame signal to form the data set to be predicted. The optimal classification model is used to predict the labels of each piece of data in the data set to be predicted. By performing majority voting processing on the predicted labels of the data combination corresponding to each sound type, the common labels, that is, the predicted types, are obtained. On this basis, the definition of recognition accuracy for broiler sound signals is proposed. The test results show that the classification accuracy achieved by the optimal classification model on the data set to be predicted is 93.57%, and the recognition accuracy achieved on the multiple segments of the broiler sound signals is 99.12%.

1. Introduction

Relevant studies have shown that animal sounds contain much emotional information, such as crows when they are hungry [1], screams when they are in pain [2,3], and weeps when they are separated [4]. Therefore, sounds can provide feedback on an animal's own body condition and on changes in its external environment, and many scholars have carried out relevant research. For example, Marx et al. [5] studied the voices of chicks in the process of grouping, including distress calls, short peeps, warbles, and pleasure notes, and found that the voices change with the growth environment and social ability. Zeltner et al. [6] studied the fear reactions (especially the calls) of laying hens of three different strains when they are attacked by a predator. Fontana et al. [7] studied the peak frequency of broiler sound signals and concluded that there is a significant correlation between the peak frequency of broiler sound signals and body weight: the more the broilers grow, the lower the frequency of the sounds they make.
During their growth, broilers are easily infected with diseases such as bronchitis, laryngotracheitis, avian influenza, pullorum, etc. The upper respiratory mucosa of diseased broilers is strongly stimulated, which causes a low cough and makes them sluggish [8]. Therefore, the sounds represented by the cough can provide feedback on the mental state of the broiler and indirectly express its physical condition, that is, its health status. During an investigation of Huixing Farm in Liushu Town, Linkou County, Mudanjiang City, Heilongjiang Province, the author and team members learned that experienced breeders exploit the phenomenon that “a sick chicken coughs” and have summarized an effective method for judging broiler health. That is, in the process of distributing feed to each broiler captivity area, the breeders listen for coughing to roughly determine whether there are diseased broilers in the current broiler captivity area. It should be noted that twenty-five to thirty broilers are usually cultivated in each broiler captivity area. In addition, bronchitis, laryngotracheitis, avian influenza, pullorum, and other diseases are contagious; often, after one broiler becomes diseased, a similar disease spreads quickly through the broiler captivity area. Therefore, the timely detection of diseased broilers can greatly reduce losses [9].
It can be seen from the above-mentioned broiler health judgment method that accurately capturing and recognizing coughs in the broiler sound signals is key to judging the health of the broilers. In fact, by applying sensor technology and wireless communication technology, we can design an audio collection terminal, place it in the broiler captivity area to collect segments of the broiler sound signal, and send them to a remote computer. On this basis, if the coughs can be accurately recognized from the broiler sound signal, automatic broiler health monitoring can be realized. Under normal circumstances, the broiler sound signal contains sound types that represent the growth status or physiological characteristics of the broilers, such as crowing, coughing, purring, and so on. Therefore, the accurate recognition and classification of the sound types contained in the broiler sound signal is key to achieving the accurate recognition of coughs.
Many scholars have conducted a series of studies on recognition methods for broiler sound signals in different states or for different sound types in the broiler sound signals. For example, Lee et al. [10] used the Support Vector Machine (SVM) as a classification model to identify the sound types while the laying hens suffered physical stress from changes in temperature and mental stress from fear. Cheng et al. [11] proposed a new chicken voice recognition method. This method used sparse representation to complete the noise reduction and feature extraction from the reconstructed voices. It also used SVM to classify the chicken voices under different environments. After extracting the features from the sound signals of Hailan brown laying hens, Yu et al. [12] analyzed the correlations between the calls and behaviors of the laying hens. In the subsequent research [13], several machine learning algorithms were used to classify and recognize the different sound signals of the laying hens, and the J48 decision tree proved to achieve the highest classification accuracy. Cao et al. [14], from the same research group, used the principle that different sound types have different power spectral densities to carry out research on the classification and identification of the calls and fan noise of Hailan brown laying hens. This research helped to realize the detection and extraction of animal sounds in a noisy environment.
It can be seen from the above studies that the current research on broiler sound signal classification is mainly concentrated on the extraction of sound features, including the extraction of sound features that can distinguish the different sound types in broiler sound signals and the extraction of sound features that can distinguish the sounds or calls made by broilers from environmental noises. On this basis, these studies built data sets and trained classification models based on machine learning classification algorithms to complete the recognition of different sound types or different sounds. Although these methods achieved satisfactory results during the early development process, three problems still exist. The first problem is the extraction of sound features. In these studies, one or a few commonly used sound features were extracted from the time domain or the frequency domain to build the data sets, without considering the wider range of sound features that can be extracted from multiple domains. The second problem is the analysis of the sound features. These studies did not analyze whether the selected sound features are applicable to the current classification problem. In general, general-purpose sound features often perform poorly in dealing with specialized problems. The third problem is the training of the classification model. These studies chose to train the classification model based on a certain machine learning classification algorithm according to experience but did not provide the selection basis. Meanwhile, they did not optimize the inherent parameters of the trained classification model to improve its classification effects further.
It is worth mentioning that, in addition to the SVM and the decision tree mentioned above, new classification methods, when combined with some optimization algorithms, have also proven effective in solving classification problems in various fields, such as support matrix machines, multi-class fuzzy support matrix machines, deep stacked support matrix machines, twin robust matrix machines, and so on. For example, Xu et al. [15] used support matrix machines, double support matrix machines, and near-end support matrix machines to solve the classification problem of two-dimensional image data. Experiments showed that the three algorithms achieved stable and efficient classification results. Horng [16] applied a multi-class support vector machine classifier to solve the image classification problem of the supraspinatus muscle and achieved better performance than the other implementation methods. By establishing the objective function of a non-parallel hyperplane and integrating fuzzy attributes, Pan et al. [17] proposed a multi-class fuzzy support matrix machine (MFSMM), which was used to solve the classification problem of two kinds of roller-bearing experimental data. Hang et al. proposed a new deep stacked support matrix machine (DSSMM) to improve the performance of existing shallow matrix classifiers in EEG classification. Pan et al. [18] proposed a new non-parallel classifier called the twin robust matrix machine (TRMM) and applied it to roller-bearing fault diagnosis. The experimental results show that the method has excellent fault diagnosis performance, especially in the presence of abnormal samples.
Building on previous research results and addressing the shortcomings of the existing broiler sound signal classification research, in this paper, the author proposes a recognition method for broiler sound signals based on multi-domain sound features and classification models. Specifically, in view of the limited sound features extracted in the existing broiler sound signal recognition methods, we extracted a total of sixty sound features from four domains, including the time domain, frequency domain, Mel-Frequency Cepstral Coefficients (MFCC), and sparse representation. These sound features can fully describe the differences between the different sound types in the broiler sound signals. In view of the lack of in-depth analysis of sound features in the existing broiler sound signal recognition methods, we used a random forest to calculate the importance of the sound features and retained those sound features that can effectively reflect the differences among the sound types to build a high-quality data set. To address the lack of in-depth comparison and parameter optimization of the classification models in the existing broiler sound signal recognition methods, we trained classification models based on different machine learning classification algorithms, compared them to obtain the best performer, and optimized its inherent parameters to improve its classification effects. In this way, the optimal classification model can accurately and effectively recognize each sound type in a segment of the broiler sound signal and complete the cough recognition in the same process. This method is an important basis for follow-up research on automatic broiler health monitoring.
We summarize the contributions of this paper in the following:
(1)
Extract multiple sound features from the four domains of time domain, frequency domain, MFCC, and sparse representation to fully describe the differences between the sound types in the broiler sound signals.
(2)
Use random forest to calculate the importance of each sound feature and retain those sound features that contribute greatly to the classification effects of the classification model to build a high-quality data set.
(3)
Train classification models based on different classification algorithms, compare them to obtain the best performer, and optimize its inherent parameters to obtain the optimal classification model.
(4)
Combine the prediction results of the classification model and majority voting processing, provide a recognition process for sound types, and newly propose the definition of the recognition accuracy for broiler sound signals.

2. Methods

In order to carry out the research on the recognition method for the broiler sound signals based on multi-domain sound features and a classification model, we divided the whole research process into two stages, as shown in Figure 1. The first is the training stage. In this stage, we used the audio collection system to collect multiple segments of the broiler sound signals in the broiler captivity area and performed signal filtering on the original sound signals. Through sub-frame processing and endpoint detection, we processed each segment of the broiler sound signal into multiple frame signals and obtained the combinations of multiple frame signals corresponding to each sound type. We extracted multiple sound features from each frame signal to construct multiple feature vectors and obtained multiple pieces of data. The labels of the data were manually assigned by determining the sound type corresponding to each combination of frame signals. In this way, multiple pieces of labeled data can be obtained from the multiple frame signals contained in the multiple segments of the broiler sound signals, and the data set can be constructed. On this basis, the classification models based on different machine learning classification algorithms were trained, and the inherent parameters of the one with the best performance were optimized to attain the optimal classification model. The second is the testing stage. For a segment of the broiler sound signal collected in the broiler captivity area, after signal filtering, we performed sub-frame processing and endpoint detection to obtain multiple frame signals and the combinations of multiple frame signals corresponding to each sound type. By extracting the sound features from each frame signal, a number of unlabeled data and the combinations of unlabeled data corresponding to each sound type can be obtained. We applied the optimal classification model to predict the labels of these data. Through majority voting processing, the common labels of the multiple unlabeled data corresponding to each sound type were determined. At this point, the sound types contained in a segment of the broiler sound signal are recognized; that is, the broiler sound signal recognition method proposed in this paper is complete. The above two stages are described in detail in the rest of this section.

2.1. Broiler Sound Signal Collection

The broiler sound signals were collected in the No. 3 breeding greenhouse at Huixing Farm (44.91° N, 130.02° E), Liushu Town, Linkou County, Mudanjiang City, Heilongjiang Province, China, from 19 to 20 August 2022. This experiment was approved by the Laboratory Animal Ethics Committee of Heilongjiang University; the approval number is 20220811007, and the approval date was 18 August 2022. The No. 3 breeding shed is oriented east–west, is 122.0 m long, 18.8 m wide, and 4.6 m high at the roof, is single-story, and is laid out with four rows of walkways. At the westernmost entrance of the shed, there is a reserved space 3.2 m long and 18.8 m wide, which is used for walking and placing breeding feed. The author used a fence to enclose an area 1.2 m long and 2 m wide in the southwest corner as the experimental area and covered the area with 50-to-100-mm-thick wood chips to be used as bedding. This area is about two meters away from the nearest broiler captivity area. It should be noted that when the author directly arranged the audio collection system in the broiler captivity area in the breeding greenhouse, the broilers close to the audio collection system would scream or flap their wings because broilers are afraid of strangers and electronic equipment. These abnormal and high-frequency sounds would quickly affect the other broilers around them, so the collected signals could not truly reflect the growth conditions of the broilers in the captivity area. Under normal growth conditions, the broilers are in a quiet state; individual broilers occasionally crow, cough, etc., but this behavior does not spread through the flock. In contrast, most of the sound signals collected under the abnormal conditions described above are overlapping broiler sound signals, which are not conducive to the early stage of this research. Therefore, the author built an experimental area far away from the broiler captivity area in order to collect continuous single-broiler sound signals without overlapping, which is helpful for the early stage of this study.
The specific breed of broiler in the No. 3 breeding greenhouse is the white-feather broiler. Generally, its incubation period is twenty-one days (about three weeks), and its growth period is fifty days (about seven weeks). The test objects selected by the author were thirty white-feather broilers of the same batch at eight weeks old, twenty of which were healthy broilers and ten of which were diseased broilers. They were placed in the experimental area individually in turn, and the audio collection system was used to collect the sound signals. It should be noted that healthy and diseased broilers were selected together because only the sound signals of diseased broilers contain coughs. The audio collection system used a PXI-1050 chassis from National Instruments Co., Ltd. (Austin, TX, USA). Within the chassis, the controller model is PXI-8196, and the sound capture card model is NI4472B (8-channel synchronous collection, 24-bit resolution, and 102.4 kS/s sampling frequency). The sound sensor selected was the MPA201 model from BSWA TECHNOLOGY CO., LTD. (Beijing, China); its response frequency range is 20 Hz to 20,000 Hz, and its sensitivity is 50 mV/Pa. The sound recording software chosen was NI Sound and Vibration Assistant 2010, with a sampling frequency of 32 kHz and a sampling accuracy of 16 bits, performing monophonic collection. The audio collection system collected each segment of the sound signal in a time unit of five minutes, with an interval of thirty seconds between two adjacent sound signals, and the collected sound signals were stored on the computer in the “*.tdms” format. Reference [19] contains the specific sound collection and processing information mentioned above. Finally, the author collected fifty segments of sound signals from the healthy broilers and twenty segments of sound signals from the diseased broilers.
In fact, because the experimental area was relatively far from the broiler breeding area and the wood chips were laid in advance, the broiler sound signals collected by the audio collection system contained little background noise. In addition, the author copied the collected broiler sound signals to the computer and performed signal filtering based on the Wiener filtering method to obtain sound signals with a high signal-to-noise ratio (SNR) that can be used for sound feature extraction. That is, the broiler sound signals mentioned in the following description, unless otherwise specified, are all sound signals after signal filtering. This part of the research is summarized in another academic article [19] and will not be described in detail here. For each five-minute sound signal, the author performed pulse extraction and endpoint detection. By setting the detection threshold to 1.5 times the total energy of the leading non-signal (silent) frames [20], the effective signal pulses can be retained. Through manual recognition of the broiler sound signals saved after pulse extraction and endpoint detection, the detectable sound types include crows, coughs, purrs, and flapping wings. Among them, the crow is the natural short cheeping made during the growth of healthy broilers, and the sound is relatively loud and sharp. The cough is the prolonged abnormal croak of the diseased broilers, and the sound is relatively low. The purr is a low, continuous, undulating sound made when a foreign body is stuck in the throat of the broiler. The flapping wings sound is produced by the friction and vibration between the wings and the air when the broiler is moving; its amplitude is large, and its duration is long. At this point, the signal and data source conditions for carrying out the research on recognition methods for broiler sound signals based on multi-domain sound features and classification models are in place.
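For readers who want to prototype this pipeline, the following Python sketch illustrates the kind of framing and energy-based endpoint detection described above. The frame length, hop size, and number of leading frames are illustrative assumptions rather than values from the study; only the 1.5× leading-frame-energy threshold follows the text.

```python
import numpy as np
from scipy.signal import wiener

def frame_signal(x, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames (frame_len/hop are assumed values)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n_frames)])

def detect_endpoints(frames, n_lead=10, factor=1.5):
    """Mark a frame as active when its short-term energy exceeds `factor` times
    the mean energy of the leading (assumed signal-free) frames, then return the
    (start_frame, end_frame) combinations of contiguous active runs."""
    energy = np.sum(frames ** 2, axis=1)
    threshold = factor * energy[:n_lead].mean()
    active = energy > threshold
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(active) - 1))
    return segments

# Toy usage: a quiet noise floor with one tone burst in the middle.
x = np.random.randn(32000) * 0.01
x[12000:16000] += np.sin(2 * np.pi * 1500 * np.arange(4000) / 32000)
filtered = wiener(x)                       # Wiener filtering, as in the text
print(detect_endpoints(frame_signal(filtered)))
```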

2.2. Sound Feature Extraction

As described in “Broiler Sound Signal Collection” (Section 2.1), the sound signals processed by pulse extraction and endpoint detection contain four sound types, which indicates that the classification model this research ultimately needs to build is a four-class classification model. In this part, the author mainly introduces the sound features from the four aspects of the time domain, frequency domain, MFCC, and sparse representation, which can be used for broiler sound signal classification and recognition.

2.2.1. Time Domain Features

The time domain features refer to features related to the temporal characteristics of a signal as it changes with time [21]. Therefore, the broiler sound signal can be analyzed in the time domain. Suppose the sound signal is $x(n)$, the sound signal of the $i$-th frame after sub-frame processing is $x_i(n)$, $L$ is the frame length, and $f_n$ is the total number of frames.
(1)
The short-term energy is used to characterize the energy of the sound signal, which is defined as:
$$E_i = \sum_{n=0}^{L} x_i^2(n), \quad 1 \le i \le f_n$$
(2)
The short-term average zero-crossing rate refers to the number of times each frame signal crosses the horizontal axis (zero value), which is defined as:
$$Zcr_i = \frac{1}{2} \sum_{n=0}^{L} \left| \operatorname{sgn}\left[x_i(n)\right] - \operatorname{sgn}\left[x_i(n-1)\right] \right|, \quad 1 \le i \le f_n$$
(3)
The short-term autocorrelation function is used to describe the degree of cross-correlation between the sound signal and itself at different time points, which is defined as:
$$R_i(k) = \sum_{n=0}^{L-k-1} x_i(n)\, x_i(n+k), \quad 1 \le i \le f_n$$
In the formula, $k$ is the amount of delay (lag).
(4)
Since the autocorrelation calculation process involves multiplication and requires a lot of time overhead, the difference calculation method is further used to propose the short-term average amplitude difference, which is defined as:
$$D_i(k) = \sum_{n=0}^{L-k-1} \left| x_i(n+k) - x_i(n) \right|, \quad 1 \le i \le f_n$$
(5)
Different from the short-term energy, which sums squared values, the author also uses the short-term average amplitude, based on absolute values, to describe the energy of the sound signal, which is defined as:
$$M_i = \sum_{n=0}^{L-1} \left| x_i(n) \right|, \quad 1 \le i \le f_n$$
According to the above-mentioned time domain features, the time domain waveform diagram of the four sound types can be obtained, as shown in Figure 2. It can be seen from the figure that the purr can be effectively distinguished from the flapping wings in the time domain, but it is difficult to distinguish the cough and the crow in the time domain, which requires the further introduction of frequency domain features.
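As a concrete illustration, the following sketch computes the five time-domain features of Formulas (1)–(5) for a single frame; the delay $k = 1$ is fixed here purely for illustration.

```python
import numpy as np

def time_domain_features(frame, k=1):
    """Five time-domain features of one frame, following Formulas (1)-(5);
    the delay k is an illustrative choice."""
    energy = np.sum(frame ** 2)                         # (1) short-term energy
    signs = np.sign(frame)
    zcr = 0.5 * np.sum(np.abs(signs[1:] - signs[:-1]))  # (2) zero-crossing rate
    autocorr = np.sum(frame[:-k] * frame[k:])           # (3) autocorrelation at lag k
    amdf = np.sum(np.abs(frame[k:] - frame[:-k]))       # (4) average amplitude difference
    mean_abs = np.sum(np.abs(frame))                    # (5) short-term average amplitude
    return energy, zcr, autocorr, amdf, mean_abs

frame = np.sin(2 * np.pi * 1500 * np.arange(1024) / 32000)  # toy 1500 Hz frame
print(time_domain_features(frame))
```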

2.2.2. Frequency Domain Features

Since the broiler sound signal is a one-dimensional time domain signal, it is difficult to see the law of its frequency changes visually. Therefore, we considered using the Fourier Transform to transform it into the frequency domain for analysis [22]. Suppose the sound signal is $x(n)$, the sound signal of the $i$-th frame after sub-frame processing is $x_i(n)$, and $N$ is the length of the Fast Fourier Transform (FFT) [23].
(1)
Spectral entropy describes the relationship between the power spectrum and entropy rate, which is defined as:
$$H_i = -\sum_{k=0}^{N/2} p_i(k) \log p_i(k)$$
In the formula, $p_i(k)$ is the probability density corresponding to the $k$-th frequency component of the $i$-th frame.
(2)
The spectral centroid is the centroid of the frequency components, which is defined as:
$$f_g = \frac{\int_{f_1}^{f_2} f \cdot X(f)\, df}{\int_{f_1}^{f_2} X(f)\, df}$$
In the formula, $X(f)$ is the frequency amplitude spectrum of the signal, $f_1$ is the lower cut-off frequency of the spectrum, and $f_2$ is the upper cut-off frequency of the spectrum; the frequency range of the sound signal can be inferred from the spectral centroid.
(3)
The root-mean-square frequency is the root-mean-square value of the frequency weighted by the energy spectrum, which is defined as:
$$RMSF = \sqrt{\frac{\sum_{i=N_1}^{N_2} f_i^2 \cdot S_i}{\sum_{i=N_1}^{N_2} S_i}}$$
(4)
The frequency standard deviation describes the spread of the spectrum around its centroid, which is defined as:
$$RVF = \sqrt{\frac{\sum_{i=N_1}^{N_2} f_i^2 \cdot S_i}{\sum_{i=N_1}^{N_2} S_i} - \left( \frac{\sum_{i=N_1}^{N_2} f_i \cdot S_i}{\sum_{i=N_1}^{N_2} S_i} \right)^2}$$
In the formula, $S_i = |X(i)|^2$, which represents the energy spectrum of the signal.
According to the above frequency domain features, the frequency domain waveform diagram of four sound types can be obtained, as shown in Figure 3. It can be seen from the figure that the frequency of the cough is about 1300 Hz to 1900 Hz, and the frequency of the crow is about 1000 Hz to 1300 Hz. Therefore, the above two sound types, which cannot be clearly distinguished in the time domain, can be distinguished well in the frequency domain. In summary, five-dimensional time domain features and four-dimensional frequency domain features can be extracted from the broiler sound signals.
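The four frequency-domain features of Formulas (6)–(9) can be computed per frame along the following lines; the 32 kHz sampling rate follows the collection setup, and the small constant added inside the logarithm is a numerical-stability assumption.

```python
import numpy as np

def frequency_domain_features(frame, fs=32000):
    """Four frequency-domain features of one frame, following Formulas (6)-(9)."""
    spectrum = np.abs(np.fft.rfft(frame))        # one-sided amplitude spectrum X(f)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    power = spectrum ** 2                        # energy spectrum S(i) = |X(i)|^2

    p = power / power.sum()                      # normalized spectral probabilities
    entropy = -np.sum(p * np.log(p + 1e-12))     # (6) spectral entropy

    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)       # (7) spectral centroid
    rmsf = np.sqrt(np.sum(freqs ** 2 * power) / np.sum(power))   # (8) RMS frequency
    mean_f = np.sum(freqs * power) / np.sum(power)
    rvf = np.sqrt(rmsf ** 2 - mean_f ** 2)                       # (9) frequency std. dev.
    return entropy, centroid, rmsf, rvf

frame = np.sin(2 * np.pi * 1500 * np.arange(1024) / 32000)  # toy 1500 Hz frame
print(frequency_domain_features(frame))
```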

2.2.3. Mel-Frequency Cepstral Coefficients

According to research on human ear hearing mechanisms, the human ear has different hearing sensitivities to sound signals of different frequencies. The Mel frequency domain takes into account the non-linear frequency perception of the cochlear basilar membrane in the human auditory system, which has high resolution in the low-frequency region and low resolution in the high-frequency region, providing a simple way to model the auditory perception domain. The MFCCs we used are the cepstral parameters extracted in the Mel frequency domain; that is, the MFCC also takes into account the characteristics of human hearing sensitivity. It first maps the linear spectrum to the Mel non-linear spectrum based on auditory perception and then converts it to the cepstrum [24,25]. Formula (10) converts frequency to the Mel scale, and Formula (11) converts the Mel scale back to frequency. The block diagram of the MFCC feature extraction is shown in Figure 4.
$$f_{mel} = 2595 \log_{10}\left(1 + f/700\right)$$
$$f = 700 \left( 10^{f_{mel}/2595} - 1 \right)$$
In this research, the 1st to 13th cepstral coefficients after the Discrete Cosine Transform (DCT) were used as the standard 13-dimensional MFCC parameters because they reflect the static characteristics of the sound signal. The dynamic characteristics of the sound signal can be obtained by differencing the static characteristics: the first-order difference reflects the changing speed of the sound signal, and the second-order difference reflects its changing acceleration. The author combined the standard MFCC parameters with the first-order difference and the second-order difference to obtain a total of 39-dimensional MFCC feature parameters, that is, thirty-nine MFCC features.
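A minimal sketch of this 39-dimensional MFCC extraction using the librosa library is shown below; the synthetic tone stands in for a real (filtered) broiler recording, and the library defaults for frame length and window are assumptions rather than the study's exact settings.

```python
import numpy as np
import librosa

# A synthetic 1 s tone stands in for a filtered broiler sound clip (32 kHz, as in the text).
sr = 32000
y = librosa.tone(440, sr=sr, duration=1.0)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # standard 13-dim MFCCs (static)
delta1 = librosa.feature.delta(mfcc, order=1)        # first-order difference: changing speed
delta2 = librosa.feature.delta(mfcc, order=2)        # second-order difference: acceleration

features_39d = np.vstack([mfcc, delta1, delta2])     # 39 MFCC features per frame
print(features_39d.shape)                            # (39, n_frames)
```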

2.2.4. Sparse Representation

The purpose of the sparse representation of a signal is to represent the signal with as few atoms as possible from a given over-complete dictionary so that a more concise representation can be obtained; thus, we can more easily obtain the information contained in the signal [26,27]. Therefore, sparse representation is essentially an optimization problem, and greedy algorithms are the commonly used solution [28]. It can be seen from the literature that scholars have conducted several studies on the use of sparse representation to extract signal features. For example, Whitaker et al. [29] applied it to extracting sound features from the sound spectrogram. Cheng et al. [30] used it to complete the noise reduction and feature extraction of chicken sound signals. Li et al. [31] used it to extract fault features with high information content and high value. Therefore, the author also used sparse representation to study the broiler sound signal.
The Matching Pursuit (MP) algorithm is the earliest and most representative greedy algorithm. Its main idea is to use as few atoms as possible from the given over-complete dictionary to linearly represent the input signal based on a certain similarity measurement criterion, thus achieving an approximation of the input signal [32]. The disadvantage of the MP algorithm is that, in the optimization process, a certain atom may be repeatedly selected, resulting in poor convergence. Therefore, we further introduce the Orthogonal Matching Pursuit (OMP) algorithm. Compared with the MP algorithm, the OMP algorithm orthogonalizes the matched atom against all of the previously selected atoms in each step of the decomposition. In terms of the decomposition effect, both use the same number of atoms to approximate the original signal. However, in terms of the accuracy of the sparse representation, the OMP algorithm is better than the MP algorithm, although its decomposition time is slightly longer. We then introduce the Genetic Algorithm (GA) to optimize the OMP algorithm, finally forming the GA–OMP algorithm used in this research. Figure 5 is the flowchart of the GA–OMP algorithm.
When GA–OMP performs signal reconstruction, in order to better describe the time-varying characteristics of the signal, an over-complete time-frequency atomic dictionary is usually used. Since the Gabor dictionary has better time-frequency characteristics [33], we chose it as the over-complete dictionary in this research. The formula is:
$$g_\gamma(t) = \frac{1}{\sqrt{s}}\, g\!\left( \frac{t - \mu}{s} \right) \cos(vt + \omega)$$
In the formula, $g(t) = e^{-\pi t^2}$ is the Gaussian window function, $\gamma = (s, \mu, v, \omega)$ is the time-frequency parameter set, $s$ is the expansion (scale) factor, $\mu$ is the translation factor, $v$ is the frequency factor, and $\omega$ is the phase factor.
The space of the time-frequency parameters can be discretized as $\gamma = (\alpha^j,\ p\,\alpha^j \Delta\mu,\ k\,\alpha^{-j} \Delta v,\ i\,\Delta\omega)$, where $\alpha = 2$, $\Delta\mu = 1/2$, $\Delta v = \pi$, $\Delta\omega = \pi/6$, $0 < j \le \log_2 N$, $0 \le p < N 2^{-j+1}$, $0 \le k < 2^{j+1}$, $0 \le i \le 12$, and $N$ is the number of samples in each frame. Since the Gabor dictionary has good time-frequency characteristics, we can perform feature extraction on its time-frequency parameters and thus obtain four-dimensional time-frequency features, which are called the four sparse representation features.
Since sparse reconstruction is required when extracting the features, if the number of matching atoms is not limited, the signal quality after reconstruction improves, but the processing time becomes long on computers with average performance. Many scholars limit the number of atoms to between twenty and forty. The author first attempted to reconstruct the broiler sound signals with thirty atoms and found that thirty atoms can reconstruct the purr very well, but the reconstruction effect on the crow, cough, and flapping wings is not good. Furthermore, fifty and one hundred atoms were used to reconstruct the signals, and the comparison chart of the signals reconstructed with different numbers of atoms was obtained, as shown in Figure 6. It can be found that thirty atoms reconstruct the purr best, fifty atoms reconstruct the cough best, and one hundred atoms reconstruct the crow and flapping wings best. Considering that the number of matching atoms must be the same for all broiler sound signals, the author reconstructed each sound type with thirty, fifty, and one hundred atoms, respectively, and further extracted features from each reconstruction. Therefore, a total of twelve sparse representation features can be obtained.
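The GA–OMP algorithm itself is not reproduced here, but the following sketch shows plain OMP (via scikit-learn) reconstructing a toy signal over a coarsely sampled Gabor dictionary with thirty, fifty, and one hundred atoms; the dictionary sampling grid is an illustrative subset of the full parameter space given above.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def gabor_atom(n, s, mu, v, omega):
    """Discrete Gabor atom (1/sqrt(s)) * g((t - mu)/s) * cos(v*t + omega),
    with Gaussian window g(t) = exp(-pi * t^2), normalized to unit energy."""
    t = np.arange(n)
    atom = np.exp(-np.pi * ((t - mu) / s) ** 2) * np.cos(v * t + omega) / np.sqrt(s)
    return atom / (np.linalg.norm(atom) + 1e-12)

def build_dictionary(n):
    """A coarsely sampled Gabor dictionary; the grid is an illustrative subset
    of the full time-frequency parameter space."""
    atoms = []
    for j in range(1, int(np.log2(n)) + 1):           # scale s = 2^j
        for mu in range(0, n, n // 8):                # translation factor
            for k in range(1, 5):                     # frequency factor (below Nyquist)
                for i in range(0, 12, 4):             # phase factor
                    atoms.append(gabor_atom(n, 2.0 ** j, mu, k * np.pi / 5, i * np.pi / 6))
    return np.column_stack(atoms)

n = 256
D = build_dictionary(n)
signal = D[:, 40] + 0.5 * D[:, 120] + 0.05 * np.random.randn(n)  # toy test signal

for n_atoms in (30, 50, 100):         # the three atom counts compared in the text
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_atoms)
    omp.fit(D, signal)
    recon = D @ omp.coef_ + omp.intercept_
    err = np.linalg.norm(signal - recon) / np.linalg.norm(signal)
    print(f"{n_atoms} atoms: relative reconstruction error = {err:.3f}")
```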

2.3. Feature Optimization

As mentioned above, the author extracted sixty sound features from the four aspects of the time domain, frequency domain, MFCC, and sparse representation, as shown in Table 1.
As described in Section 2.1, the author collected seventy segments of broiler sound signals in the No. 3 breeding greenhouse using the existing audio collection system. Similarly, we performed signal filtering on these signals based on the Wiener filtering method to obtain sound signals with high SNR, which can be used for feature extraction. We processed each broiler sound signal into multiple frame signals and used endpoint detection to obtain the start frames and end frames corresponding to the multiple sound types in each broiler sound signal. All of the frame signals between each start frame and its corresponding end frame are regarded as belonging to the same sound type. For example, after performing sub-frame processing on the first five-minute sound signal, the author used endpoint detection to obtain the tenth frame as the start frame and the fifty-fifth frame as the end frame of a crow. The frame signals between the tenth and the fifty-fifth frames were therefore considered to belong to the crow, and 46 pieces of data labeled “crow” were extracted. In the same way, the author detected the combinations of the start frames and end frames corresponding to multiple sound types in the above-mentioned seventy segments of broiler sound signals, representing the crow, the cough, the purr, and the flapping wing, respectively. Through selection, the author attained multiple combinations of start frames and end frames and obtained the corresponding labels. Among them, there are 100 crows, 100 coughs, 50 purrs, and 20 flapping wings, corresponding to 5918 crow frame signals, 4486 cough frame signals, 8955 purr frame signals, and 9532 flapping wing frame signals. From each frame signal, the specific values of the sixty sound features can be calculated to form one piece of data. Finally, the above-mentioned frame signals formed multiple pieces of data to establish the preliminary data set. The specific description of the preliminary data set is shown in Table 2. It should be noted that we divided the data set into a training set (only including training data) and a testing set (only including testing data) according to a 3:1 ratio. The training set was used to train the classification model and optimize its inherent parameters, and the testing set was used to evaluate the classification effects of the classification models.
In order to eliminate the influence of the unit and scale differences between the features, we chose the min–max standardization method to process the preliminary data set [34]. The calculation formula is:
$$m = \frac{x - x_{min}}{x_{max} - x_{min}}$$
In the formula, $m$ is the value after standardization, $x$ is the value before standardization, $x_{min}$ is the minimum value in the column, and $x_{max}$ is the maximum value in the column.
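In practice, this standardization can be performed with scikit-learn's MinMaxScaler, fitting on the training set only so that the testing set is transformed with the same minima and maxima; the matrices below are synthetic placeholders for the 60-feature data set.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Synthetic placeholders standing in for the 60-feature training and testing sets.
X_train = np.random.rand(100, 60) * 50.0
X_test = np.random.rand(25, 60) * 50.0

scaler = MinMaxScaler()                          # implements m = (x - x_min) / (x_max - x_min)
X_train_scaled = scaler.fit_transform(X_train)   # min/max learned from the training set only
X_test_scaled = scaler.transform(X_test)         # the same mapping reused on the testing set
```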
Based on the preliminary data set, the author performed min–max standardization processing to obtain the corresponding processed data set and chose the Decision-Tree-based classifier (abbr. DT classifier) under the default parameter configuration to make predictions on the data set before and after the standardization processing. The DT classifier performed ten predictions on each data set, and we took the average of the classification accuracies obtained from the ten predictions as the final prediction result to reduce the random influence. The prediction results are shown in Table 3. It can be seen from the table that the average classification accuracy achieved by the DT classifier on the data set after min–max standardization processing is slightly higher than that on the data set before processing, with an increase of about 0.6%.
Sixty sound features will inevitably bring about the problem of large computational complexity, and there will be some sound features that do not contribute much to the improvement of the classification effects of the classifier (classification model). It may even neutralize the contributions made by those outstanding sound features. Therefore, the author used the model-based feature selection method to filter the sixty sound features. Because a random forest has a mechanism to evaluate the feature importance by using its own classification accuracy [35], the author used it to filter the above-mentioned sixty sound features. The method of using a random forest to calculate the feature importance is as follows:
(1)
Using the out-of-bag data to calculate the out-of-bag error for each decision tree in the random forest and record it as err1.
(2)
Performing noise interference on a certain feature of the sample in the out-of-bag data, and then calculating the out-of-bag error and recording it as err2.
(3)
Assuming that there are N decision trees in the random forest, the importance of this feature can be expressed as:
$$importance = \frac{\sum_{i=1}^{N} \left( err2_i - err1_i \right)}{N}$$
We take this calculation result as the judgment of the feature's importance because, if the change in the out-of-bag error of a feature before and after the noise interference is large, the feature has a greater impact on the classification accuracy of the samples, which in turn indicates that its importance is higher.
Based on the obtained feature importance, the steps of feature selection are as follows:
Step 1: Calculate the importance of each feature and sort in descending order.
Step 2: According to the ranking of feature importance, remove the feature with the lowest importance (one feature is removed each time) so that a new feature data set is obtained.
Step 3: Use the new feature data set to train a new random forest and recalculate the importance of each feature and rank it.
Step 4: Repeat the above process until the optimal feature data set is left.
By setting the lowest threshold of feature importance, the process of removing the sound features with the least importance can be ended. Finally, a total of thirty sound features were retained, and the specific description is shown in Table 4.
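A sketch of this iterative selection procedure is given below; scikit-learn's permutation_importance is used as a stand-in for the out-of-bag noise-perturbation scheme described above, and the stopping threshold and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def select_features(X, y, min_importance=1e-4, n_keep_min=30):
    """Iteratively retrain a random forest and drop the least important feature
    until every remaining feature clears the threshold (or n_keep_min remain)."""
    kept = list(range(X.shape[1]))
    while len(kept) > n_keep_min:
        rf = RandomForestClassifier(n_estimators=50, random_state=0)
        rf.fit(X[:, kept], y)
        imp = permutation_importance(rf, X[:, kept], y,
                                     n_repeats=5, random_state=0).importances_mean
        worst = int(np.argmin(imp))
        if imp[worst] >= min_importance:
            break                      # all remaining features are important enough
        kept.pop(worst)                # remove the least important feature and retrain
    return kept

# Toy demonstration with synthetic data standing in for the 60-feature data set.
X = np.random.rand(300, 60)
y = np.random.randint(0, 4, 300)
print(len(select_features(X, y)))
```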
According to the feature selection results, which are shown in Table 4, the author retained the feature data in the data set to form the final data set for training the classification model. We also applied the DT classifier under the default parameter configuration to make predictions on the data set composed of thirty sound features and sixty sound features, respectively, and we performed ten predictions to reduce the impact of random errors. The prediction results are shown in Table 5.
It can be seen from the table that, whether in terms of a single classification accuracy or the average classification accuracy, the accuracy achieved by the DT classifier on the data set composed of thirty sound features is higher than that on the data set composed of sixty sound features. In general, the classification accuracy achieved by the DT classifier on the data set composed of thirty sound features is about 3.1% higher than that on the data set composed of sixty sound features. Therefore, the author retained the selected thirty sound features to construct the final data set for training the classification model.

2.4. Training and Optimization of Classification Models

The classification algorithms commonly used in machine learning include SVM [36], the decision tree [37], the random forest [38], naive Bayes [39], and kNN [40]. With the help of performance evaluation indexes, the author applied the above classification algorithms to train classification models on the training set and used the trained classification models to make predictions on the testing set. In order to obtain relatively accurate prediction results, each classification model was subjected to ten tests to reduce the impact of random errors. It is worth noting that, in the research process, the author also drew multiple time-frequency maps representing the four sound types to build a picture data set and, on this basis, trained general neural network models, including the Back-Propagation neural network (BP-NN) [41] and the Convolutional Neural Network (CNN) [42]. Unfortunately, they did not achieve satisfactory prediction results and required more time to obtain them.
Table 6 lists the classification accuracies achieved by the above seven classification algorithms; all of the tests were conducted on the same computer. The average classification accuracies obtained by kNN, random forest, SVM, decision tree, naive Bayes, CNN, and BP-NN decreased successively, at 92.94%, 90.57%, 88.30%, 81.29%, 73.74%, 67.23%, and 64.56%, respectively. Among them, the average classification accuracy obtained by kNN is the highest. The average classification accuracy of the random forest is also high, but there is a certain gap compared with kNN. The classical SVM is mainly used for binary classification, so its average classification accuracy is slightly lower when applied to this multi-classification problem. The average classification accuracy obtained by naive Bayes is the worst among the five traditional machine learning algorithms because naive Bayes is a classification method based on the assumption of conditional independence among features. However, in the classification and recognition of the broiler sound signals in this research, the sound features are not completely independent of one another, and thus the obtained classification accuracy is poor. The average classification accuracies achieved by the two neural network models are significantly lower than those of the other five traditional machine learning classification algorithms, which suggests that these neural network models are not suitable for broiler sound signal recognition.
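A comparison of this kind can be sketched as follows with scikit-learn; ten-fold cross-validation stands in for the ten repeated tests described in the text, and the synthetic data are placeholders for the 30-feature training set.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic placeholder for the standardized 30-feature data set (4 classes).
X = np.random.rand(400, 30)
y = np.random.randint(0, 4, 400)

models = {
    "kNN": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)   # ten evaluations per model
    print(f"{name}: mean accuracy = {scores.mean():.4f}")
```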
Table 7 shows the precision, recall, and F1 value obtained by the five classification algorithms on the four labels corresponding to the four sound types. It can be seen that the precision, recall, and F1 value achieved by kNN are relatively higher, all reaching above 88%. After combining the evaluation results of the multiple indexes in Table 6 and Table 7, we finally decided to choose kNN to train the optimal classification model. Next, we will further optimize the inherent parameters of kNN in order to obtain the optimal classification model for broiler sound signal recognition.
We mainly conducted parameter optimization of the n_neighbors, weights, and metric parameters of kNN [43]. n_neighbors refers to the selection of the k value, that is, the number k of labeled data points closest to the data to be predicted that are selected for majority voting. The default k value is 5, and the commonly used k value range is from 1 to 15. weights refers to the weight used in the majority voting; the default setting is “uniform”, which means uniform weights. There is also an optional “distance” setting, which means that the weight equals the reciprocal of the distance; that is, labeled data closer to the data to be predicted are given a larger majority voting weight. metric refers to the distance measurement; the selectable values include “euclidean”, “manhattan”, and “chebyshev”, which represent the Euclidean distance, Manhattan distance, and Chebyshev distance, respectively. A commonly used parameter optimization method is Grid Search. After the parameter ranges are specified, Grid Search traverses all of the parameter combinations until the optimal combination is obtained. Finally, the author obtained the optimal parameter combination of kNN, as shown in Table 8. On this basis, we trained the optimal classification model based on the parameter-optimized kNN. Similarly, we applied the optimal classification model to the testing set for ten predictions and took the average classification accuracy as its prediction result. Table 9 shows the average classification accuracies achieved by the classification model based on kNN on the testing set before and after parameter optimization. It can be seen that the average classification accuracy achieved by the classification model based on the parameter-optimized kNN (the optimal classification model) is 94.16%, which is about 1.2% higher than that of the former (the ordinary classification model).
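The parameter optimization described above can be sketched with scikit-learn's GridSearchCV; the parameter grid mirrors the ranges given in the text, while the synthetic data and the five-fold cross-validation setting are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the standardized 30-feature training set (4 classes).
X_train = np.random.rand(400, 30)
y_train = np.random.randint(0, 4, 400)

param_grid = {
    "n_neighbors": list(range(1, 16)),                 # the commonly used k range
    "weights": ["uniform", "distance"],                # uniform vs. distance-weighted voting
    "metric": ["euclidean", "manhattan", "chebyshev"], # distance measurements
}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X_train, y_train)                           # traverses all parameter combinations
print(search.best_params_, search.best_score_)
```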

2.5. Majority Voting Processing

Through the above steps, the author finally attained the optimal classification model based on the parameter-optimized kNN. When we use the audio collection system to collect a segment of the broiler sound signal at random, it is unknown how many crows, coughs, purrs, and flapping wings are involved in this broiler sound signal. We again performed signal filtering based on the Wiener filtering method on this broiler sound signal and performed endpoint detection on the processed broiler sound signal to obtain the start frames and end frames of the multiple sound types. We regarded each corresponding start frame and end frame as a combination; the multiple frame signals within one combination all belong to the same sound type. That is, the labels of the multiple pieces of data formed from one combination all belong to the same sound type. For example, suppose the start frame of the first combination is the tenth frame and its end frame is the fifty-fifth frame, while the start frame of the second combination is the sixty-seventh frame and its end frame is the one-hundred-thirty-fifth frame. The first combination then has a total of forty-six frames, which means that the first forty-six pieces of data in the data set belong to the same sound type and have the same label. The second combination has a total of sixty-nine frames, which means that the forty-seventh to the one-hundred-fifteenth pieces of data belong to the same sound type and have the same label. We used the optimal classification model to predict the labels of the first forty-six pieces of data and determined the common label of these forty-six pieces of data through majority voting processing, thereby determining the corresponding sound type. Similarly, the corresponding sound type of the sixty-nine pieces of data of the second combination can be determined. In this way, we can first find how many specific sound types are included in the broiler sound signal through endpoint detection and then use the optimal classification model to predict the labels of the multiple pieces of data corresponding to each sound type. The common labels are obtained by performing majority voting processing on the predicted labels, and the specific sound types can be recognized. To sum up, by combining the classification model and majority voting processing, we can finally recognize the specific sound types in a segment of the broiler sound signal.
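The majority voting step reduces to taking the most common predicted label within each detected combination, as in the following sketch.

```python
from collections import Counter

def segment_label(frame_labels):
    """Majority vote over the predicted labels of all frames in one combination
    (one detected sound event): the most common label wins."""
    return Counter(frame_labels).most_common(1)[0][0]

# Example: 46 frame-level predictions for the first combination.
predicted = ["crow"] * 41 + ["cough"] * 3 + ["purr"] * 2
print(segment_label(predicted))   # -> "crow"
```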

2.6. Performance Evaluation Index

In order to evaluate the classification effects of the trained classification model, some evaluation indexes are needed to measure it. The author mainly selected the classification accuracy [44], precision [45], recall [46], and F value [47] to evaluate the classification effects achieved by the classification model on the data set.
Suppose the data set is $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$, where $y_i$ is the true label corresponding to the data $x_i$ and $f(x_i)$ is the label predicted by the classification model $f$. The classification accuracy can be expressed as the ratio of the number of data points that are correctly predicted to the total number of data points, which is as follows:
$$acc(f; D) = \frac{1}{m} \sum_{i=1}^{m} I\big(f(x_i) = y_i\big)$$
In the formula, $I(\cdot)$ is the indicator function; when $f(x_i) = y_i$, $I(f(x_i) = y_i) = 1$.
For each label (category) in the data set, the confusion matrix [48] of the prediction results is shown in Table 10.
If we want to evaluate the classification effects of each label, we need to calculate the precision and recall. The precision represents the proportion of the true positive data in all of the data that were predicted to be positive, which is defined as follows:
$$P = \frac{TP}{TP + FP}$$
Recall represents the proportion of all of the true positive data that are predicted to be positive, which is defined as follows:
$$R = \frac{TP}{TP + FN}$$
The F value is a comprehensive index, which is the harmonic average of the precision rate and recall rate, and it is defined as follows:
$$F = \frac{(1 + \beta^2) \times P \times R}{\beta^2 \times P + R}$$
In the formula, β is used to balance the importance of precision and recall, and when β = 1 , it is called the F1 value.
Similarly, in order to evaluate the prediction effects of the trained classification model for broiler sound signal recognition, an evaluation index is needed to measure it. Referring to the definition of classification accuracy in machine learning, in this study, the author newly proposed recognition accuracy as the evaluation index to evaluate whether the classification model can accurately predict multiple sound types contained in each broiler sound signal.
It is supposed that a segment of the broiler sound signal $P$ contains $n$ sound types, and the true type of each sound type is expressed as $T_i\ (i = 1, 2, 3, \ldots, n)$. For this segment of the broiler sound signal, the data set to be predicted can be established through sub-frame processing, endpoint detection, sound feature extraction, and other steps. The data set contains $n$ sets of data corresponding to the $n$ sound types. The predicted label of each piece of data in the data set to be predicted is obtained by applying the classification model. Combined with majority voting processing, the common labels of the $n$ sets of data are obtained, and the corresponding predicted types of the $n$ sound types are $P_i\ (i = 1, 2, 3, \ldots, n)$. Then, the recognition accuracy can be expressed as the ratio of the number of sound types that are correctly predicted to the total number of sound types, which is defined as:
$$S = \frac{1}{n} \sum_{i=1}^{n} I(P_i = T_i)$$
In the formula, $I(\cdot)$ is the indicator function; when $P_i = T_i$, $I(P_i = T_i) = 1$.
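The recognition accuracy $S$ is straightforward to compute from the predicted and true types of one signal, as in this minimal sketch.

```python
def recognition_accuracy(predicted_types, true_types):
    """Recognition accuracy S: the fraction of sound types in one broiler sound
    signal whose predicted type (after majority voting) matches the true type."""
    hits = sum(p == t for p, t in zip(predicted_types, true_types))
    return hits / len(true_types)

print(recognition_accuracy(["crow", "cough", "purr"], ["crow", "cough", "cough"]))  # ~0.667
```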

3. Verification and Analysis

In this part, the author verified the recognition method for the broiler sound signals based on multi-domain sound features and a classification model in two parts. The first part verifies the classification effects of the classification model. We used the audio collection system to collect forty segments of sound signals from healthy broilers and twenty segments of sound signals from diseased broilers in the No. 3 breeding greenhouse and performed signal filtering on them. Each sound type and its corresponding combinations of multiple frame signals in each broiler sound signal were obtained through sub-frame processing and endpoint detection. By extracting the thirty sound features from each frame signal to obtain multiple pieces of data and labeling the data according to the corresponding sound type, we obtained the validation data set. Through min–max standardization processing, the validation data set for classification model prediction was finally obtained. After counting, the validation data set contains 11,982 pieces of data, including 2818 pieces of data labeled “crow”, 2690 pieces of data labeled “cough”, 3205 pieces of data labeled “purr”, and 3269 pieces of data labeled “flapping wing”. We applied the optimal classification model to make ten predictions on the validation data set, and the obtained average classification accuracy is 93.57%, which is only 0.59% lower than the average classification accuracy of 94.16% achieved by the optimal classification model in Table 9. This fully proves the generalization and stability of the trained optimal classification model. At the same time, it also proves the feasibility and practicability of the broiler sound signal recognition method proposed in this paper.
The second part verifies the prediction effects of the classification model for sound type recognition in the broiler sound signals. Similarly, we used the audio collection system to collect a segment of the broiler sound signal in the No. 3 breeding greenhouse and performed signal filtering on it. Each sound type and its corresponding combinations of multiple frame signals in this broiler sound signal were obtained through sub-frame processing and endpoint detection. By extracting the thirty sound features from each frame signal, multiple pieces of data were obtained, and the validation data set to be predicted was constructed, which was subjected to min–max standardization processing. Figure 7 shows the validation data set to be predicted.
We applied the optimal classification model to predict the labels of each piece of data in the validation data set to be predicted and performed majority voting processing on the predicted labels of the multiple pieces of data corresponding to each sound type to obtain the common labels. At this point, the predicted types of the multiple sound types in the broiler sound signal were obtained. As shown in Table 11, there are eight sound types in the validation data set to be predicted. The first column in the table is the serial number of the sound type, the second column is the number of data corresponding to the sound type, the third to sixth columns provide the prediction results of the classification model for the multiple pieces of data, and the seventh column provides the common label of the multiple pieces of data obtained through majority voting processing, that is, the predicted type of the corresponding sound type. The author manually recognized this segment of the broiler sound signal and provides the true types of the eight sound types in the eighth column. It can be found that the recognition accuracy achieved by the classification model on this segment of the broiler sound signal is 100%.
In addition, the author used the audio collection system to collect another ten segments of broiler sound signals in the No. 3 breeding greenhouse and performed signal filtering on them. Each sound type and its corresponding combinations of multiple frame signals in the ten segments of broiler sound signals were obtained through sub-frame processing and endpoint detection. By extracting the thirty sound features from each frame signal to obtain multiple pieces of data, ten validation data sets to be predicted were built, and min–max standardization processing was performed on them. We applied the optimal classification model to predict the labels of each piece of data in the ten validation data sets to be predicted and performed majority voting processing on the predicted labels of the multiple pieces of data corresponding to each sound type to obtain multiple common labels. At this point, the predicted types of the multiple sound types in the ten segments of broiler sound signals were obtained. Table 12 shows the recognition results of the ten segments of the broiler sound signals.
The recognition results show that all sound types were recognized correctly, except for a purr in the fourth broiler sound signal that was wrongly recognized as a cough. The average recognition accuracy of the optimal classification model based on the parameter-optimized kNN is therefore 99.12% (112 of 113 sound types). As before, we manually recognized the true type of each sound type in the ten segments as a reference. This completes the verification of the recognition method for broiler sound signals based on multi-domain sound features and a classification model proposed in this paper. The classification model achieved an average classification accuracy of 93.57% and an average recognition accuracy of 99.12%, which shows that it has good classification and recognition performance and further supports the feasibility and stability of the proposed method.
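Under the proposed definition, recognition accuracy is the number of correctly recognized sound types divided by the total number of sound types; applied to Table 12, it reproduces the reported value:

```python
# Recognition accuracy over the ten signals in Table 12:
# correctly recognized sound types divided by all sound types.
correct = [11, 15, 11, 7, 10, 12, 11, 12, 13, 10]
total = [11, 15, 11, 8, 10, 12, 11, 12, 13, 10]
print(f"{sum(correct) / sum(total):.2%}")  # 99.12% (112 of 113 sound types)
```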

4. Discussion

It is worth noting that Qadri et al. [49,50] advocated clinical decision support systems built on learning algorithms to further improve or demonstrate the application value of a proposed method. Therefore, shortly after completing this study, the author began to design an audio collection terminal to be placed in the broiler captivity area, together with visualization software; both have now been designed and preliminarily debugged. Combining the audio collection terminal, the broiler sound signal recognition method, and the visualization software yields the clinical decision support system for automatic broiler health monitoring mentioned in the "Introduction": the audio collection terminal acquires the broiler sound signals in the broiler captivity area and transmits them to a remote computer in a timely manner through wireless communication; the recognition method recognizes the different sound types in the signals and saves the recognition results locally; and the visualization software reads the recognition results for display. This forms part of the author's future research.
It should be noted that the predictions reported in "Verification and Analysis" showed that a purr in the fourth segment of the sound signal was misjudged as a cough because the two sound similar. The difference between them is that a purr is a persistent, continuous process and therefore spans more frame signals, while a cough is a relatively short process spanning fewer frame signals. In follow-up work, the author used the audio collection system to obtain further segments of broiler sound signals, constructed data sets to be predicted through signal filtering, sub-frame processing, endpoint detection, and feature extraction, and applied the optimal classification model based on the parameter-optimized kNN together with majority voting to obtain the recognition results. These results again show that the classification model recognizes reliably, and in no case was a cough mistaken for one of the other three sound types. The classification model is thus highly sensitive to coughs, which is conducive to follow-up research on automatic broiler health monitoring: the currently trained classification model will not miss or misrecognize the coughs in a broiler sound signal, which is essential for a stable judgment of broiler health.
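This duration difference could, in principle, be exploited as a post-processing safeguard on the voted label. The sketch below is purely illustrative and not part of the published method; in particular, the 60-frame threshold is an assumed value.

```python
# Illustrative duration-based safeguard (the threshold is an assumed value,
# not one reported in this study).
def disambiguate(voted_label, n_frames, purr_min_frames=60):
    """Re-label a long event voted as 'cough' to 'purr', since coughs are
    short processes while purrs persist over many more frame signals."""
    if voted_label == "cough" and n_frames >= purr_min_frames:
        return "purr"
    return voted_label
```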
It is also worth noting that, during the research, the author found that a small amount of noise remains in the sound signal after signal filtering; its influence is relatively small, and it can be clearly distinguished from the crow, cough, purr, and flapping wings. In future research, the author will, on the one hand, consider deeper filtering of this residual noise, including research on adaptive filtering algorithms, in order to filter it out completely. On the other hand, the author will consider treating this noise as a specific sound type, extracting the same thirty sound features from it, and labeling the resulting data as "noise", thereby extending the four-class problem in this research to a five-class problem. In addition, apart from the five machine learning algorithms listed in Table 7, the author will consider using an ensemble learning algorithm, such as XGBoost, in place of the existing kNN to obtain a classification model with higher prediction accuracy.
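As a sketch of this direction, the kNN model in the earlier evaluation snippet could be swapped for a gradient-boosted ensemble. The snippet assumes the xgboost package and reuses the hypothetical standardized arrays from the earlier sketch; the hyperparameters are illustrative defaults, not tuned values.

```python
# Hypothetical XGBoost replacement, reusing the standardized arrays
# (X_train_s, y_train, X_val_s, y_val) from the earlier evaluation sketch.
from xgboost import XGBClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score

le = LabelEncoder()  # XGBoost expects integer-encoded class labels
y_train_enc, y_val_enc = le.fit_transform(y_train), le.transform(y_val)

xgb = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
xgb.fit(X_train_s, y_train_enc)
print(accuracy_score(y_val_enc, xgb.predict(X_val_s)))
```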

5. Conclusions

Sound signals convey the emotional state of broilers and thereby feed back their physical condition (health status). Therefore, broiler sound signals can be used for research on automatic broiler health monitoring, and accurate recognition of the sound types in broiler sound signals is key to this research. Existing broiler sound recognition methods fail to extract enough sound features to reflect the characteristics of the broiler sound signal, fail to analyze and select the sound features that effectively represent the differences between sound types, and fail to study in depth the selection basis and optimization process of the classification models. To solve these problems, the author proposes a recognition method for broiler sound signals based on multi-domain sound features and a classification model. Following the usual implementation of a classifier in machine learning, the proposed method comprises a training stage and a testing stage.
In the training stage, we first used the audio collection system to collect multiple segments of broiler sound signals and filtered them using the Wiener filtering method. Second, we performed sub-frame processing and endpoint detection on the broiler sound signals, converting each signal into multiple frame signals and obtaining the combinations of the start frames and end frames of each sound type. Third, we extracted sixty sound features from each frame signal across four aspects, namely the time domain, frequency domain, MFCC, and sparse representation, and constructed multiple pieces of data, which were manually labeled to build the data set. We then performed min–max standardization on the data set, used random forests to calculate the importance of the sixty sound features, and retained the thirty most important features to build a high-quality data set. Finally, we trained classification models based on seven classification algorithms, found the kNN-based model to perform best, and optimized its inherent parameters.
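A compact sketch of these feature-selection and parameter-optimization steps might look as follows, assuming the sixty standardized features and their labels are available as arrays (file names and grid ranges are illustrative, not those of the study):

```python
# Feature selection by random-forest importance, then kNN parameter search.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

X60, y = np.load("train_X60.npy"), np.load("train_y.npy", allow_pickle=True)

# Rank the sixty sound features by importance and keep the top thirty.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X60, y)
top30 = np.argsort(rf.feature_importances_)[::-1][:30]
X30 = X60[:, top30]

# Search the inherent kNN parameters; Table 8 reports the combination
# found in this study (n_neighbors=4, weights='distance', Euclidean metric).
grid = GridSearchCV(
    KNeighborsClassifier(),
    {"n_neighbors": list(range(1, 16)),
     "weights": ["uniform", "distance"],
     "metric": ["euclidean", "manhattan", "chebyshev"]},
    cv=5,
)
grid.fit(X30, y)
print(grid.best_params_)
```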
In the testing stage, we used the audio collection system to collect a segment of the broiler sound signal at random in the broiler captivity area and likewise performed signal filtering, sub-frame processing, and endpoint detection on it, obtaining the combinations of the start frames and end frames of the sound types in the signal. We extracted the thirty selected sound features from each frame signal to build the data set to be predicted and used the optimal classification model to predict the label of each piece of data. By performing majority voting on the prediction results, we obtained the predicted type of each sound type in the broiler sound signal. On this basis, we propose a definition of recognition accuracy for broiler sound signals. The verification results show that the optimal classification model achieves a classification accuracy of 93.57% and a recognition accuracy of 99.12%. This study provides an important basis for follow-up research on automatic broiler health monitoring and has reference value for the recognition of animal sound signals in similar fields.

Author Contributions

W.T.: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Writing—Original Draft, Writing—Review and Editing, Supervision, Project administration, Funding acquisition. G.W.: Conceptualization, Validation, Formal analysis, Investigation, Resources, Writing—Review and Editing, Visualization, Supervision, Project administration, Funding acquisition. Z.S.: Conceptualization, Methodology, Software, Investigation, Data Curation, Writing—Original Draft, Writing—Review and Editing, Project administration. S.X.: Software, Formal analysis, Data Curation, Visualization. Q.W.: Validation, Resources, Visualization, Supervision. M.Z.: Methodology, Investigation, Data Curation. All authors have read and agreed to the published version of the manuscript.

Funding

The research presented in this paper was supported by the National Natural Science Foundation of China (No. 51607059); the Key Research and Development Program of Jiangsu Province of China (No. BE2019317); the Natural Science Foundation of Heilongjiang Province, China (Nos. QC2017059 and JJ2020LH1310); the Heilongjiang Postdoctoral Sustentation Fund, China (No. LBH-Z16169); the Fundamental Research Funds for the Central Universities of Heilongjiang Province, China (Nos. HDRCCX-201604 and 2020-KYYWF-1006); the Cultivation of Scientific and Technological Achievements of Heilongjiang Provincial Department of Education (No. TSTAU-C2018016); and the Qitaihe City Science and Technology Project (No. 20308C).

Institutional Review Board Statement

The animal study protocol was approved by the Laboratory Animal Ethics Committee of Heilongjiang University (protocol code was 20220811007, and the date of approval was 18 August 2022).

Informed Consent Statement

Not applicable.

Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors are grateful to Zhigang Sun for discussions and to Guotao Wang and Weige Tao for providing data and financial support. They also thank the anonymous reviewers for their critical and constructive review of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Thomas, T.J.; Weary, D.M.; Appleby, M.C. Newborn and 5-week-old Calves Vocalize in Response to Milk Deprivation. Appl. Anim. Behav. Sci. 2001, 74, 165–173.
2. Zimmerman, P.H.; Koene, P.; van Hooff, J.A. Thwarting of Behaviour in Different Contexts and the Gakel-call in the Laying Hen. Appl. Anim. Behav. Sci. 2000, 69, 255–264.
3. Weary, D.M.; Braithwaite, L.A.; Fraser, D. Vocal Response to Pain in Piglets. Appl. Anim. Behav. Sci. 1998, 56, 161–172.
4. Marchant-Forde, J.N.; Marchant-Forde, R.M.; Weary, D.M. Responses of Dairy Cows and Calves to Each Other's Vocalisations After Early Separation. Appl. Anim. Behav. Sci. 2002, 78, 19–28.
5. Marx, G.; Leppelt, J.; Ellendorff, F. Vocalisation in Chicks (Gallus gallus Dom.) during Stepwise Social Isolation. Appl. Anim. Behav. Sci. 2001, 75, 61–74.
6. Zeltner, E.; Hirt, H. A Note on Fear Reaction of Three Different Genetic Strains of Laying Hens to a Simulated Hawk Attack in the Hen Run of a Free-range System. Appl. Anim. Behav. Sci. 2008, 113, 69–73.
7. Fontana, I.; Tullo, E.; Butterworth, A.; Guarino, M. An Innovative Approach to Predict the Growth in Intensive Poultry Farming. Comput. Electron. Agric. 2015, 119, 178–183.
8. Shen, T.Y. Prevention and Control Measures of Common Diseases in Broilers. Gansu Anim. Husb. Vet. Med. 2016, 46, 82–83.
9. Wang, G.C. Prevention and Treatment of Common Respiratory Diseases in Broilers. Poult. Sci. 2021, 6, 60.
10. Lee, J.; Noh, B.; Jang, S.; Park, D.; Chang, H.H. Stress Detection and Classification of Laying Hens by Sound Analysis. Asian-Australas. J. Anim. Sci. 2015, 28, 592–598.
11. Cheng, B.J.; Zhong, S.P. A Novel Chicken Voice Recognition Method Using the Orthogonal Matching Pursuit Algorithm. In Proceedings of the 2015 8th International Congress on Image and Signal Processing (CISP), Shenyang, China, 14–16 October 2015.
12. Yu, L.G.; Teng, G.H.; Li, B.M.; Lao, F.D.; Xing, Y.Z. Development and Application of Audio Database for Laying Hens. Trans. Chin. Soc. Agric. Eng. 2012, 24, 150–156.
13. Yu, L.G.; Teng, G.H.; Li, B.M.; Lao, F.D.; Cao, Y.F. Classification Methods of Vocalization for Laying Hens in Perch System. Trans. Chin. Soc. Agric. Mach. 2013, 44, 236–242.
14. Cao, Y.F.; Chen, H.Q.; Teng, G.H.; Zhao, S.M.; Li, Q.W. Detection of Laying Hens Vocalization Based on Power Spectral Density. Trans. Chin. Soc. Agric. Mach. 2015, 46, 276–280 + 300.
15. Xu, H.T.; Fan, L.Y.; Gao, X.Z. Projection Twin SMMs for 2D Image Data Classification. Neural Comput. Appl. 2015, 26, 91–100.
16. Horng, M.H. Multi-class Support Vector Machine for Classification of the Ultrasonic Images of Supraspinatus. Expert Syst. Appl. 2009, 36, 8124–8133.
17. Pan, H.Y.; Xu, H.F.; Zheng, J.D.; Su, J.; Tong, J.Y. Multi-class Fuzzy Support Matrix Machine for Classification in Roller Bearing Fault Diagnosis. Adv. Eng. Inform. 2021, 51, 101445.
18. Pan, H.Y.; Xu, H.F.; Zheng, J.D.; Tong, J.Y.; Cheng, J. Twin Robust Matrix Machine for Intelligent Fault Identification of Outlier Samples in Roller Bearing. Knowl.-Based Syst. 2022, 252, 109391.
19. Sun, Z.G.; Gao, M.M.; Wang, G.T.; Lv, B.Z.; He, C.L.; Teng, Y.R. Research on Evaluating the Filtering Method for Broiler Sound Signal from Multiple Perspectives. Animals 2021, 11, 2238.
20. Gao, L.Z.; Yan, H.Z.; Wang, G.T.; Wang, Q. Design of Signal Pulse Extraction Method for Remainder Detection Equipment. Electr. Energy Manag. Technol. 2019, 10, 21–26 + 73.
21. Chen, F. Audio Identification and Authentication Based on Digital Fingerprinting. Master's Thesis, Fudan University, Shanghai, China, 2008.
22. Raveendra, K.; Ravi, J. Performance Evaluation of Face Recognition System by Concatenation of Spatial and Transformation Domain Features. Int. J. Comput. Netw. Inf. Secur. 2021, 13, 47–60.
23. Morhac, M.; Matousek, V. Fast Adaptive Fourier-based Transform and its Use in Multidimensional Data Compression. Signal Process. 1998, 68, 141–153.
24. Arpitha, Y.; Madhumathi, G.L.; Balaji, N. Spectrogram Analysis of ECG Signal and Classification Efficiency Using MFCC Feature Extraction Technique. J. Ambient Intell. Humaniz. Comput. 2022, 13, 757–767.
25. Fahad, M.; Shah, D.A.; Pradhan, G.; Yadav, J. DNN-HMM-Based Speaker-Adaptive Emotion Recognition Using MFCC and Epoch-Based Features. Circuits Syst. Signal Process. 2020, 40, 466–489.
26. Xu, S.; Zhang, J.; Bo, L.L.; Li, H.R.; Zhang, H.; Zhong, Z.M.; Yuan, D.Q. Singular Vector Sparse Reconstruction for Image Compression. Comput. Electr. Eng. 2021, 91, 107069.
27. Wan, Y.; Meng, X.J.; Wang, Y.F.; Qiang, H.P. Dynamic Time Warping Similarity Measurement Based on Low-rank Sparse Representation. Vis. Comput. 2021, 38, 1731–1740.
28. Rangel, O.B.; Rosas, R.R. Detection and Classification of Burnt Skin Via Sparse Representation of Signals by Over-redundant Dictionaries. Comput. Biol. 2020, 132, 104310.
29. Whitaker, B.M.; Carroll, B.T.; Daley, W.; Anderson, D.V. Sparse Decomposition of Audio Spectrograms for Automated Disease Detection in Chickens. In Proceedings of the 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, USA, 3–5 December 2014.
30. Cheng, B.J. The Abnormal Recognition Method of Ecological Breeding Chicken Voice Based on Sparse Representation. Master's Thesis, Fuzhou University, Fuzhou, China, 2016.
31. Li, J.L. Research on Weak Fault Feature Extraction Method Based on Adaptive Sparse Signal. Master's Thesis, Beijing University of Chemical Technology, Beijing, China, 2020.
32. Li, Y.X.; Yin, Z.K.; Wang, J.Y. Fast Algorithm for MP Sparse Decomposition and its Application in Speech Recognition. Comput. Eng. Appl. 2010, 46, 122–124 + 128.
33. Feng, L.Y.; Li, S.T. Face Recognition by Exploiting Local Gabor Features with Multitask Adaptive Sparse Representation. IEEE Trans. Instrum. Meas. 2015, 64, 2605–2615.
34. Sun, Z.G.; Jiang, A.P.; Gao, M.M.; Zhang, M.; Wang, G.T. Feature Optimization Method for the Localization Technology on Loose Particles Inside Sealed Electronic Equipment. Expert Syst. Appl. 2022, 204, 117569.
35. AlSagri, H.; Ykhlef, M. Quantifying Feature Importance for Detecting Depression using Random Forest. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 628–635.
36. Anton, J.C.A.; Nieto, P.J.G.; Viejo, C.B.; Vilan, J.A.V. Support Vector Machines Used to Estimate the Battery State of Charge. IEEE Trans. Power Electron. 2013, 28, 5919–5926.
37. Kotsiantis, S.B. Decision Trees: A Recent Overview. Artif. Intell. Rev. 2013, 39, 261–283.
38. Schonlau, M.; Zou, R.Y. The Random Forest Algorithm for Statistical Learning. Stata J. 2020, 20, 3–29.
39. Li, T.; Li, J.; Liu, Z.L.; Li, P.; Jia, C.F. Differentially Private Naive Bayes Learning Over Multiple Data Sources. Inf. Sci. 2018, 444, 89–104.
40. Gou, J.P.; Ma, H.X.; Ou, W.H.; Zeng, S.N.; Rao, Y.B.; Yang, H.B. A Generalized Mean Distance-based k-nearest Neighbor Classifier. Expert Syst. Appl. 2018, 115, 356–372.
41. Wang, L.; Zeng, Y.; Chen, T. Back Propagation Neural Network with Adaptive Differential Evolution Algorithm for Time Series Forecasting. Expert Syst. Appl. 2015, 42, 855–863.
42. Gu, J.X.; Wang, Z.H.; Kuen, J.; Ma, L.Y.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.X.; Wang, G.; Cai, J.F.; et al. Recent Advances in Convolutional Neural Networks. Pattern Recognit. 2018, 77, 354–377.
43. Sun, Z.G.; Wang, G.T.; Gao, M.M.; Gao, L.Z.; Jiang, A.P. Sealed Electronic Equipment Loose Particle Positioning Technology Based on kNN Algorithm of Parameter Optimization. J. Electron. Meas. Instrum. 2021, 35, 94–104.
44. Le Jeune, L.; Goedeme, T.; Mentens, N. Machine Learning for Misuse-Based Network Intrusion Detection: Overview, Unified Evaluation and Feature Choice Comparison Framework. IEEE Access 2021, 9, 63995–64015.
45. Kaltenecker, C.; Grebhahn, A.; Siegmund, N.; Apel, S. The Interplay of Sampling and Machine Learning for Software Performance Prediction. IEEE Softw. 2020, 37, 58–66.
46. Racz, A.; Bajusz, D.; Heberger, K. Multi-Level Comparison of Machine Learning Classifiers and Their Performance Metrics. Molecules 2019, 24, 2811.
47. Mahmood, Y.; Kama, N.; Azmi, A.; Khan, A.S.; Ali, M. Software Effort Estimation Accuracy Prediction of Machine Learning Techniques: A Systematic Performance Evaluation. Softw.-Pract. Exp. 2021, 52, 39–65.
48. Luque, A.; Carrasco, A.; Martin, A.; de las Heras, A. The Impact of Class Imbalance in Classification Performance Metrics Based on the Binary Confusion Matrix. Pattern Recognit. 2019, 91, 216–231.
49. Qadri, S.F.; Shen, L.L.; Ahmad, M.; Qadri, S.; Zareen, S.S.; Khan, S. OP-convNet: A Patch Classification-Based Framework for CT Vertebrae Segmentation. IEEE Access 2021, 9, 158227–158240.
50. Ahmad, M.; Qadri, S.F.; Ashraf, M.U.; Subhi, K.; Khan, S.; Zareen, S.S.; Qadri, S. Efficient Liver Segmentation from Computed Tomography Images Using Deep Learning. Comput. Intell. Neurosci. 2022, 2022, 2665283.
Figure 1. Research process of broiler sound signal recognition method.
Figure 2. Time domain waveform diagram of four sound types. (a) Crow; (b) Cough; (c) Purr; (d) Flapping wings.
Figure 3. Frequency domain waveform diagram of four sound types. (a) Crow; (b) Cough; (c) Purr; (d) Flapping wings.
Figure 4. Block diagram of MFCC feature extraction.
Figure 5. Flow chart of the GA–OMP algorithm.
Figure 6. Comparison of sparse reconstruction using different atom numbers of four sound types. (a) Crow; (b) Cough; (c) Purr; (d) Flapping wings.
Figure 7. Validation data set to be predicted.
Table 1. Specific description of the extracted 60-dimensional classification features.

Number | Feature Name | Symbolic Representation | Number | Feature Name | Symbolic Representation
1 | Short-term energy | En | 31 | 9th dimension first-order dynamic feature | dtm_i
2 | Short-term average zero-crossing rate | Zcr | 32 | 10th dimension first-order dynamic feature | dtm_j
3 | Short-term autocorrelation function | amdfR | 33 | 11th dimension first-order dynamic feature | dtm_k
4 | Short-term average amplitude difference | amdfVec | 34 | 12th dimension first-order dynamic feature | dtm_l
5 | Short-term average amplitude | M | 35 | 13th dimension first-order dynamic feature | dtm_m
6 | Spectral entropy | H | 36 | 1st dimension second-order dynamic feature | dtmm_a
7 | Spectrum centroid | speCen | 37 | 2nd dimension second-order dynamic feature | dtmm_b
8 | Root mean square frequency | RMSF | 38 | 3rd dimension second-order dynamic feature | dtmm_c
9 | Frequency standard deviation | RVF | 39 | 4th dimension second-order dynamic feature | dtmm_d
10 | 1st dimension static feature | m_a | 40 | 5th dimension second-order dynamic feature | dtmm_e
11 | 2nd dimension static feature | m_b | 41 | 6th dimension second-order dynamic feature | dtmm_f
12 | 3rd dimension static feature | m_c | 42 | 7th dimension second-order dynamic feature | dtmm_g
13 | 4th dimension static feature | m_d | 43 | 8th dimension second-order dynamic feature | dtmm_h
14 | 5th dimension static feature | m_e | 44 | 9th dimension second-order dynamic feature | dtmm_i
15 | 6th dimension static feature | m_f | 45 | 10th dimension second-order dynamic feature | dtmm_j
16 | 7th dimension static feature | m_g | 46 | 11th dimension second-order dynamic feature | dtmm_k
17 | 8th dimension static feature | m_h | 47 | 12th dimension second-order dynamic feature | dtmm_l
18 | 9th dimension static feature | m_i | 48 | 13th dimension second-order dynamic feature | dtmm_m
19 | 10th dimension static feature | m_j | 49 | Stretching factor of 30-atom matching | scale30
20 | 11th dimension static feature | m_k | 50 | Translation factor of 30-atom matching | translation30
21 | 12th dimension static feature | m_l | 51 | Frequency factor of 30-atom matching | freq30
22 | 13th dimension static feature | m_m | 52 | Phase factor of 30-atom matching | phase30
23 | 1st dimension first-order dynamic feature | dtm_a | 53 | Stretching factor of 50-atom matching | scale50
24 | 2nd dimension first-order dynamic feature | dtm_b | 54 | Translation factor of 50-atom matching | translation50
25 | 3rd dimension first-order dynamic feature | dtm_c | 55 | Frequency factor of 50-atom matching | freq50
26 | 4th dimension first-order dynamic feature | dtm_d | 56 | Phase factor of 50-atom matching | phase50
27 | 5th dimension first-order dynamic feature | dtm_e | 57 | Stretching factor of 100-atom matching | scale100
28 | 6th dimension first-order dynamic feature | dtm_f | 58 | Translation factor of 100-atom matching | translation100
29 | 7th dimension first-order dynamic feature | dtm_g | 59 | Frequency factor of 100-atom matching | freq100
30 | 8th dimension first-order dynamic feature | dtm_h | 60 | Phase factor of 100-atom matching | phase100
Table 2. Specific description of the preliminary data set.

Label | Total Number of Data | Number of Training Data | Number of Testing Data
Crow | 5918 | 4143 | 1775
Cough | 4486 | 3140 | 1346
Purr | 8955 | 6269 | 2686
Flapping wing | 9532 | 6672 | 2860
Total number | 28,891 | 20,124 | 8667
Table 3. Classification accuracies achieved by the DT classifier on the data set before and after standardization processing.

Number | Before Processing/% | After Processing/%
1 | 77.91 | 77.78
2 | 77.18 | 78.49
3 | 78.36 | 78.04
4 | 77.49 | 77.91
5 | 77.73 | 78.30
6 | 76.34 | 78.02
7 | 77.20 | 77.68
8 | 77.99 | 79.32
9 | 77.78 | 77.75
10 | 77.73 | 78.41
Mean value | 77.57 | 78.17
Table 4. Specific description of the retained thirty sound features.

Number | Feature Name | Symbolic Representation | Number | Feature Name | Symbolic Representation
1 | Short-term energy | En | 16 | 8th dimension static feature | m_h
2 | Short-term average zero-crossing rate | Zcr | 17 | 9th dimension static feature | m_i
3 | Short-term average amplitude difference | amdfVec | 18 | 10th dimension static feature | m_j
4 | Short-term average amplitude | M | 19 | 11th dimension static feature | m_k
5 | Spectral entropy | H | 20 | 12th dimension static feature | m_l
6 | Spectrum centroid | speCen | 21 | 13th dimension static feature | m_m
7 | Root mean square frequency | RMSF | 22 | 1st dimension first-order dynamic feature | dtm_a
8 | Frequency standard deviation | RVF | 23 | 3rd dimension first-order dynamic feature | dtm_c
9 | 1st dimension static feature | m_a | 24 | 1st dimension second-order dynamic feature | dtmm_a
10 | 2nd dimension static feature | m_b | 25 | 2nd dimension second-order dynamic feature | dtmm_b
11 | 3rd dimension static feature | m_c | 26 | 3rd dimension second-order dynamic feature | dtmm_c
12 | 4th dimension static feature | m_d | 27 | 6th dimension second-order dynamic feature | dtmm_f
13 | 5th dimension static feature | m_e | 28 | Frequency factor of 30-atom matching | freq30
14 | 6th dimension static feature | m_f | 29 | Frequency factor of 50-atom matching | freq50
15 | 7th dimension static feature | m_g | 30 | Frequency factor of 100-atom matching | freq100
Table 5. Classification accuracies achieved by the DT classifier on data sets composed of different numbers of sound features.

Number | Classification Accuracy with Sixty Features/% | Classification Accuracy with Thirty Features/%
1 | 77.78 | 81.73
2 | 78.49 | 82.09
3 | 78.04 | 80.89
4 | 77.91 | 80.92
5 | 78.30 | 81.88
6 | 78.02 | 80.16
7 | 77.68 | 81.20
8 | 79.32 | 80.58
9 | 77.75 | 81.78
10 | 78.41 | 81.62
Mean value | 78.17 | 81.29
Table 6. Classification accuracies achieved by seven classification algorithms.

Number | SVM/% | Decision Tree/% | Random Forest/% | Naive Bayes/% | kNN/% | BP-NN/% | CNN/%
1 | 86.78 | 81.73 | 90.52 | 73.04 | 92.91 | 64.60 | 67.18
2 | 87.20 | 82.09 | 90.10 | 74.61 | 92.93 | 64.49 | 67.22
3 | 88.14 | 80.89 | 90.99 | 72.91 | 93.35 | 64.53 | 67.19
4 | 87.25 | 80.92 | 90.50 | 73.27 | 92.59 | 64.52 | 67.27
5 | 87.72 | 81.88 | 90.73 | 73.64 | 93.67 | 64.64 | 67.27
6 | 87.36 | 80.16 | 90.55 | 73.95 | 92.36 | 64.52 | 67.23
7 | 86.13 | 81.20 | 90.63 | 73.04 | 92.75 | 64.56 | 67.20
8 | 86.36 | 80.58 | 90.52 | 74.53 | 92.83 | 64.58 | 67.25
9 | 87.96 | 81.78 | 90.73 | 73.72 | 93.30 | 64.63 | 67.26
10 | 88.06 | 81.62 | 90.45 | 74.71 | 92.67 | 64.51 | 67.24
Mean value | 87.30 | 81.29 | 90.57 | 73.74 | 92.94 | 64.56 | 67.23
Table 7. Other performance evaluation indexes obtained by five classification algorithms.

Algorithm | Label | Precision/% | Recall/% | F1 Value/%
SVM | Crow | 78.4 | 86.9 | 82.5
SVM | Cough | 99.5 | 84.2 | 91.4
SVM | Purr | 90.6 | 90.2 | 90.3
SVM | Flapping wing | 88.2 | 84.1 | 86.3
Decision tree | Crow | 73.9 | 75.4 | 74.6
Decision tree | Cough | 86.6 | 83.3 | 84.7
Decision tree | Purr | 86.2 | 85.0 | 85.4
Decision tree | Flapping wing | 79.7 | 80.4 | 80.2
Random forest | Crow | 84.2 | 88.0 | 86.1
Random forest | Cough | 98.4 | 84.9 | 91.2
Random forest | Purr | 92.9 | 93.2 | 93.0
Random forest | Flapping wing | 91.4 | 90.3 | 90.9
Naive Bayes | Crow | 69.9 | 45.9 | 55.4
Naive Bayes | Cough | 88.7 | 51.8 | 65.3
Naive Bayes | Purr | 79.5 | 85.9 | 82.7
Naive Bayes | Flapping wing | 67.0 | 83.1 | 74.1
kNN | Crow | 88.2 | 90.2 | 89.1
kNN | Cough | 98.2 | 91.4 | 94.6
kNN | Purr | 95.4 | 93.8 | 94.6
kNN | Flapping wing | 92.4 | 93.9 | 93.1
Table 8. Optimal parameter combination of kNN.

Parameter Name | Optimal Value
n_neighbor | 4
weights | distance
metric | Euclidean
Table 9. Classification accuracies achieved by the classification model based on kNN before and after parameter optimization.

Model | Number of Testing Data | Number of Data with Correct Predictions | Number of Data with Incorrect Predictions | Average Classification Accuracy/%
Ordinary classification model | 8667 | 8055 | 612 | 92.94
Optimal classification model | 8667 | 8161 | 506 | 94.16
Table 10. Confusion matrix of prediction results.

True Results | Predicted Positive | Predicted Negative
Positive | TP | FN
Negative | FP | TN
Table 11. Analysis of prediction results of the classification model.

Number | Number of Data | Predicted Crows | Predicted Coughs | Predicted Purrs | Predicted Flapping Wings | Predicted Type | True Type
1 | 43 | 0 | 35 | 0 | 8 | cough | cough
2 | 41 | 39 | 0 | 1 | 1 | crow | crow
3 | 105 | 0 | 5 | 97 | 3 | purr | purr
4 | 40 | 1 | 32 | 0 | 7 | cough | cough
5 | 130 | 14 | 0 | 27 | 89 | flapping wing | flapping wing
6 | 35 | 0 | 29 | 0 | 6 | cough | cough
7 | 76 | 2 | 0 | 65 | 9 | purr | purr
8 | 36 | 31 | 0 | 4 | 1 | crow | crow
Table 12. Recognition results of ten segments of broiler sound signals.

Number | Number of Sound Types | Number of Correctly Predicted Sound Types | Number of Incorrectly Predicted Sound Types
1 | 11 | 11 | 0
2 | 15 | 15 | 0
3 | 11 | 11 | 0
4 | 8 | 7 | 1
5 | 10 | 10 | 0
6 | 12 | 12 | 0
7 | 11 | 11 | 0
8 | 12 | 12 | 0
9 | 13 | 13 | 0
10 | 10 | 10 | 0
Total number | 113 | 112 | 1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
