Article

Framework for Classification of Fattening Pig Vocalizations in a Conventional Farm with High Relevance for Practical Application

by Thies J. Nicolaisen 1,2,*,†, Katharina E. Bollmann 3,†, Isabel Hennig-Pauka 1 and Sarah C. L. Fischer 3,*

1 Field Station for Epidemiology (Bakum), University of Veterinary Medicine Hannover, Foundation, Büscheler Straße 9, 49456 Bakum, Germany
2 Institute for Animal Hygiene, Animal Welfare and Farm Animal Behaviour, University of Veterinary Medicine Hannover, Foundation, Building 116, Bischofsholer Damm 15, 30173 Hannover, Germany
3 Fraunhofer Institute for Nondestructive Testing (IZFP), Campus E3 1, 66123 Saarbrücken, Germany
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Animals 2025, 15(17), 2572; https://doi.org/10.3390/ani15172572
Submission received: 22 July 2025 / Revised: 17 August 2025 / Accepted: 26 August 2025 / Published: 1 September 2025
(This article belongs to the Special Issue Animal Health and Welfare Assessment of Pigs)

Simple Summary

The aim of this study was to record the sounds made by fattening pigs under conventional housing conditions and to identify the associated behaviors. These sound–behavior combinations were then classified into the categories “positive/neutral”, “negative” and “others” on the basis of expert knowledge. Positively/neutrally assessed vocalizations made up most pig sounds (59.7%), of which grunting was by far the most frequent. Negatively classified pig sounds accounted for 37.8% of all vocalizations. A subsequent mathematical analysis of these sound categories using objective frequency- and time-based parameters was performed to illustrate how feature-based analysis works. Establishing an expertise-based framework for the classification of pig vocalizations is important for further progress in the area of acoustic assistance systems for farmers. It forms the technical basis for detecting critical situations relevant to animal welfare from negatively assessed vocalizations through mathematical analysis and machine learning.

Abstract

The vocal repertoire of the domestic pig (Sus scrofa domesticus) was examined in this study under conventional housing conditions. To this end, behavior-associated vocalizations of fattening pigs were recorded by direct observation and assigned to behavioral categories. Subsequently, a mathematical analysis of the recorded vocalizations was conducted using the frequency-based parameters of the 25%, 50% and 75% quantiles of the frequency spectrum and the time-based parameters of the variance of the time signal, the mean level of the individual amplitude modulations and the cumulative amplitude modulation. Positively/neutrally assessed vocalizations made up the majority (59.7%), of which grunting was by far the most frequent. Negatively assessed vocalizations accounted for 37.8% of all vocalizations. Data analysis based on the six parameters allowed vocalizations related to negatively valenced behavior to be distinguished from those related to positively/neutrally valenced behavior. The study illustrates the relationship between auditory sensory perception and the underlying mathematical signals. It shows how pig vocalizations assessed by observation, for example, as positive or negative, can be distinguished using mathematical parameters, but also which ambiguities arise when objective mathematical features widely overlap. In this way, the study encourages the use of more complex algorithms in the future to solve this challenging, multidimensional problem, forming the basis for future automatic detection of negative pig vocalizations.

1. Introduction

The vocal repertoire of the domestic pig (Sus scrofa domesticus) is diverse and has been the focus of many scientific studies. Several attempts have been made to divide pig vocalizations into different categories. In some of the earlier studies, vocalizations were differentiated into three main categories: grunts, squeals, and squeal–grunts [1] or into short grunts, long grunts and squeals [2]. Early work on wild boars (Sus scrofa) also resulted in discrete vocalization categories [3], and more recently, four different call types—grunts, squeals, squeal-grunts and trumpets—were identified based on spectral and acoustic parameters [4]. Acoustic frequency was also used as a criterion for differentiating vocalizations: calls of suckling piglets could be divided through cluster analysis into either two or five categories [5] or into low-frequency (LF) and high-frequency (HF) calls [6].
The differing results and multiple definitions of so-called intermediate categories, such as “squeal–grunts” [1,4], reflect the difficulty in assigning pig vocalizations to distinct categories. There is evidence that the transition between vocalization categories is continuous rather than discrete [5], which makes classification more challenging.
Scientific approaches to categorizing pig vocalizations were mostly conducted under experimental conditions, meaning that vocalizations were recorded only in specific strictly defined situations, such as during nursing [7,8,9], castration [10,11,12,13], crushing [14], interaction between the sow and piglets [15,16,17] or stressful situations [18]. Recently, a large data set of over 7400 pig calls, gathered from various scientific studies conducted mainly under experimental conditions, was used to train a neural network, enabling classification of these vocalizations [6].
So far, scientific studies analyzing the vocal repertoire of pigs under practical conditions in conventional pig husbandry systems are scarce [19,20].
Hence, the aim of our study was to record the vocal repertoire and associated behavior of fattening pigs housed in a conventional system, and to analyze the recordings in terms of their acoustic characteristics. In contrast to many recent experimental studies, this study is based on spontaneous vocalizations of fattening pigs under conventional housing conditions. It focuses on an expert-centered, physically based approach, in which vocalizations were categorized based on expert knowledge and described by a selection of mathematical features. The study was based on recordings of pig vocalizations and associated behavior, which were grouped into behavioral categories by means of direct observation. Subsequently, the recordings were processed and the features of the vocalizations associated with the different behaviors were analyzed with regard to their distinguishability based on acoustic parameters. This approach forms the basis for future machine learning and the further development of assistance systems for pig husbandry.

2. Materials and Methods

2.1. Animals and Housing

Data collection was conducted on a conventional fattening farm in northwest Germany (330 fattening places) between December 2022 and August 2023. The average size of the fattening pens was 9.8 m² (3.4 m × 2.9 m), reduced to 9.5 m² after deducting the area of the automatic feeders. The fattening pigs were crossbreds of Danish Landrace × Yorkshire and Duroc. The pigs included in this study were in the fattening period (11th to 23rd week of life, approximately 30–110 kg body weight). The pens were stocked with mixed sexes, containing equal proportions of castrated males and females. The pigs were tail-docked as suckling piglets. Eleven pigs were kept in each pen, resulting in an area of 0.86 m² per pig. The pigs in each pen had access to an automatic feeding station with two feeding places offering feed ad libitum and two nipple drinkers. The concrete floor was partly slatted and partly solid. Enrichment material was offered in the form of alfalfa pellets via a self-service machine.

2.2. Behavioral Data Collection

Data were collected by direct observation in two batches on six observation days, each lasting 160 min (n = 1) or 180 min (n = 5), between 09:00 and 14:00. The average number of days pigs were kept in the fattening unit before data collection was 33.8 days (minimum: 2 days; maximum: 87 days). The behavior of the pigs accompanied by vocalizations was recorded through direct observation with second-level precision, while the vocalizations were recorded simultaneously. A microphone was placed slightly above animal height between two of the observed pens. Data were collected by one experienced person (long-standing experience in ethological research on pigs) seated at an elevated position with a clear view of the four pens being observed simultaneously. Data collection began 15 min after the observer had taken up his position to minimize the observer’s influence on the pigs’ natural behavior. Non-vocal pig sounds categorized as “others” (e.g., coughing, sneezing), which did not represent true vocalizations, were also recorded from adjacent pens due to their distinctive acoustic characteristics and to increase the sample size for subsequent mathematical analysis.

2.3. Recording of Audio Data

The pigs’ vocalizations were recorded with a sampling frequency of 48 kHz using an omnidirectional microphone (Behringer ECM 8000, Co. Behringer, Penang, Malaysia), an audio interface (Behringer UMC202HD, Co. Behringer, Penang, Malaysia) and a computer with Audacity®, Version 3.2. Due to the characteristics of the microphone, a band-pass filter (20 Hz to 20 kHz, Butterworth, order 4) was applied as part of preprocessing. The challenges associated with recording vocalizations in this barn and annotating the sounds have been described previously [21].
Mathematical analysis of the single pig behavior-related vocalizations annotated in the barn requires precise identification of the vocalizations in the audio signal. Based on the times recorded by direct observation, individual pig vocalizations were manually identified in the continuous audio signal. The reaction time of the observer in the barn was taken into account by searching within ±2 s of the recorded start time of the vocalization. The start and end points of single vocalizations were determined by listening to the audio signal and visually inspecting the graphical representation of the amplitude.
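For illustration, a minimal sketch of this preprocessing and segmentation chain is given below, assuming the recordings are available as 48 kHz WAV files and that SciPy is used for filtering; the file name and the annotation times are placeholders, not values from the study.

```python
# Minimal sketch of the preprocessing described above (assumptions: WAV input,
# SciPy filtering; "barn_recording.wav" and the segment times are placeholders).
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

FS_EXPECTED = 48_000  # sampling frequency of the recordings (48 kHz)

def bandpass_20hz_20khz(signal: np.ndarray, fs: int) -> np.ndarray:
    """Apply the band-pass filter (20 Hz to 20 kHz, Butterworth, order 4)."""
    sos = butter(4, [20, 20_000], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def cut_vocalization(signal: np.ndarray, fs: int, start_s: float, end_s: float) -> np.ndarray:
    """Cut a single annotated vocalization out of the continuous recording."""
    return signal[int(start_s * fs):int(end_s * fs)]

if __name__ == "__main__":
    fs, raw = wavfile.read("barn_recording.wav")  # placeholder file name
    assert fs == FS_EXPECTED
    filtered = bandpass_20hz_20khz(raw.astype(float), fs)
    # Start and end points were set manually in the study; placeholder values here.
    vocalization = cut_vocalization(filtered, fs, start_s=12.4, end_s=13.1)
```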

2.4. Acoustic Data Processing

Acoustic signals were characterized using selected features of the time signal and frequency spectrum of the vocalizations (Figure 1).
Features are used to describe acoustic data using fewer data points than specified by the sampling frequency. Features can be determined both in the time and frequency domain.
The literature contains a wide range of features that are applied to animal acoustic signals [4,13,22,23]. For this paper, six features were selected as examples to illustrate the relationship between auditory sensory perception and the underlying mathematical signals. Based on the acoustic impression when listening to pig vocalizations, features from the frequency domain, which allow conclusions to be drawn about pitch, and features from the time domain, which reflect the loudness progression over the course of a vocalization, were selected; they are described in more detail below.

2.4.1. Frequency-Based Features—Quantiles of the Frequency Spectrum

Three features from the frequency domain were selected: the first (Q1), second (Q2), and third (Q3) quartile of the cumulative intensity, indicating the frequencies at which 25%, 50% and 75% of the cumulative intensity of the spectrum are reached (Figure 2). The second quartile corresponds to the center of the spectrum. The cumulative intensity is obtained by summing all spectral intensities. Both the frequency values and the distances between the frequency quartiles are characteristic of sounds.
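A minimal sketch of how these frequency quartiles could be computed from a single vocalization segment is shown below; treating the magnitude of the one-sided FFT as the spectral intensity is an assumption of this sketch.

```python
# Illustrative sketch of Q1, Q2 and Q3: the frequencies at which 25%, 50% and 75%
# of the cumulative spectral intensity are reached (assumption: magnitude of the
# one-sided FFT is used as the spectral intensity).
import numpy as np

def frequency_quartiles(signal: np.ndarray, fs: int) -> tuple[float, float, float]:
    spectrum = np.abs(np.fft.rfft(signal))            # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cumulative = np.cumsum(spectrum)
    cumulative /= cumulative[-1]                      # normalize cumulative intensity to 0..1
    q1 = freqs[np.searchsorted(cumulative, 0.25)]     # 25% of cumulative intensity reached
    q2 = freqs[np.searchsorted(cumulative, 0.50)]     # "center" of the spectrum
    q3 = freqs[np.searchsorted(cumulative, 0.75)]
    return float(q1), float(q2), float(q3)
```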

2.4.2. Time-Based Features—Measures of the Amplitude Changes

A time signal contains many local maxima and minima (Figure 3). The time signal with a linear scale was used to determine the parameters variance of the time signal (Var) and cumulative amplitude modulation (∑Ai). Feature ∑Ai indicates the cumulative sum of all height differences of the individual peaks, normalized by the sound duration. Feature ∑Ai reflects the loudness within the acoustic signal. The variance of the time signal (Var) statistically describes the spreading of the values with respect to the average value. Var reflects the variation of loudness of the acoustic signal.
In contrast, the logarithmic amplitude scale (dB-scale) was employed to characterize the mean level of the individual amplitude modulations (Ā). To determine feature Ā as a measure of the average loudness of the acoustic signal, the mean value of all single peak-to-peak dimensions Ai of the logarithmic time signal was determined. The feature Ā reflects the level of loudness of the acoustic signal.
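The sketch below shows one possible implementation of the three time-based features, assuming that the peak-to-peak dimensions Ai are taken as the differences between consecutive local extrema of the time signal (Figure 3); the extrema detection method and the dB reference value are assumptions, not specifications from the study.

```python
# Sketch of the time-based features Var, ΣAi and Ā (assumptions: extrema found
# with argrelextrema, dB computed relative to an amplitude of 1.0).
import numpy as np
from scipy.signal import argrelextrema

def _peak_to_peak(x: np.ndarray) -> np.ndarray:
    """Absolute differences between consecutive local maxima and minima (the Ai)."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    extrema = np.sort(np.concatenate([maxima, minima]))
    return np.abs(np.diff(x[extrema]))

def time_domain_features(signal: np.ndarray, fs: int) -> dict[str, float]:
    duration_s = len(signal) / fs
    var = float(np.var(signal))                                # Var: variance of the linear time signal
    sum_ai = float(_peak_to_peak(signal).sum() / duration_s)   # ΣAi: cumulative modulation per second
    signal_db = 20.0 * np.log10(np.abs(signal) + 1e-12)        # logarithmic (dB) time signal
    a_mean = float(_peak_to_peak(signal_db).mean())            # Ā: mean peak-to-peak level in dB
    return {"Var": var, "SumAi": sum_ai, "A_mean": a_mean}
```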

2.4.3. Mathematical Analysis of the Results

The feature ∑Ai is used as an example to explain the graphical representation of the results (Figure 4).
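As an illustration of this representation, the sketch below draws feature medians in a radar chart with each axis scaled between 0.5*q1 and 2*q3 of the distribution over all pig vocalizations, as described for Figure 4; the feature names and the plotting details are assumptions.

```python
# Sketch of the radar-chart representation explained in Figure 4 (assumptions:
# matplotlib polar axes, six features in a fixed order, per-axis min/max scaling).
import numpy as np
import matplotlib.pyplot as plt

FEATURES = ["SumAi", "A_mean", "Var", "Q1", "Q2", "Q3"]

def radar_chart(medians_per_class: dict[str, np.ndarray],
                q1_all: np.ndarray, q3_all: np.ndarray):
    """medians_per_class maps a class name to its six feature medians (FEATURES order)."""
    axis_min, axis_max = 0.5 * q1_all, 2.0 * q3_all        # axis limits as in Figure 4
    angles = np.linspace(0.0, 2.0 * np.pi, len(FEATURES), endpoint=False)
    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    for name, medians in medians_per_class.items():
        scaled = (medians - axis_min) / (axis_max - axis_min)  # map each axis to 0..1
        ax.plot(np.append(angles, angles[0]), np.append(scaled, scaled[0]), label=name)
    ax.set_xticks(angles)
    ax.set_xticklabels(FEATURES)
    ax.set_yticklabels([])
    ax.legend(loc="upper right")
    return fig
```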

3. Results

3.1. Classification of Pig Sounds

The recorded combinations of behavior and vocalization were classified into the following categories and subcategories. First, the pig sounds were divided into “vocalizations” and “others”. “Others” included pig sounds that were not produced by the vocal tract of the pigs (e.g., “coughing”, “sneezing”, “ear shaking”). Combinations classified as “positive/neutral” included “grunting” and “playing behavior”. Combinations classified as “negative” included agonistic behavior in the form of “conflict over resources” (e.g., food), “fighting”, “oral manipulation” and “physical contact”. A separate category was created for the vocalization “alert”. An overview of the categories and definitions of the pig behavior-associated sounds is given in Table 1.

3.2. Results of Behavioral Observations

In total, 1705 behavior–vocalization combinations were recorded in the observed pens. “Grunting” was by far the most frequent vocalization (59.7% [n = 1018]), followed by “aversive physical contact” (14.1% [n = 240]), “conflict over resources” (9.9% [n = 169]), “alert” (5.5% [n = 93]), “oral manipulation” (4.3% [n = 73]), “fighting” (4.0% [n = 68]) and “playing behavior” (2.6% [n = 44]).

3.3. Analysis of Acoustic Data Set Regarding Behavioral Framework

A sub-sample of the acoustic recordings of the behavior–vocalization combinations was analyzed using mathematical–physical features. A discrepancy between the total number of observations and the number of observations included in the final analysis was due to overlapping vocalization events or the rapid succession of multiple vocalizations, for which an unambiguous identification of specific vocalizations in the audio signal was not possible.
Longer acoustic sequences, such as interactions between two pigs, were divided into multiple acoustic signals. This resulted in a total of 1167 observed vocalizations (Figure 5), of which 33% were positive/neutral vocalizations [n = 380], 26% negative vocalizations [n = 296], 4% alerting vocalizations [n = 51] and 37% other vocalizations [n = 427].
Within the positive/neutral vocalizations [n = 380], 92% were grunting vocalizations [n = 352] and 8% playing vocalizations [n = 28]. The negative vocalizations [n = 296] were divided into 25% resource conflict [n = 74], 18% fight [n = 54], 15% oral manipulation [n = 45] and 42% physical contact [n = 123]. The other vocalizations [n = 427] included 19% coughing [n = 81], 75% sneezing [n = 319], 6% ear shaking [n = 27] and 3% snoring [n = 13]. The alert vocalizations were not further divided into subcategories.

3.4. Analysis of Acoustic Data Set Regarding Acoustic Features

Based on the mathematical analysis presented above, the audio signals were analyzed using the six selected features. To compare the features across different behavioral categories, the resulting data were averaged within groups defined by the vocalization classification framework presented in Table 1.
Table 2 provides a summary of all data that will be presented graphically in the following subsections for reference.
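A sketch of how such a per-class summary (mean, q1, median and q3 for each feature, as in Table 2) could be produced from a table with one row per vocalization is given below; the data frame layout and column names are hypothetical.

```python
# Sketch of a per-class feature summary as in Table 2 (assumption: a pandas
# DataFrame with one row per vocalization, a "class" column and six feature columns).
import pandas as pd

FEATURE_COLS = ["SumAi", "A_mean", "Var", "Q1", "Q2", "Q3"]

def q1(s: pd.Series) -> float:
    return s.quantile(0.25)

def q3(s: pd.Series) -> float:
    return s.quantile(0.75)

def summarize_by_class(df: pd.DataFrame) -> pd.DataFrame:
    """Return mean, q1, median and q3 of each feature per behavioral class."""
    return df.groupby("class")[FEATURE_COLS].agg(["mean", q1, "median", q3])
```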

3.4.1. All Vocalizations

The distributions of the selected features of all recorded vocalizations (1167 samples) resulted in positive skewness of features ∑Ai, Var, Q1, Q2 and Q3, while feature Ā showed no skewness (Figure 6). The medians of ∑Ai, Ā and Var were 4.00 × 10⁻¹, 24.42 dB and 11.30 × 10⁻⁶, respectively. The medians of Q1, Q2 and Q3 were 97 Hz, 538 Hz and 1387 Hz, respectively, which corresponded to a substantial increase from frequency quartile to frequency quartile.

3.4.2. “Negative” and “Positive/Neutral” Vocalizations

Vocalizations were allocated to the acoustic categories “negative” (296 samples) and “positive/neutral” (352 samples) (Figure 7) based on the predefined behavior–vocalization combination (Table 1).
For all six features, the medians of the positive/neutral vocalizations were lower than those of the negative vocalizations. Within both classes, the features Ā and Q2 showed no skewness while the other four features showed positive skewness. The interquartile ranges of the two classes overlapped slightly for ∑Ai, Ā, Q2 and Q3 and strongly for features Var and Q1.

3.4.3. “Oral Manipulation” and “Aversive Physical Contact”

Characterization of behavior–vocalization combinations affecting animal welfare was of major interest in this study. Therefore, the features of the subcategories within the “negative” class “oral manipulation” (45 sounds) and “aversive physical contact” (122 sounds) were characterized (Figure 8).
The statistical position parameters of the distributions of the two classes “oral manipulation” and “aversive physical contact” were similar for all features and also similar to the overall features of negative class vocalizations.
For the class “oral manipulation”, the distributions of ∑Ai, Var, Q2 and Q3 showed positive skewness, while Ā and Q1 showed no skewness.
For the class “aversive physical contact”, the distributions of ∑Ai, Var, Q1 and Q3 had positive skewness and that of Q2 had negative skewness; the distribution of Ā showed no skewness.
The interquartile ranges of all plotted features for both classes overlapped considerably.

3.4.4. “Alert” (Level 4)

From an ethological point of view, the behavior–vocalization combination “alert” (n = 51) was assigned neither to the “positive/neutral” category nor to the “negative” category. Hence, it is presented separately in comparison to the two aforementioned categories. When comparing the features of “alert” sounds with those of sounds allocated to the positive/neutral or negative class, the medians of the negative class were higher than those of the positive/neutral class for all features (Figure 9).
The medians of the “alert” sound features ∑Ai, Var and Q1 were clearly shifted towards larger values (Figure 9). The distributions of Var, Ā and Q1 of “alert” sounds showed no skewness, while the distributions of the features ∑Ai, Q2 and Q3 showed positive skewness.
In the following, the feature Q3 is considered in more detail, in isolation from the other features. For this feature, q1 and the median (704 Hz and 789 Hz) were similar, while q3 was 1207 Hz. Thereby, the feature Q3 of alerting sounds exhibited positive skewness similar to Q3 for positive/neutral vocalizations (q1 = 348 Hz, median = 399 Hz and q3 = 786 Hz). In contrast, Q3 of negative vocalizations not only exhibited overall higher frequencies (q1 = 1049 Hz, median = 1804 Hz and q3 = 3449 Hz), but also a less pronounced positive skewness and a considerable interquartile range between q1 and the median.
The interquartile ranges of Var differed considerably between the three classes. The interquartile range of Var of the negative class was more than four times larger than that of the positive/neutral class. Var of the “alert” class was more than three times higher than that of the negative class.

3.4.5. “Sneezing” and “Coughing” (Level 4)

As “sneezing” (295 sounds) and “coughing” (76 sounds) are reflexive sounds, the distributions of their features were compared with the distribution of the sum of all other vocalizations (Figure 10).
For all features, the medians for “sneezing” were larger than those for “all vocalizations”, with Q1 at 500 Hz, Q2 at 1968 Hz and Q3 at 4353 Hz. The distributions of ∑Ai, Var and Q1 showed positive skewness, Ā and Q2 no skewness and Q3 negative skewness.
For “coughing”, the medians of the features ∑Ai, Var and Q1 were smaller, while Ā was similar and Q2 and Q3 were larger than those of “pig vocalizations”. Feature Q3 was the only feature without skewness; all other features showed positive skewness.

4. Discussion

The aim of this study was (i) to describe and record the vocal repertoire and associated behavior that fattening pigs show in a conventional housing system, (ii) to categorize the vocalizations into positive and negative sounds related to welfare based on the exhibited behavior and (iii) to characterize the recorded vocalizations based on acoustic parameters. Vocalization-associated behavior of pigs was recorded under practical conditions in a conventional housing system for fattening pigs because the results of the analysis form the basis for an automated real-time warning system for negative vocalizations. The categorization and interpretation of vocalizations on pig farms is a promising approach to identify behaviors that negatively impact pig welfare. This forms the basis for enabling early intervention and preventing pain and discomfort. In this study, direct observation of pig behavior was chosen as an established ethological sampling method, allowing immediate assignment of specific vocalizations to their associated behavior.
In this study, grunts accounted for most of the recorded vocalizations, followed by vocalizations that occurred in situations considered negative for the vocalizing pig. These two categories also made up most of the recorded vocalizations in a study of wild boars [4]. Situations assessed as negative for the vocalizing pig included conflict between pigs at the automatic feeder, oral manipulation of a pig by a conspecific or aversive physical contact between a lying pig and a conspecific. This is in accordance with a previous study, in which screams and squeals were reported during negative situations, whereas grunts were more common in positive situations [24]. An association between a higher number of high-frequency calls and negative situations was also found in prior research [25]. The valence “positive/neutral” or “negative” was assigned by the human observer based on the observed combination of vocalization and behavior. Negative vocalizations are part of the normal vocal repertoire of wild boars under natural conditions [4]. Therefore, it is difficult to determine at which threshold negative vocalizations are no longer a part of normal behavior, but rather an indicator of stress and potentially welfare-threatening situations. This represents a limitation of our study. Future research could, for example, attempt to establish a relationship between the type and frequency of negative vocalizations and blood concentrations of glucocorticoids as an indicator of stress.
During the direct observations, it was noticeable that the same behavior could be associated with different sounding vocalizations. The mounting of a lying pig by a pen-mate could result either in a short and energetic vocalization of the affected pig (e.g., in cases where the behavior was not tolerated) or in a prolonged, low-frequency vocalization (e.g., when the mounting behavior was not immediately terminated by the lying pig). These observations were confirmed by the acoustic analyses of all negative vocalizations, which indicated a higher variance in features compared to positive vocalizations. This variance makes it very difficult to further distinguish the individual negative vocalizations based on their acoustic features. A specific disturbing action of a pig targeting a pen mate might have led to varying degrees of discomfort depending on the character of the affected pig and the painfulness of action. It is known that pig vocalizations change depending on the degree of arousal [26].
In this study, as the first step, pig sounds were divided into the categories “vocalizations” and “others”. While the category “vocalizations” included vocalizations that were most often accompanied by a specific behavior of the respective pig or a conspecific, and were thus a reaction to this, the “others” category included sounds that were either based on an intrinsic stimulus (e.g., “cough” or “ear shaking”) or reflex (“sneeze”) and were therefore not vocalizations. Subsequently, the true vocalizations were divided into the sub-categories “negative” and “positive/neutral” according to the observed accompanying behavior. This differentiation will serve as the basis for development of an automated system that allows the identification of situations considered hazardous in terms of animal welfare.
The results of the comparison of the acoustic features between the categories “negative” and “positive/neutral” show that these two main categories can be separated from each other by the selected mathematical features. Both the mean values of the frequency quartiles and the analyzed amplitude parameters were clearly distinguishable between “negative” and “neutral/positive” vocalizations. In contrast, the quartiles of the distributions of the respective classes overlapped for the other four remaining features. In our study, the category “positive/neutral” included grunts (i.e., contact calls) and barks that were associated with play behavior. In the acoustic evaluation, these were characterized by lower feature values compared to “negative” behavior. The frequency quartiles also indicated that these sounds were mostly in low-frequency ranges. “Negative” vocalizations, on the other hand, were mainly characterized by high-frequency ranges and considerably higher dispersion. The parameters describing the amplitudes likewise showed higher values and higher variation compared to the “positive/neutral” vocalizations. “Negative” vocalizations often occurred during conflicts between two pigs. Pain or discomfort can therefore often be regarded as a trigger for these vocalizations. The result was arousal of the affected pig and a strong motivation to avoid the situation. This arousal can be regarded as the reason for the high frequencies of “negative” vocalizations.
The differentiation of more complex behaviors based on their acoustic features (e.g., resource conflict at the trough or manipulation of a conspecific) was not possible in our study. This can be clearly seen in the comparison of the behaviors “manipulation by a conspecific” and “aversive physical contact”. The results of the mathematical analyses showed that the statistical parameter distribution of the acoustic features was similar. Therefore, it was not possible to subdivide the underlying behavior of these “negative” vocalizations by acoustic features. This suggests a rather continuous transition between different negative vocalizations instead of discrete transitions. This is supported by prior research in which a cluster analysis of piglet vocalizations showed no clear distinction, but more a blurred transition between different call categories [5]. The authors concluded that the vocal repertoire of pigs can be considered more continuous than discrete, which makes classification more challenging.
In contrast to our study, a previous study was able to differentiate between sounds with negative valence [6]. However, in that study, pigs were kept under experimental conditions and artificially exposed to different situations associated with strong emotions leading to distinct expressions [6]. Hence, these results cannot be compared to results in our study in which most frequently transitional behavioral events between a comfortable and an uncomfortable situation were recorded. Automatic detection of pig screams during feeding with the help of an artificial neural network has been realized in the past [19,27]. So far, no translational project has followed the scientific reports, although initial results were promising. In a more recent study, tail biting events were detected successfully with the help of acoustic parameters [20]. The automatic detection focused on the screams of piglets during the rearing period.
The comparison of acoustic features of “sneezing” and “coughing” resulted in a higher variance within the sound “sneezing” compared to the sound “coughing”. Both the frequency quartiles and the amplitude parameters showed higher mean values for “sneezing” behavior, as well as higher variance compared to “coughing”. The result indicated that “sneezing” was higher in frequency than “coughing”, with a higher degree of dispersion in the frequency distributions of the amplitudes and the amplitude parameters themselves. This higher degree of dispersion was surprising, as the “sneezing” behavior is based on a physiological reflex, i.e., this sound in pigs is relatively free of conscious influence. This would suggest a certain reproducibility of this pig sound, which was obviously not the case in our study. One explanation is that “sneezing” can be compared acoustically to an acute onset of white noise. White noise is characterized by a wide frequency spectrum (from low to high frequencies). This leads to an upward shift in the mean value compared to the other sounds and to a high variance of the sound “sneezing”.
Coughing occurs in a wide variety of forms (e.g., “dry” or “wet” coughing or as a “roaring cough”). However, our results suggested a larger homogeneity within the coughing sound compared to “sneezing”. This and the fact that coughing is a sign of respiratory disease make it a suitable parameter for automatic detection systems, which are already commercially available, but still under development [28,29,30]. A potential discrimination between coughing due to infection and laboratory-induced coughing based on acoustic analysis was published, but has no practical impact in pig production [31].
In contrast to coughing, barking must be interpreted in the context of behavior because pigs bark during “playing behavior” but use it also as an alarm call [1], which was assigned to the category “alert” in our study. While playing behavior was classified as “positive/neutral”, “alert” was classified as neither “negative” nor “positive/neutral” in our study, so that the categorization of “alert” is debatable. Alertness in pigs does not occur in a positive context but is not preceded by a clear negative context as is, for example, agonistic behavior. A high incidence of “alert” vocalizations should result in an inspection of the housing conditions because frequent alertness of pigs is certainly associated with a physiological stress reaction. Therefore, frequent alertness can lead to short-term and also chronic stress in pigs with negative consequences for pig welfare and health. Barking also conveys information about the emitter of this vocalization. Juvenile pigs showed a higher responsiveness to alarm calls of sows compared to alarm calls of other juvenile pigs [32]. This could either indicate that the pigs recognized individuals or that the sound of barking differs between juvenile and adult pigs. The vocal tract, and therefore the vocalizations of pigs, change with increasing body size and as a result pigs may be able to draw conclusions about the size of a vocalizing pig [32]. Our study focused on fattening pigs; therefore, the full vocal repertoire of pigs in different age groups was not represented by our work. Also, other studies focused on vocalization of pigs of specific age-groups, such as sows during nursing [8] and suckling piglets [7,9,14], so that data should be combined and data from missing age-groups should be recorded and evaluated to complete a porcine sound atlas.
The observations in this study suggest that the housing system may have a significant impact on the vocalizations observed. The proportion of negative vocalizations (e.g., caused by brief conflicts over resources with pen mates or vocalizations after manipulation or harassment by a conspecific) observed in this study might be influenced by the high stocking density on this conventional pig farm. Prior research showed that pigs had fewer bite marks on the body [33] and fewer tail injuries [34] as stocking density decreased; these injuries are considered to result from biting by other pigs and thus from agonistic behavior. Since agonistic behavior (e.g., fighting or resource conflicts at the trough) was often associated with vocalizations in our study, it can be assumed that a lower stocking density would have resulted in less agonistic behavior and consequently fewer vocalizations in our study as well. Another factor mentioned in the previously cited study [33] is group size; in larger groups, there were fewer injuries caused by biting compared to smaller groups. It is therefore conceivable that, if our study had been conducted in a large pen, a lower proportion of vocalizations attributable to agonistic behavior would have been recorded. Under natural or semi-natural housing conditions, fewer conflicts are also to be expected compared to conventional housing systems, which would presumably lead to a lower proportion of negative vocalizations. The type and quality of bedding or enrichment material could also influence the vocalizations observed. It is known that access to rooting material (e.g., wood chips) or high amounts of straw leads to a reduction in oral manipulation of pen mates [35,36]; therefore, negative vocalizations might be less frequent under these conditions. On our study farm, no straw was offered, only alfalfa pellets in a low-stimulus environment, so that the proportion of negative vocalizations might be due to a higher degree of manipulation of conspecifics than would be expected in systems providing straw or straw bedding.
In this study, the feeding system consisted of automatic feeders and feed was provided ad libitum. This repeatedly led to brief conflicts between two pigs over the food resource, which were often accompanied by vocalization. It is known that pigs fed ad libitum with vertical feeders show more injuries indicative of agonistic behavior compared to pigs fed with a liquid feeding system [33]. Furthermore, it is known that restricted access to feed is a common reason for an increase in tail biting [37,38]. Therefore, it can be assumed that agonistic behavior and associated vocalizations may occur less frequently in systems with feeding strategies that reduce feed competition, such as liquid feeding or several feeding stations per pen. Additionally, pen structure and flooring can have an influence on vocalizations: pen fouling (elimination area on the solid floor and lying area on the smaller part of the slatted floor) was evident on one observation day in our study. Consequently, pigs stepped on each other relatively often, which increased the negative vocalizations on the respective sampling day. It is known that increasing ambient temperatures [39,40] and increasing body weights [40] can subsequently lead to pen fouling. Therefore, an indirect relationship between housing conditions and the occurrence of vocalizations is also possible in this context. In addition, the mounting behavior of pigs motivated to reach enrichment objects (e.g., metal chains) or during playing led to increased negative vocalization of mounted pigs in our study. A limitation of our study is that it was conducted in only one housing system for fattening pigs. Since an influence of the housing system on the vocalizations observed is likely, it is necessary to repeat these investigations in additional housing systems for fattening pigs in order to confirm our results.
This work could contribute to the development of an automatic warning system for potentially welfare-threatening situations based on mathematically analyzed and characterized pig vocalizations using objective acoustic parameters that were recorded under practical conditions. The next step would be to apply artificial intelligence to verify whether the observed separation of the categories “positive/neutral” and “negative” vocalizations can also be achieved using, for example, machine learning techniques or a neural network. Subsequently, an attempt could be made to implement a real-time monitoring system in a pig barn. A particular challenge in this process will be filtering out background noises from the pigs’ environment that could impair the analysis of the vocalizations (e.g., an operating feed chain). Artificial intelligence (a neural network) has already been successfully applied to pig vocalizations under experimental conditions [6], and there have even been promising attempts to apply it in practical settings to detect stress vocalizations in pigs [19].
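As an illustration of this proposed next step, the sketch below estimates how well a standard off-the-shelf classifier separates the two valence classes from the six features using cross-validation; this is not the system developed in this work, and the data layout and model choice are assumptions.

```python
# Hedged sketch of the proposed machine-learning step: can the six features separate
# "positive/neutral" from "negative" vocalizations? (assumptions: pandas DataFrame with
# one row per vocalization, a "valence" label column, RandomForest as an example model).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURE_COLS = ["SumAi", "A_mean", "Var", "Q1", "Q2", "Q3"]

def estimate_separability(df: pd.DataFrame) -> float:
    """Return the mean cross-validated balanced accuracy of a baseline classifier."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, df[FEATURE_COLS], df["valence"],
                             cv=5, scoring="balanced_accuracy")
    return float(scores.mean())
```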

5. Conclusions

An objective differentiation between “positive” and “negative” pig vocalizations under practical conditions is possible based on mathematical–physical parameters. The mathematical–physical characterization of “positive” and “negative” vocalizations and their distinguishability form the framework for the future development of automated acoustic detection systems for situations impacting pig welfare. The separation of single behaviors based on their associated vocalizations was not possible due to the high similarity of the emitted vocalizations.

Author Contributions

Conceptualization, S.C.L.F. and I.H.-P.; methodology, T.J.N. and S.C.L.F.; software, K.E.B. and S.C.L.F.; validation, T.J.N., K.E.B., I.H.-P. and S.C.L.F.; formal analysis, K.E.B.; investigation, T.J.N. and K.E.B.; resources, I.H.-P. and S.C.L.F.; data curation, T.J.N. and K.E.B.; writing—original draft preparation, T.J.N. (introduction, materials and methods, results, discussion) and K.E.B. (materials and methods, results); writing—review and editing, I.H.-P. and S.C.L.F.; visualization, T.J.N. and K.E.B.; supervision, I.H.-P. and S.C.L.F.; project administration, I.H.-P.; funding acquisition, I.H.-P. and S.C.L.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Federal Ministry of Food and Agriculture (BMEL) based on a decision of the Parliament of the Federal Republic of Germany, granted by the Federal Office for Agriculture and Food (BLE; grant number 28N-6-029-01 (I.H.-P.) and 28N-6-029-04 (S.C.L.F.)). We acknowledge financial support provided by the Open Access Publication Fund of the University of Veterinary Medicine Hannover Foundation.

Institutional Review Board Statement

The animal study protocol was approved by the Research Ethics Committee of the University of Veterinary Medicine Hannover, Foundation (protocol code TiHo_EA_26_05-25 and date of approval: 20 July 2025) for studies involving animals.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data of the study are available from the corresponding authors upon reasonable request.

Acknowledgments

The Verbund Transformationsforschung agrar Niedersachsen (trafo:agrar) is acknowledged for the coordination of the project “SmartPigHome” in which this study was conducted.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
q1: First quartile (25%) of the distribution
q2: Second quartile (50%) of the distribution
q3: Third quartile (75%) of the distribution
Var: Variance of the time signal
dB: Decibel
Ā: Mean level of the individual amplitude modulations
∑Ai: Cumulative amplitude modulation
Q1: First quartile (25%) of the cumulative frequency signal
Q2: Second quartile (50%) of the cumulative frequency signal
Q3: Third quartile (75%) of the cumulative frequency signal

References

  1. Kiley, M. The Vocalizations of Ungulates, their Causation and Function. Z. Tierpsychol. 1972, 31, 171–222. [Google Scholar] [CrossRef]
  2. Fraser, D. The vocalizations and other behaviour of growing pigs in an “open field” test. Appl. Anim. Ethol. 1974, 1, 3–16. [Google Scholar] [CrossRef]
  3. Klingholz, F.; Meynhardt, H. Lautinventare der Säugetiere—Diskret oder kontinuierlich? Z. Tierpsychol. 1979, 50, 250–264. [Google Scholar]
  4. Garcia, M.; Gingras, B.; Bowling, D.L.; Herbst, C.T.; Boeckle, M.; Locatelli, Y.; Tecumseh Fitch, W. Structural Classification of Wild Boar (Sus scrofa) Vocalizations. Ethology 2015, 122, 329–342. [Google Scholar] [CrossRef]
  5. Tallet, C.; Linhart, P.; Policht, R.; Hammerschmidt, K.; Simecek, P.; Kratinova, P.; Spinka, M. Encoding of Situations in the Vocal Repertoire of Piglets (Sus scrofa): A Comparison of Discrete and Graded Classifications. PLoS ONE 2013, 8, e71841. [Google Scholar] [CrossRef] [PubMed]
  6. Briefer, E.F.; Sypherd, C.C.R.; Linhart, P.; Leliveld, L.M.C.; Padilla de la Torre, M.; Read, E.R.; Guérin, C.; Deiss, V.; Monestier, C.; Rasmussen, J.H.; et al. Classification of pig calls produces from birth to slaughter according to their emotional valence and context of production. Sci. Rep. 2022, 12, 3409. [Google Scholar] [CrossRef]
  7. Jensen, P.; Algers, B. An ethogram of piglet vocalizations during suckling. Appl. Anim. Ethol. 1984, 11, 237–248. [Google Scholar] [CrossRef]
  8. Algers, B. Nursing in pigs: Communicating needs and distributing resources. J. Anim. Sci. 1993, 71, 2826–2831. [Google Scholar] [CrossRef]
  9. Appleby, M.C.; Weary, D.M.; Taylor, A.A.; Illmann, G. Vocal Communication in Pigs: Who are Nursing Piglets Screaming at? Ethology 1999, 105, 881–892. [Google Scholar] [CrossRef]
  10. Weary, D.M.; Braithwaite, L.A.; Fraser, D. Vocal response to pain in piglets. Appl. Anim. Behav. Sci. 1998, 56, 161–172. [Google Scholar] [CrossRef]
  11. Marx, G.; Horn, T.; Thielebein, J.; Knubel, B.; von Borell, E. Analysis of pain-related vocalization in young pigs. J. Sound. Vib. 2003, 266, 687–698. [Google Scholar] [CrossRef]
  12. Puppe, B.; Schön, P.C.; Tuchscherer, A.; Manteuffel, G. Castration-induced vocalisation in domestic piglets, Sus scrofa: Complex and specific alterations of the vocal quality. Appl. Anim. Behav. Sci. 2005, 95, 67–78. [Google Scholar] [CrossRef]
  13. Von Borell, E.; Bünger, B.; Schmidt, T.; Horn, T. Vocal-type classification as a tool to identify stress in piglets under on-farm conditions. Anim. Welf. 2009, 18, 407–416. [Google Scholar] [CrossRef]
  14. Illmann, G.; Hammerschmidt, K.; Spinka, M.; Tallet, C. Calling by Domestic Piglets during Simulated Crushing and Isolation: A Signal of Need? PLoS ONE 2012, 8, e83529. [Google Scholar] [CrossRef] [PubMed]
  15. Weary, D.M.; Lawson, G.L.; Thompson, B.K. Sows shows stronger responses to isolation calls of piglets associated with greater levels of piglet need. Anim. Behav. 1996, 52, 1247–1253. [Google Scholar] [CrossRef]
  16. Illmann, G.; Schrader, L.; Spinka, M.; Sustr, P. Acoustical mother-offspring recognition in pigs (Sus scrofa domestica). Behaviour 2002, 139, 487–505. [Google Scholar] [CrossRef]
  17. Illmann, G.; Neuhauserova, K.; Pokorna, Z.; Chaloupkova, H.; Simeckova, M. Maternal responsiveness of sows towards piglet’s screams during the first 24h postpartum. Appl. Anim. Behav. Sci. 2008, 112, 248–259. [Google Scholar] [CrossRef]
  18. Marchant, J.N.; Whittaker, X.; Broom, D.M. Vocalisations of the adult female domestic pig during a standard human approach test and their relationships with behavioural and heart rate measures. Appl. Anim. Behav. Sci. 2001, 72, 23–39. [Google Scholar] [CrossRef]
  19. Manteuffel, G.; Schön, P.C. STREMODO, an innovative technique for continuous stress assessment of pigs in housing and transport. Arch. Tierzucht. 2004, 2, 173–181. [Google Scholar]
  20. Heseker, P.; Bergmann, T.; Scheumann, M.; Traulsen, I.; Kemper, N.; Probst, J. Detecting tail biters by monitoring pig screams in weaning pigs. Sci. Rep. 2024, 14, 4523. [Google Scholar] [CrossRef]
  21. Bollmann, K.E.; Nicolaisen, T.J.; Ganster, M.; Herter, S.; Hennig-Pauka, I.; Fischer, S.C.L. Untersuchung des Einsatzes von akustischer Überwachung in einem Assistenzsystem für die Schweinehaltung. In Fortschritte der Akustik—DAGA 2024, Proceedings of Jahrestagung für Akustik—DAGA 2024, Hannover, Germany, 18–21 March 2024; Deutsche Gesellschaft für Akustik: Berlin, Germany, 2024; pp. 953–956. [Google Scholar]
  22. Chan, W.Y. The Meaning of Barks: Vocal Communication of Fearful and Playful Affective States in Pigs. Ph.D. Thesis, Washington State University, Pullman, WA, USA, December 2011. [Google Scholar]
  23. Charlton, B.D.; Zhihe, Z.; Snyder, R.J. Vocal cues to identity and relatedness in giant pandas (Ailuropoda melanoleuca). J. Acoust. Soc. Am. 2009, 126, 2721–2732. [Google Scholar] [CrossRef]
  24. Maigrot, A.L.; Hillmann, E.; Briefer, E.F. Encoding of Emotional Valence in Wild Boar (Sus scrofa) Calls. Animals 2018, 8, 85. [Google Scholar] [CrossRef] [PubMed]
  25. Imfeld-Müller, S.; Van Wezemael, L.; Stauffacher, M.; Gygax, L.; Hillmann, E. Do pigs distinguish between situations of different emotional valences during anticipation? Appl. Anim. Behav. Sci. 2011, 131, 86–93. [Google Scholar] [CrossRef]
  26. Linhart, P.; Ratcliffe, V.F.; Reby, D.; Spinka, M. Expression of Emotional Arousal in Two Different Piglet Call Types. PLoS ONE 2015, 10, e0135414. [Google Scholar] [CrossRef] [PubMed]
  27. Schön, P.C.; Puppe, B.; Manteuffel, G. Automated recording of stress vocalisations as a tool to document impaired welfare in pigs. Anim. Welf. 2004, 13, 105–110. [Google Scholar] [CrossRef]
  28. Guarino, M.; Jans, P.; Costa, A.; Aerts, J.M.; Berckmans, D. Field test of algorithm for automatic cough detection in pig houses. Comput. Electron. Agric. 2008, 62, 22–28. [Google Scholar] [CrossRef]
  29. Shen, W.; Tu, D.; Yin, Y.; Bao, J. A new fusion feature based on convolutional neural network for pig cough recognition in field situations. Inform. Proc. Agric. 2021, 8, 573–580. [Google Scholar] [CrossRef]
  30. Shen, W.; Ji, N.; Yin, Y.; Dai, B.; Tu, D.; Sun, B.; Hou, H.; Kou, S.; Zhao, Y. Fusion of acoustic and deep features for pig cough sound detection. Comput. Electron. Agric. 2022, 197, 106994. [Google Scholar] [CrossRef]
  31. Ferrari, S.; Silva, M.; Guarino, M.; Aerts, J.M.; Berckmans, D. Cough sound analysis to identify respiratory infection in pigs. Comput. Electron. Agric. 2008, 64, 318–325. [Google Scholar] [CrossRef]
  32. Chan, W.; Cloutier, S.; Newberry, R.C. Barking pigs: Differences in acoustic morphology predict juvenile responses to alarm calls. Anim. Behav. 2011, 82, 767–774. [Google Scholar] [CrossRef]
  33. Andersen, I.L.; Ocepek, M.; Thingnes, S.L.; Newberry, R.C. Welfare and performance of finishing pigs on commercial farms: Associations with group size, floor space per pig and feed type. Appl. Anim. Behav. Sci. 2023, 105979. [Google Scholar] [CrossRef]
  34. Laskoski, F.; Faccin, J.E.G.; Vier, C.M.; Goncalves, M.A.D.; Orlando, U.A.D.; Kummer, R.; Mellagi, A.P.G.; Bernardi, M.L.; Wentz, I.; Bortolozzo, F.P. Effects of pigs per feeder hole and group size on feed intake onset, growth performance, and ear and tail lesions in nursery pigs with consistent space allowance. J. Swine Health Prod. 2019, 27, 12–18. [Google Scholar] [CrossRef]
  35. Jensen, M.B.; Pedersen, L.J. Effects of feeding level and access to rooting material on behaviour of growing pigs in situations with reduced feeding space and delayed feeding. Appl. Anim. Behav. Sci. 2010, 123, 1–6. [Google Scholar] [CrossRef]
  36. Pedersen, L.J.; Herskin, M.S.; Forkman, B.; Halekoh, U.; Kristensen, K.M.; Jensen, M.B. How much is enough? The amount of straw necessary to satisfy pigs’ need to perform exploratory behaviour. Appl. Anim. Behav. Sci. 2014, 160, 46–55. [Google Scholar] [CrossRef]
  37. Hansen, L.L.; Hagelsø, A.M.; Madsen, A. Behavioural results and performance of bacon pigs fed ad libitum from one or several self-feeders. Appl. Anim. Ethol. 1982, 8, 307–333. [Google Scholar] [CrossRef]
  38. Moinard, C.; Mendl, M.; Nicol, C.J.; Green, L.E. A case-control study of on-farm risk factors for tail biting in pigs. Appl. Anim. Behav. Sci. 2003, 81, 333–355. [Google Scholar] [CrossRef]
  39. Aarnink, A.J.A.; Schrama, J.W.; Heetkamp, M.J.W.; Stefanowska, J.; Huynh, T.T.T. Temperature and body weight affect fouling of pig pens. J. Anim. Sci. 2006, 84, 2224–2231. [Google Scholar] [CrossRef] [PubMed]
  40. Savary, P.; Gygax, L.; Wechsler, B.; Hauser, R. Effect of a synthetic plate in the lying area on lying behaviour, degree of fouling and skin lesions at the leg joints of finishing pigs. Appl. Anim. Behav. Sci. 2009, 118, 20–27. [Google Scholar] [CrossRef]
Figure 1. Illustration of a pig vocalization in different domains. From left to right: time domain (linear scale), time domain (transformed to amplitude in dB scale) and frequency domain. Abbreviations: s = seconds; dB = decibel; Hz = Hertz.
Figure 2. Illustration of a sound’s frequency spectrum, cumulative intensity as well as the three characteristic frequency quartiles. Abbreviation: Hz = Hertz.
Figure 3. Visualization of the amplitude-based features ∑Ai and Ā extracted from the time signal. The individual peak-to-peak dimension Ai (green arrows) is the difference between two consecutive extrema (minimum [min] and maximum [max]) within the time signal. Abbreviation: s = seconds.
Figure 4. (a) Distribution of ∑Ai as a boxplot, with the box representing the interquartile range (IQR) of the data (the 50% of the data between the 25% and 75% quartiles). The lines inside the boxes indicate the median. The box is bounded by the lower quartile (q1) and the upper quartile (q3), with the interquartile range IQR = q3 − q1; the whiskers are defined as low = q1 − 1.5*IQR or the minimum of the distribution and high = q3 + 1.5*IQR or the maximum of the distribution. (b) Medians, q1 and q3 were plotted in radar charts. The solid line indicates the median and the marked area the interquartile range of the distribution. The axes of the radar chart were selected for all distributions shown so that the minimum of the axis corresponds to 0.5*q1 and the maximum to 2*q3 of the distribution over all pig vocalizations.
Figure 5. Overview of the subsample selected for acoustic analysis of pig sounds, including the proportional allocation of sounds to the specific behavior–vocalization categories “positive/neutral” and “negative”, as well as to the category “others”.
Figure 6. Statistical analysis of audio data of all recorded pig vocalizations. (a) Time-dependent features (Ā = mean level of the individual amplitude modulations, Var = variance of the time signal, ∑Ai = cumulative amplitude modulation) and (b) frequency-dependent features (Q1, Q2 and Q3 = first (25%), second (50%) and third (75%) quartile of the cumulative frequency signal).
Figure 7. Statistical analysis of audio data for sounds in negative and positive/neutral situations. (a) Time-dependent features (Ā = mean level of the individual amplitude modulations, Var = variance of the time signal, ∑Ai = cumulative amplitude modulation) and (b) frequency-dependent features (Q1, Q2 and Q3 = first (25%), second (50%) and third (75%) quartile of the cumulative frequency signal).
Figure 8. Comparison of the statistical analysis of audio data for sounds during oral manipulation and physical contact. (a) Time-dependent features (Ā = mean level of the individual amplitude modulations, Var = variance of the time signal, ∑Ai = cumulative amplitude modulation) and (b) frequency-dependent features (Q1, Q2 and Q3 = first (25%), second (50%) and third (75%) quartile of the cumulative frequency signal).
Figure 9. Statistical analysis of audio data for sounds during negative, positive/neutral and alert situations. (a) Time-dependent features (Ā = mean level of the individual amplitude modulations, Var = variance of the time signal, ∑Ai = cumulative amplitude modulation); the display area was adjusted with upper limits corresponding to 5.5*q3 of the distribution over all pig vocalizations, and (b) frequency-dependent features (Q1, Q2 and Q3 = first (25%), second (50%) and third (75%) quartile of the cumulative frequency signal).
Figure 10. Statistical analysis of audio data of all pig vocalizations and sounds during sneezing and coughing. (a) Time-dependent features (Ā = mean level of the individual amplitude modulations, Var = variance of the time signal, ∑Ai = cumulative amplitude modulation) and (b) frequency-dependent features (Q1, Q2 and Q3 = first (25%), second (50%) and third (75%) quartile of the cumulative frequency signal), with the maxima of the axes at 3*q3 of the distribution over all pig vocalizations for the respective features.
Table 1. Classification of pig behavior associated with sounds and definition.

Sound Classification | Observed Behavior and Sound | Definition
Vocalizations (positive/neutral) | Grunting | Grunting by a pig, often as a contact call during social interaction
Vocalizations (positive/neutral) | Playing behavior | Pig or a group of pigs scampering around the pen during vocalization (barking)
Vocalizations (negative) | Conflict over resources | Agonistic behavior at the feeder, drinking nipple or enrichment material
Vocalizations (negative) | Fight | Agonistic behavior between two pigs without recognizable resource conflict, rank fight
Vocalizations (negative) | Oral manipulation | A pig manipulates another pig with its mouth/nose, e.g., ear nibbling, tail nibbling/biting, belly nosing
Vocalizations (negative) | Aversive physical contact | A pig steps aversively on another sitting or lying pig or mounts a standing pig
Alert | Alert | Pigs bark, abruptly stopping their previous behavior, stand still, raise their heads and erect their ears
Others | Coughing | Explosive expiratory movement generated by the respiratory muscles
Others | Sneezing | Explosive expulsion of air through the nose
Others | Ear shaking | Rapid, repeated movement of the head from side to side and vice versa; ears hit the ipsilateral half of the pig’s face when the direction of movement is changed
Table 2. Acoustic features within the different behavior–vocalization combinations. Time-dependent features (∑Ai = cumulative amplitude modulation, Ā = mean level of the individual amplitude modulations, Var = variance of the time signal) and frequency-dependent features (Q1, Q2 and Q3 = first (25%), second (50%) and third (75%) quartile of the cumulative frequency signal). Abbreviations: dB = decibel; Hz = Hertz.

Feature | Statistic | All Vocalizations | Negative | Positive/Neutral | Oral Manipulation | Physical Contact | Alert | Sneezing | Coughing
Number of sounds | | 1167 | 296 | 380 | 45 | 123 | 51 | 319 | 81
∑Ai/10⁻¹ | Mean | 14.55 | 25.70 | 2.99 | 13.81 | 27.33 | 24.88 | 20.46 | 3.42
∑Ai/10⁻¹ | q1 | 1.80 | 2.71 | 1.09 | 2.68 | 3.12 | 6.09 | 5.18 | 1.57
∑Ai/10⁻¹ | med | 4.00 | 5.63 | 1.68 | 5.41 | 6.94 | 11.82 | 8.88 | 2.41
∑Ai/10⁻¹ | q3 | 10.34 | 15.88 | 2.83 | 12.08 | 15.30 | 23.27 | 20.48 | 4.10
Ā/dB | Mean | 24.25 | 27.44 | 19.37 | 27.43 | 28.30 | 21.63 | 27.78 | 24.65
Ā/dB | q1 | 20.31 | 23.62 | 16.58 | 23.89 | 24.73 | 19.57 | 26.18 | 22.55
Ā/dB | med | 24.42 | 28.27 | 19.37 | 27.87 | 29.24 | 21.88 | 28.08 | 24.42
Ā/dB | q3 | 28.50 | 31.41 | 21.77 | 31.24 | 31.93 | 23.00 | 29.44 | 26.37
Var/10⁻⁶ | Mean | 39.64 | 71.25 | 14.06 | 39.96 | 74.01 | 145.22 | 34.53 | 10.62
Var/10⁻⁶ | q1 | 6.42 | 7.32 | 5.04 | 6.52 | 8.29 | 28.43 | 8.57 | 5.10
Var/10⁻⁶ | med | 11.30 | 16.63 | 7.86 | 13.22 | 17.18 | 69.32 | 14.11 | 6.95
Var/10⁻⁶ | q3 | 24.69 | 36.34 | 12.56 | 25.39 | 36.71 | 132.71 | 32.94 | 12.29
Q1/Hz | Mean | 396.26 | 421.32 | 147.83 | 407.81 | 429.94 | 373.63 | 744.50 | 239.58
Q1/Hz | q1 | 81.03 | 103.08 | 65.81 | 80.39 | 99.25 | 282.17 | 155.39 | 64.31
Q1/Hz | med | 97.07 | 238.31 | 97.98 | 260.06 | 288.49 | 370.87 | 449.46 | 172.28
Q1/Hz | q3 | 445.43 | 630.80 | 232.76 | 452.41 | 659.25 | 442.52 | 1314.86 | 374.86
Q2/Hz | Mean | 1017.67 | 1073.32 | 319.41 | 1034.70 | 1084.53 | 652.31 | 2003.6 | 693.39
Q2/Hz | q1 | 298.78 | 341.38 | 238.27 | 361.91 | 398.70 | 479.62 | 1078.14 | 437.36
Q2/Hz | med | 538.36 | 870.07 | 304.47 | 888.47 | 947.00 | 581.77 | 1936.33 | 599.37
Q2/Hz | q3 | 1494.33 | 1461.60 | 339.66 | 1517.47 | 1378.16 | 761.64 | 2808.91 | 906.52
Q3/Hz | Mean | 2113.61 | 2287.67 | 663.51 | 2454.58 | 2271.59 | 997.44 | 4127.72 | 1529.76
Q3/Hz | q1 | 593.67 | 1048.97 | 348.58 | 958.05 | 1176.08 | 704.80 | 2945.18 | 1246.30
Q3/Hz | med | 1386.88 | 1803.91 | 399.03 | 2268.11 | 2105.52 | 789.55 | 4244.95 | 1554.74
Q3/Hz | q3 | 3339.61 | 3448.64 | 786.19 | 3754.37 | 3255.79 | 1207.50 | 5142.95 | 1785.54