Article

TinyML-Based Swine Vocalization Pattern Recognition for Enhancing Animal Welfare in Embedded Systems

by Tung Chiun Wen 1, Caroline Ferreira Freire 1, Luana Maria Benicio 2, Giselle Borges de Moura 3, Magno do Nascimento Amorim 1,4 and Késia Oliveira da Silva-Miranda 1,*

1 Biosystems Engineering Department, “Luiz de Queiroz” College of Agriculture, University of São Paulo (Esalq/USP), Piracicaba, São Paulo 13418-900, Brazil
2 Agricultural & Biological Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
3 Department of Agricultural Engineering, Federal University of Lavras, Lavras, Minas Gerais 37200-900, Brazil
4 Department of Agricultural Engineering, Instituto Federal Goiano (IF Goiano), Urutaí, Goiás 75790-000, Brazil
* Author to whom correspondence should be addressed.
Inventions 2025, 10(4), 52; https://doi.org/10.3390/inventions10040052
Submission received: 4 April 2025 / Revised: 29 May 2025 / Accepted: 5 June 2025 / Published: 4 July 2025

Abstract

The automatic recognition of animal vocalizations is a valuable tool for monitoring pigs’ behavior, health, and welfare. This study investigates the feasibility of implementing a convolutional neural network (CNN) model for classifying pig vocalizations using tiny machine learning (TinyML) on a low-cost, resource-constrained embedded system. The dataset was collected in 2011 on an experimental pig farm at the University of Illinois Urbana-Champaign. In this experiment, 24 piglets were housed in environmentally controlled rooms and exposed to gradual thermal variations. Vocalizations were recorded using directional microphones, processed to reduce background noise, and categorized into “agonistic” and “social” behaviors using a CNN model developed on the Edge Impulse platform. Despite hardware limitations, the proposed approach achieved an accuracy of over 90%, demonstrating the potential of TinyML for real-time behavioral monitoring. These findings underscore the practical benefits of integrating TinyML into swine production systems, enabling early detection of issues that may impact animal welfare, reducing reliance on manual observations, and enhancing overall herd management.

1. Introduction

Machine learning (ML) techniques are revolutionizing data analysis by enabling the rapid processing of large datasets and uncovering patterns that are often difficult to detect using traditional methods [1]. In agriculture, ML has been extensively applied to insect management, soil health prediction, crop yield estimation, and the monitoring of animal behavior [1,2,3,4,5].
Among these applications, ML-based behavior classification and welfare assessment have become particularly relevant in swine production, one of the most prominent sectors of global livestock farming. The growing demand for animal protein has driven the expansion of intensive pig farming systems [6], which are characterized by high productivity, reduced labor costs, and limited space per animal. However, this intensification raises critical concerns regarding animal welfare [7,8].
Animal welfare is a fundamental pillar of modern livestock systems, as its compromise can negatively impact both animal health and productivity. According to the Welfare Quality® protocol [9], animal welfare is based on four key principles: good feeding, good housing, good health, and appropriate behavior. The latter encompasses the expression of social behaviors, positive human–animal interactions, and favorable emotional states [10]. In swine production, vocalizations have emerged as key indicators of health and welfare, as they are closely associated with specific behavioral events [11]. For example, coughing may indicate respiratory illness, whereas squealing is often linked to environmental stress, aggression, or pain [12,13].
Vocal pattern analysis provides valuable insights into social behaviors through the assessment of sound frequency, duration, and amplitude. Low-pitched vocalizations, such as grunts, may reflect social bonding [14], whereas changes in pitch and intensity can reveal stress or discomfort. Stressful situations, such as isolation, castration, or weaning, typically elicit higher-pitched, more frequent, and prolonged vocalizations. Additionally, high-pitched calls may indicate food deprivation [14]. Nevertheless, direct human observation presents limitations, such as observer bias and interference with natural behavior.
In light of these challenges, sound-based monitoring technologies have emerged as promising tools in precision livestock farming, particularly due to advancements in sensor technology and data processing [15]. However, the acoustic environment in swine facilities is complex, with overlapping sounds [16], thus requiring robust classification algorithms. Recent studies have employed ML to correlate specific vocalizations with behavioral states. For instance, Yin et al. [17] used convolutional neural networks (CNNs) to detect coughs, achieving 96.8% accuracy in identifying respiratory diseases. Liao et al. [18] developed TransformerCNN, which combines CNN and transformer layers, reaching 96.05% accuracy. Hou et al. [19] and Pann et al. [20] reported classification accuracies above 93% for grunts, squeals, and coughs.
Despite these advances, most models have not been adapted for deployment on low-cost embedded systems, which are essential for real-time monitoring in commercial farms [17]. As highlighted by Reza et al. [15], key challenges include achieving high accuracy while minimizing costs. A promising solution is tiny machine learning (TinyML), which enables the execution of sophisticated ML models on microcontrollers and IoT devices, overcoming constraints related to hardware, memory, and processing power [21,22].
In this context, the primary objective of this study is to develop an automated system to classify agonistic and social behaviors in pigs through vocalization analysis using accessible computing hardware. The proposed CNN-based model is implemented on embedded devices such as smartphones via TinyML, aiming to provide a feasible solution for real-time welfare monitoring.

2. Materials and Methods

The dataset used in this study was collected in 2011 at the experimental pig facility of the University of Illinois Urbana-Champaign. Despite the age of the data, pig behavioral responses to temperature variations and social interactions remain consistent, making this dataset suitable for the proposed analysis.

2.1. Experimental Setup

The study was conducted over two weeks in June and July 2011 at the Experimental Swine Unit of the University of Illinois Urbana-Champaign. The pigs were housed in a climate-controlled facility with four identical rooms, each measuring 9.30 m (L) × 8.5 m (W) × 2.2 m (H), with insulated walls and ceilings to ensure thermal stability.
Twenty-four newly weaned Landrace × Large White piglets (19 days old) were selected, separated by sex, and randomly distributed into four pens, each housing six animals (three males and three females). After a seven-day acclimatization period, the groups underwent a dominance re-establishment phase under controlled thermal conditions.
Each pen (Figure 1) was equipped with a drinker and three ad libitum feeders. A unidirectional microphone was placed 80 cm above the floor to record sound. All procedures were approved by the Institutional Animal Care and Use Committee (IACUC) under protocol 11083.
Treatments were administered at two-day intervals, with temperatures gradually increasing. The first treatment, conducted during the first week, consisted of alternating days of thermal comfort (27 °C) and moderate heat (35 °C). The second treatment, carried out during the second week, involved alternating days of thermal comfort (24 °C) and high heat (34 °C). In both treatments, the temperature increased by 2 °C every three hours.

2.2. Audio Acquisition

A unidirectional cardioid microphone (XM8500, Behringer Inc., Bothell, WA, USA) was used to collect the animals’ vocalizations. The microphone was connected to a signal amplifier (Micropower PS400, Behringer Inc., Bothell, WA, USA), which was in turn connected to the sound card of a microcomputer. Unidirectional microphones capture sound from a single direction, making them more responsive to animal sound stimuli while reducing noise from outside the pen. All recordings were saved in WAV (waveform audio file) format, each ten seconds long. The audio files were grouped by time of day into four periods: dawn (00:00–05:55), morning (06:00–11:55), afternoon (12:00–17:55), and night (18:00–23:55). For this study, recordings from two days were used, with samples randomly distributed across the four time periods and thermal treatments.
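The grouping rule maps a recording’s start time to one of the four periods. A minimal Python sketch, assuming each file carries a timestamp (the helper name and example are illustrative, not the study’s actual tooling):

```python
from datetime import datetime

def period_of_day(ts: datetime) -> str:
    """Map a recording timestamp to the study's four diurnal periods."""
    if ts.hour < 6:
        return "dawn"       # 00:00-05:55
    if ts.hour < 12:
        return "morning"    # 06:00-11:55
    if ts.hour < 18:
        return "afternoon"  # 12:00-17:55
    return "night"          # 18:00-23:55

# Example: a recording started at 14:30 falls in the afternoon period.
print(period_of_day(datetime(2011, 6, 15, 14, 30)))  # -> afternoon
```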

2.3. Audio Processing

A preliminary analysis of the database was conducted to select the relevant vocalizations. The recordings were categorized based on the following observed behaviors: noises related to approaching the feeder, noises related to approaching the drinker, social vocalizations, agonistic vocalizations, and others (defined as recordings containing multiple behavioral categories simultaneously). All behaviors were identified based on the ethogram described by Massari et al. [23].
From the selected database, 25 audio files were labeled as ‘agonistic’ and another 25 as ‘social’. The aim of this study was not to evaluate the impact of heat stress, but rather to identify two types of behavior (agonistic and social) using machine learning techniques. These files were imported into Audacity® for processing. To reduce environmental noise and retain only information relevant to behavioral analysis using the convolutional neural network (CNN) model, a band-pass filter with cutoff frequencies of 650 Hz and 8 kHz was applied. The upper cutoff is consistent with the sampling theorem [24], which requires the sampling frequency to be at least twice the highest frequency in the signal; an 8 kHz band limit therefore corresponds to the Nyquist limit of 16 kHz audio.
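For illustration, the same 650 Hz to 8 kHz band-pass step can be reproduced outside Audacity with SciPy. This is a minimal sketch assuming a WAV input; the filter order, zero-phase filtering, and file name are our choices, not reported in the study:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def bandpass(signal: np.ndarray, fs: int, low: float = 650.0,
             high: float = 8000.0, order: int = 4) -> np.ndarray:
    """Zero-phase Butterworth band-pass between `low` and `high` Hz."""
    nyq = fs / 2.0
    high = min(high, 0.99 * nyq)  # keep the cutoff strictly below Nyquist
    sos = butter(order, [low / nyq, high / nyq], btype="band", output="sos")
    return sosfiltfilt(sos, signal)

fs, audio = wavfile.read("vocalization.wav")  # hypothetical file name
filtered = bandpass(audio.astype(np.float64), fs)
```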
During audio processing, segments of silence were detected and removed to ensure that the resulting audio contained only agonistic or social sounds. Silence removal settings used a threshold of −20 dB and a minimum duration of 0.5 s. Each audio clip was trimmed to a maximum length of two seconds so that only the vocalizations associated with each behavior were retained for training the model. Shorter clips capture less variability in sound, improving classification accuracy and allowing a greater number of training samples to be generated. After processing and compiling the audio clips into uniform time units, the final dataset contained 3 min and 46 s of audio. These clips were formatted to meet the input requirements of the machine learning platform used for model training.
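A comparable silence-removal and trimming pass can be sketched with librosa, using the −20 dB threshold, 0.5 s minimum duration, and 2 s cap described above (the study performed this step in Audacity, so the 16 kHz rate and frame parameters below are approximations):

```python
import librosa

def extract_vocalizations(path: str, max_len_s: float = 2.0,
                          min_len_s: float = 0.5, top_db: float = 20.0):
    """Return non-silent clips of at most `max_len_s` seconds."""
    y, sr = librosa.load(path, sr=16000)      # 16 kHz assumed
    # Intervals within `top_db` dB of the peak are treated as sound.
    intervals = librosa.effects.split(y, top_db=top_db)
    clips = []
    for start, end in intervals:
        if (end - start) / sr >= min_len_s:   # drop very short bursts
            clips.append(y[start:start + int(max_len_s * sr)])  # cap at 2 s
    return clips, sr
```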
The final dataset consisted of 2 min and 38 s of training data (1 min and 11 s of ‘social’ and 1 min and 27 s of ‘agonistic’) and 1 min and 8 s of testing data (54 s of ‘social’ and 54 s of ‘agonistic’). However, due to the noise reduction step during processing, where some samples underwent more extensive filtering, it was not possible to achieve a perfectly balanced distribution between the two classes during training and testing.

2.4. Model Development

Model development was carried out using Edge Impulse, an artificial intelligence (AI) platform that enables the deployment of advanced machine learning models on embedded devices via TinyML [25], eliminating the need for external processing and increasing energy efficiency [26].
To balance the class distribution, undersampling was performed by randomly selecting samples from the majority class until both classes had equal representation. The dataset was then split into training (60%) and testing (40%) sets with random stratification, using the data-handling tools provided by Edge Impulse. The model was trained for 250 epochs with a batch size of 128, using the stochastic gradient descent (SGD) optimizer with a learning rate of 0.01. No early stopping criteria were applied, and the hardware/software environment remained fixed throughout the experiments.
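The balancing and split can be expressed with NumPy and scikit-learn as a rough equivalent (Edge Impulse performs these steps on-platform; the synthetic X and y below are placeholders, not the study’s data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1600))  # placeholder feature vectors
y = np.array([0] * 60 + [1] * 40)     # 0 = agonistic, 1 = social (placeholder)

def undersample(X, y, seed=0):
    """Randomly trim every class to the size of the minority class."""
    r = np.random.default_rng(seed)
    n = np.bincount(y).min()
    keep = np.concatenate([r.choice(np.flatnonzero(y == c), n, replace=False)
                           for c in np.unique(y)])
    return X[keep], y[keep]

X_bal, y_bal = undersample(X, y)
X_train, X_test, y_train, y_test = train_test_split(
    X_bal, y_bal, test_size=0.40, stratify=y_bal, random_state=0)
```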
A convolutional neural network (CNN) architecture was used to train the animal vocalization classification model. This architecture was selected because it allows audio signals to be converted into spectrogram images for pattern recognition, thereby improving classification accuracy [17]. Figure 2 illustrates the proposed CNN architecture. The input layer (serving_default_x_0) accepts an input of size 1 × 16,000 (the original audio length), which is then reshaped into a 2D format of 1 × 50 × 32 × 1 to facilitate convolutional operations.
The first Conv2D layer applies eight filters of size 3 × 3, followed by a ReLU activation function, producing an output of 1 × 50 × 32 × 8. A Max-Pooling layer then reduces the spatial dimensions, while preserving features, to 1 × 25 × 16 × 8. The second Conv2D layer increases the depth to 16 channels using 16 filters (3 × 3), followed by another ReLU activation and Max-Pooling, resulting in an output of 1 × 13 × 8 × 16. The third Conv2D layer, with 32 filters (3 × 3) and ReLU activation, extracts more complex features. After another Max-Pooling operation, the output is 1 × 7 × 4 × 32. The fourth and final Conv2D layer increases the depth to 64 channels using 64 filters (3 × 3), followed by Max-Pooling, which reduces the output size to 1 × 4 × 2 × 64.
The Reshape layer flattens the 3D feature maps into a 1D vector of size 1 × 512 for classification. This vector is passed through a fully connected (dense) layer with 512 neurons. Finally, a Softmax layer outputs a probability vector of size 1 × 2, classifying the input into one of two categories: ‘agonistic’ or ‘social’.
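A minimal Keras sketch of this architecture follows. Because a 50 × 32 × 1 tensor holds 1,600 values, a spectrogram-style feature-extraction stage (such as Edge Impulse’s MFE block, an assumption here) is taken to map the 16,000-sample waveform to the network input; the padding choices are likewise inferred from the reported layer shapes:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1600,)),                  # 50 x 32 features (assumed)
    tf.keras.layers.Reshape((50, 32, 1)),
    tf.keras.layers.Conv2D(8, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2), padding="same"),  # -> 25 x 16 x 8
    tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2), padding="same"),  # -> 13 x 8 x 16
    tf.keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2), padding="same"),  # -> 7 x 4 x 32
    tf.keras.layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2), padding="same"),  # -> 4 x 2 x 64
    tf.keras.layers.Flatten(),                             # -> 512
    # The 512-element flattened vector feeds a 2-unit dense + softmax
    # classifier (one reading of the "512-neuron" fully connected stage).
    tf.keras.layers.Dense(2, activation="softmax"),        # agonistic / social
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=250, batch_size=128)  # as reported above
```

The commented fit call mirrors the 250-epoch, batch-size-128 SGD configuration reported above.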
After completing the training phase, the developed model was converted into a TinyML model—a compressed version optimized for deployment on embedded devices. Subsequently, the TinyML model was implemented in an IoT device simulator to evaluate its performance under real-world conditions (Figure 3).
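The compression step can be sketched with the standard TensorFlow Lite converter, continuing from the training sketch above (Edge Impulse automates this step; the quantization settings and file name are assumptions):

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # model from above
converter.optimizations = [tf.lite.Optimize.DEFAULT]         # enable quantization

def representative_data():
    """Yield calibration samples so internal tensors can be quantized."""
    for sample in X_train[:100]:
        yield [sample.reshape(1, -1).astype("float32")]

converter.representative_dataset = representative_data
tflite_model = converter.convert()
with open("swine_vocalization.tflite", "wb") as f:           # hypothetical name
    f.write(tflite_model)
```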
The model’s performance was assessed using the following metrics: accuracy, precision, F1-score, and sensitivity. Accuracy measures the overall correctness of the model by calculating the ratio of correctly predicted samples (true positives and true negatives) to the total number of samples. Precision quantifies the proportion of true positive predictions among all positive predictions made by the model. It is calculated as the number of true positives divided by the sum of true positives and false positives. Sensitivity (also known as recall) measures the model’s ability to correctly identify positive instances. It is computed by dividing the number of true positives by the sum of true positives and false negatives. The F1-score is the harmonic mean of precision and sensitivity. It provides a balanced evaluation by considering both precision and recall, which is particularly important when dealing with imbalanced datasets.
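These definitions correspond to standard scikit-learn calls; a brief, illustrative sketch continuing from the blocks above (Edge Impulse reports the same metrics on-platform):

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_pred = model.predict(X_test).argmax(axis=1)  # model, X_test from above
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average=None))
print("recall   :", recall_score(y_test, y_pred, average=None))  # sensitivity
print("F1-score :", f1_score(y_test, y_pred, average=None))
```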

3. Results

During training, the model underwent multiple cycles of synaptic weight adjustment based on the error calculated in each iteration to optimize its accuracy [27], ultimately achieving a training accuracy of 96.6%. Analysis of the confusion matrix (Table 1) revealed that the model correctly classified 100% of the agonistic vocalizations and 93.8% of the social vocalizations. These results indicate that the model effectively learned the extracted audio features, accurately distinguishing between agonistic and social behaviors.
When the trained model was applied to the test dataset, data not used during training, it achieved an accuracy of 92.08%. The confusion matrix for the test dataset (Table 2) showed that agonistic behavior was correctly classified 85.2% of the time, with 14.8% misclassified as social behavior. In contrast, all instances of social behavior were correctly identified (100%).
Table 3 presents the performance metrics for the test dataset. The model achieved a precision of 85.2% for agonistic behavior, indicating that 85.2% of the samples predicted as agonistic were correctly classified. However, some social vocalizations were misclassified as agonistic. For social behavior, the model achieved a precision of 100%, meaning all predicted instances of social behavior were correct.
Sensitivity followed a similar trend. Social behavior had a sensitivity of 100%, indicating that all actual instances were correctly identified. Agonistic behavior had a sensitivity of 85.5%, meaning that while most instances were correctly detected, some were missed. The F1-score, which balances precision and sensitivity, was 92.0% for agonistic behavior and 100.0% for social behavior. These metrics underscore the model’s overall effectiveness in distinguishing between the two behavioral classes.
The compressed TinyML model, based on the previously trained CNN, was simulated using Edge Impulse’s online platform (Figure 4). During simulation, the model exhibited a confidence level of 0.90 when classifying an unseen audio sample as agonistic (Figure 4a) and 0.85 for social behavior (Figure 4c). Notably, when the confidence level dropped below 0.50, the model classified the input as “No event detected” (Figure 4b). The simulation also revealed an inference time of 244 ms, with memory usage of 23.1 KB RAM and 72.7 KB Flash.
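On-device inference with the 0.50 cutoff can be sketched with the standard TFLite interpreter; the threshold logic and label order below are reconstructed from the reported behavior, not taken from the study’s code:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="swine_vocalization.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(features: np.ndarray) -> str:
    """Classify a feature vector, reporting 'No event detected' below 0.50."""
    interpreter.set_tensor(inp["index"], features.astype(np.float32)[None, :])
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    if probs.max() < 0.50:
        return "No event detected"
    return ["agonistic", "social"][int(probs.argmax())]  # label order assumed
```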
Overall, the model demonstrated strong performance in behavior classification, maintaining reliability even when deployed on a resource-constrained mobile device.

4. Discussion

The classification of vocalization patterns yielded satisfactory results, with the convolutional neural network (CNN) model achieving over 90% accuracy, even when deployed on a low-cost platform with limited memory and hardware resources. Similar levels of accuracy were reported by Yin et al. [17], who achieved 96.8% in classifying cough sounds in pigs. However, those authors did not implement their complex CNN model on constrained hardware, which underscores the feasibility of the TinyML approach presented in this study for real-time animal vocalization monitoring in environments such as farms, where computing resources may be limited.
Nevertheless, classification errors, particularly for agonistic behavior, highlight the need for a more comprehensive analysis in future phases, including augmenting the dataset with recordings collected on additional days. Despite careful selection and organization of audio samples to differentiate between the two behaviors, it is possible that low-intensity agonistic vocalizations were misclassified as social interactions. Additionally, although background noise was minimized through filtering and the use of unidirectional microphones, some residual noise may still have contributed to misclassification.
These challenges are consistent with findings by Hou et al. and Reza et al. [15,19], who, although studying different behaviors, also reported that low-intensity pig grunts could be difficult to distinguish from coughing or vocal syncope. Moreover, interference such as background noise or human activity was found to negatively impact classification accuracy in their studies as well.
Beyond classification accuracy, the use of TinyML in behavioral monitoring systems holds significant promise for real-time pig monitoring. In contrast to traditional monitoring systems that require sophisticated machine learning models and external processing units, TinyML enables on-device processing, reducing energy consumption and eliminating the need for high-bandwidth data transmission. This advantage was also emphasized by other authors [26,28,29], who explored TinyML’s application in low-power embedded systems.
Consequently, TinyML emerges as a viable, cost-effective, remote, and continuous monitoring solution for pig producers. Real-time analysis of vocalization patterns can assist in identifying signs of stress, resource competition, or fighting, thus providing critical information for informed herd management. Future studies should explore improvements such as adaptive noise filtering, real-time model retraining, and dataset augmentation to enhance classification robustness across varying environmental conditions and to include a broader range of animal behaviors in the model.

5. Conclusions

This study demonstrated the feasibility of employing tiny machine learning (TinyML) to recognize pig vocalizations, achieving over 90% classification accuracy despite implementation on a low-cost, low-memory embedded system. These results highlight the potential of TinyML-based vocal monitoring for real-time behavioral classification in animals, offering a scalable and efficient alternative to more complex machine learning solutions.
Future research should focus on testing and refining this methodology by incorporating datasets with greater variability in vocalizations and background noise. Next steps include developing hardware prototypes for real-time application and further improving model performance. Integrating TinyML-based vocalization recognition into farm management practices holds great potential to facilitate early detection of health or welfare issues in livestock, reduce reliance on manual monitoring, and ultimately enhance animal welfare outcomes.

Author Contributions

Conceptualization, T.C.W., G.B.d.M. and K.O.d.S.-M.; methodology, T.C.W., C.F.F. and L.M.B.; software, T.C.W. and L.M.B.; validation, T.C.W., C.F.F. and L.M.B.; formal analysis, L.M.B., G.B.d.M. and M.d.N.A.; investigation, T.C.W. and C.F.F.; resources, K.O.d.S.-M.; data curation, T.C.W. and K.O.d.S.-M.; writing—original draft preparation, T.C.W.; writing—review and editing, L.M.B. and M.d.N.A.; visualization, L.M.B., G.B.d.M. and M.d.N.A.; supervision, K.O.d.S.-M.; project administration, K.O.d.S.-M.; funding acquisition, K.O.d.S.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the São Paulo Research Foundation (FAPESP), grant numbers 19/12013-6 and 21/07127-2, and by the Fundação de Estudos Agrários Luiz de Queiroz (Fealq).

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN      Convolutional Neural Network
TinyML   Tiny Machine Learning

References

1. Romano, D. Novel Automation, AI, and Biomimetic Engineering Advancements for Insect Studies and Management. Curr. Opin. Insect Sci. 2025, 68, 101337.
2. Amorim, M.N.; Turco, S.H.N.; dos Santos Costa, D.; Ferreira, I.J.S.; da Silva, W.P.; Sabino, A.L.C.; da Silva-Miranda, K.O. Discrimination of ingestive behavior in sheep using an electronic device based on a triaxial accelerometer and machine learning. Comput. Electron. Agric. 2024, 218, 108657.
3. Wilhelm, R.C.; van Es, H.M.; Buckley, D.H. Predicting measures of soil health using the microbiome and supervised machine learning. Soil Biol. Biochem. 2022, 164, 108472.
4. Dhaliwal, D.S.; Williams, M.M. Sweet corn yield prediction using machine learning models and field-level data. Precis. Agric. 2024, 25, 51–64.
5. Amorim, M.N.; dos Santos Costa, D.; dos Santos Harada, É.; da Silva, W.P.; Turco, S.H.N. Performance of electronic device and different visual observation intervals in assessing feeding behavior in sheep. Comput. Electron. Agric. 2025, 231, 110053.
6. Zhang, L.; Mao, Y.; Chen, Z.; Hu, X.; Wang, C.; Lu, C.; Wang, L. A systematic review of life-cycle GHG emissions from intensive pig farming: Accounting and mitigation. Sci. Total Environ. 2024, 907, 168112.
7. Moreira, M.D.R.; Trabachini, A.; Amorim, M.D.N.; Harada, É.D.S.; da Silva, M.A.; Silva-Miranda, K.O.D. The Perception of Brazilian Livestock Regarding the Use of Precision Livestock Farming for Animal Welfare. Agriculture 2024, 14, 1315.
8. Trabachini, A.; Dias, C.S.; Moreira, M.R.; Wen, T.C.; Caneppele, F.L.; Harada, É.S.; Amorim, M.N.; Miranda, K.O.S. Automation to improve pig welfare using fuzzy logic. Rev. Bras. Ciênc. Agrár. 2024, 19, e3532.
9. Welfare Quality® Protocol. Welfare Quality® Assessment Protocol for Poultry (Broilers, Laying Hens); Welfare Quality® Consortium: Lelystad, The Netherlands, 2009; Available online: https://edepot.wur.nl/233471 (accessed on 23 May 2024).
10. Maes, D.G.; Dewulf, J.; Piñeiro, C.; Edwards, S.; Kyriazakis, I. A critical reflection on intensive pork production with an emphasis on animal health and welfare. J. Anim. Sci. 2020, 98, S15–S26.
11. Heseker, P.; Bergmann, T.; Scheumann, M.; Traulsen, I.; Kemper, N.; Probst, J. Detecting tail biters by monitoring pig screams in weaning pigs. Sci. Rep. 2024, 14, 4523.
12. Olczak, K.; Penar, W.; Nowicki, J.; Magiera, A.; Klocek, C. The role of sound in livestock farming—Selected aspects. Animals 2023, 13, 2307.
13. Xie, Y.; Wang, J.; Chen, C.; Yin, T.; Yang, S.; Li, Z.; Zhang, Y.; Ke, J.; Song, L.; Gan, L. Sound identification of abnormal pig vocalizations: Enhancing livestock welfare monitoring on smart farms. Inf. Process. Manag. 2024, 61, 103770.
14. Matthews, S.G.; Miller, A.L.; Clapp, J.; Plötz, T.; Kyriazakis, I. Early detection of health and welfare compromises through automated detection of behavioural changes in pigs. Vet. J. 2016, 217, 43–51.
15. Reza, M.N.; Ali, M.R.; Haque, M.A.; Jin, H.; Kyoung, H.; Choi, Y.K.; Kim, G.; Chung, S.O. A review of sound-based pig monitoring for enhanced precision production. J. Anim. Sci. Technol. 2025, 67, 277.
16. Wang, X.; Yin, Y.; Dai, X.; Shen, W.; Kou, S.; Dai, B. Automatic detection of continuous pig cough in a complex piggery environment. Biosyst. Eng. 2024, 238, 78–88.
17. Yin, Y.; Tu, D.; Shen, W.; Bao, J. Recognition of sick pig cough sounds based on convolutional neural network in field situations. Inf. Process. Agric. 2021, 8, 369–379.
18. Liao, J.; Li, H.; Feng, A.; Wu, X.; Luo, Y.; Duan, X.; Ni, J.; Li, J. Domestic pig sound classification based on TransformerCNN. Appl. Intell. 2023, 53, 4907–4923.
19. Hou, Y.; Li, Q.; Wang, Z.; Liu, T.; He, Y.; Li, H.; Ren, Z.; Guo, X.; Yang, G.; Liu, Y.; et al. Study on a Pig Vocalization Classification Method Based on Multi-Feature Fusion. Sensors 2024, 24, 313.
20. Pann, V.; Kwon, K.S.; Kim, B.; Jang, D.H.; Kim, J.B. DCNN for Pig Vocalization and Non-Vocalization Classification: Evaluate Model Robustness with New Data. Animals 2024, 14, 2029.
21. Tsoukas, V.; Gkogkidis, A.; Boumpa, E.; Kakarountas, A. A Review on the Emerging Technology of TinyML. ACM Comput. Surv. 2024, 56, 1–37.
22. Lin, J.; Zhu, L.; Chen, W.M.; Wang, W.C.; Han, S. Tiny Machine Learning: Progress and Futures [Feature]. IEEE Circuits Syst. Mag. 2023, 23, 8–34.
23. Massari, J.M.; Curi, T.M.R.D.C.; De Moura, D.J.; Medeiros, B.B.L.; Salgado, D.D.A. Behavioral characteristics of different gender division of growing and finishing swine in “wean to finish” system. Eng. Agríc. 2015, 35, 646–656.
24. Silva, J.P.; de Alencar Nääs, I.; Abe, J.M.; da Silva Cordeiro, A.F. Classification of piglet (Sus Scrofa) stress conditions using vocalization pattern and applying paraconsistent logic Eτ. Comput. Electron. Agric. 2019, 166, 105020.
25. Advanced ML for Every Solution. Available online: https://www.edgeimpulse.com (accessed on 1 April 2025).
26. Hymel, S.; Banbury, C.; Situnayake, D.; Elium, A.; Ward, C.; Kelcey, M.; Baaijens, M.; Majchrzycki, M.; Plunkett, J.; Tischler, D.; et al. Edge Impulse: An MLOps Platform for Tiny Machine Learning. arXiv 2022. Available online: http://arxiv.org/abs/2212.03332 (accessed on 1 April 2025).
27. Sharma, S.; Chaudhary, P. Machine learning and deep learning. In Quantum Computing and Artificial Intelligence: Training Machine and Deep Learning Algorithms on Quantum Computers; Walter de Gruyter: Berlin, Germany, 2023; pp. 71–84.
28. Iodice, G.M.; Naughton, R. TinyML Cookbook: Combine Artificial Intelligence and Ultra-Low-Power Embedded Devices to Make the World Smarter, 1st ed.; Packt Publishing Ltd.: Birmingham, UK, 2022.
29. Lacamera, D. Embedded Systems Architecture; Packt Publishing Ltd.: Birmingham, UK, 2018.
Figure 1. Top view of the animal pens.
Figure 2. Overall structure of the proposed convolutional neural network architecture model.
Figure 3. Steps used to build the swine vocalization classifier based on the TinyML application.
Figure 4. Simulation of TinyML model on a mobile device, illustrating: (a) when the behavior is classified as agonistic; (b) when no event is detected; and (c) when the behavior is classified as social.
Table 1. Confusion matrix from the training phase, showing predictions for agonistic and social behaviors.

                         True: Agonistic    True: Social
Predicted: Agonistic          100.0%             0.0%
Predicted: Social               6.3%            93.8%
Table 2. Confusion matrix from the testing phase, showing predictions for agonistic and social behaviors.

                         True: Agonistic    True: Social
Predicted: Agonistic           85.2%            14.8%
Predicted: Social               0.0%           100.0%
Table 3. Performance metrics (precision, sensitivity, and F1-score) based on the test dataset.

             Precision    Sensitivity    F1-Score
Agonistic      85.2%         85.5%         92.0%
Social        100.0%        100.0%        100.0%

