Review

Advances in Audio-Based Artificial Intelligence for Respiratory Health and Welfare Monitoring in Broiler Chickens

1
Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea
2
Department of Animal Science and Veterinary Medicine, Gopalganj Science and Technology University, Gopalganj 8105, Bangladesh
3
Department of Multimedia Engineering, Sunchon National University, Suncheon 57922, Republic of Korea
4
Interdisciplinary Program in IT-Bio Convergence System (BK21 Plus), Sunchon National University, Suncheon 57922, Republic of Korea
5
Department of Poultry Science, Sylhet Agricultural University, Sylhet 3100, Bangladesh
6
Interdisciplinary Program in IT-Bio Convergence System (BK21 Plus), Chonnam National University, Gwangju 61186, Republic of Korea
7
School Education Department, Narowal 51600, Pakistan
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work as co-first authors.
Submission received: 6 January 2026 / Revised: 27 January 2026 / Accepted: 2 February 2026 / Published: 4 February 2026
(This article belongs to the Section AI Systems: Theory and Applications)

Abstract

Respiratory diseases and welfare impairments impose substantial economic and ethical burdens on modern broiler production, driven by high stocking density, rapid pathogen transmission, and limited sensitivity of conventional monitoring methods. Because respiratory pathology and stress directly alter vocal behavior, acoustic monitoring has emerged as a promising non-invasive approach for continuous flock-level surveillance. This review synthesizes recent advances in audio classification and artificial intelligence for monitoring respiratory health and welfare in broiler chickens. We review the anatomical basis of sound production, characterize key vocal categories relevant to health and welfare, and summarize recording strategies, datasets, acoustic features, machine-learning and deep-learning models, and evaluation metrics used in poultry sound analysis. Evidence from experimental and commercial settings demonstrates that AI-based acoustic systems can detect respiratory sounds, stress, and welfare changes with high accuracy, often enabling earlier intervention than traditional methods. Finally, we discuss current limitations, including background noise, data imbalance, limited multi-farm validation, and challenges in interpretability and deployment, and outline future directions for scalable, robust, and practical sound-based monitoring systems in broiler production.

Graphical Abstract

1. Introduction

Respiratory diseases remain one of the most significant constraints in broiler production due to their high prevalence, rapid transmission, and severe economic impact. Many broiler farms worldwide report frequent respiratory outbreaks, often with morbidity approaching 100% and substantial losses from reduced growth, poor feed conversion, increased mortality, and carcass condemnation [1,2,3,4,5,6]. Complex respiratory syndromes involving combinations of viral (e.g., Infectious Bronchitis Virus (IBV), Newcastle Disease Virus (NDV), avian influenza, Avian Metapneumovirus (aMPV), Infectious Laryngotracheitis Virus (ILTV)) and bacterial pathogens (e.g., E. coli, Mycoplasma spp., Ornithobacterium rhinotracheale) further exacerbate disease severity, sometimes pushing mortality above 30% [7,8,9,10]. These outcomes are intensified by high stocking density, poor ventilation, elevated ammonia levels, and inadequate biosecurity, while control efforts are complicated by antigenic variation, frequent co-infections, incomplete vaccine protection, and growing antimicrobial resistance [9,11]. As a result, respiratory disease outbreaks have caused multimillion-dollar losses in broiler industries across several countries [1,3,12,13].
Early detection of respiratory diseases is critical because broilers have a very short production cycle, leaving little opportunity for recovery once growth is impaired. Respiratory outbreaks often begin around 2–3 weeks of age and, if uncontrolled, can raise mortality above 10% within a single cycle, with irreversible effects on performance and profitability [8,14]. Rapid pathogen spread and frequent co-infections accelerate disease progression and worsen lesions, emphasizing the need for timely intervention [3,4,15]. Although molecular diagnostics such as Polymerase Chain Reaction (PCR), multiplex assays, and Loop-Mediated Isothermal Amplification (LAMP) offer high sensitivity and specificity, they depend on laboratory infrastructure, skilled personnel, and sample transport, leading to delays that limit their value for immediate on-farm decision-making [16,17,18]. Field studies in countries such as Bangladesh and Tunisia demonstrate that early molecular screening can detect high viral loads during the initial phase of outbreaks, enabling faster control responses; however, such approaches remain impractical for continuous monitoring at scale [16,19]. Traditional visual observation is similarly limited by subjectivity, labor intensity, and low sensitivity to early or subtle clinical signs [20,21,22].
Acoustic monitoring has emerged as a promising alternative because respiratory pathology directly alters vocalizations, including coughs, sneezes, and rales, providing early and biologically meaningful signals [21,23,24,25]. Unlike vision systems or wearable sensors, audio monitoring is non-contact, inexpensive, and well-suited to continuous flock-level surveillance in dense group housing, without requiring individual identification or attachment of devices [25,26,27,28]. Numerous studies report high classification accuracies (>90–98%) for detecting respiratory sounds and diseases such as Newcastle disease and avian influenza using machine learning and deep learning models [23,24,26,29,30,31], and systematic reviews indicate that sound-based systems dominate poultry respiratory Precision Livestock Farming (PLF) applications due to their feasibility and diagnostic value [32]. Artificial intelligence (AI) enables the transformation of raw audio into actionable health indicators through noise reduction, segmentation, feature extraction (e.g., MFCCs (Mel-Frequency Cepstral Coefficients), spectrograms, chroma, temporal descriptors), and supervised classification using models such as CNNs (Convolutional Neural Networks), CNN-LSTMs (CNN–Long Short-Term Memory networks), and transfer learning frameworks [33,34,35,36,37,38]. These outputs can be aggregated into meaningful digital biomarkers, including cough rate–based flock health scores, disease severity classifications, and welfare or stress indicators, many of which can be deployed on low-cost edge devices for real-time monitoring [39,40,41,42].
Given the rapid expansion of audio-based monitoring and artificial intelligence in poultry research, a consolidated and critical synthesis focused specifically on broiler respiratory health and welfare is timely. This review seeks to (i) summarize the biological and acoustic basis linking vocalizations to respiratory disease and welfare states, (ii) review recording environments, sensor setups, and annotation strategies used in broiler sound studies, (iii) compare acoustic features and AI models applied to disease and welfare monitoring, and (iv) identify key limitations and research gaps that hinder large-scale, on-farm deployment.
The manuscript first outlines the physiological basis of sound production and its alteration by respiratory disease and stress. It then reviews data acquisition and annotation practices, followed by a synthesis of acoustic feature representations and AI modeling approaches. Applications in respiratory disease detection, welfare assessment, and growth monitoring are subsequently discussed, together with evaluation metrics and deployment constraints. The review concludes by highlighting critical challenges and future directions for translating audio-based AI systems into reliable commercial tools for broiler production.

2. Review Methodology

This review follows a structured narrative methodology rather than a formal PRISMA meta-analysis, as the objective is conceptual synthesis and methodological comparison. A comprehensive literature search was performed using Web of Science, Scopus, IEEE Xplore, and Google Scholar. Search queries included, but were not limited to: “broiler chicken respiratory sounds”, “poultry cough detection”, “audio-based disease detection”, “precision livestock farming”, “machine learning”, “deep learning”, “MFCC”, “CNN”, “LSTM”, and “audio spectrogram transformers”. Searches covered publications from 2020 to 2025, reflecting the recent advancement of digital audio processing and AI-based monitoring. Reference lists of key review and experimental papers were also screened to identify additional relevant studies. Titles and abstracts were initially screened for relevance, followed by full-text assessment of eligible articles. Rather than quantitatively aggregating performance metrics, the selected studies were qualitatively synthesized to compare feature representations, data characteristics, and deployment contexts. Emphasis was placed on identifying conceptual trends, methodological trade-offs, and recurring limitations, particularly those affecting model generalization, interpretability, and real-world applicability. To ensure methodological rigor and relevance to the scope of this review, explicit inclusion and exclusion criteria were applied during full-text screening. Studies were included if they met at least one of the following criteria:
  • Applied audio or acoustic data to assess respiratory health, disease, stress, or welfare in broiler chickens or closely related poultry species;
  • Developed or evaluated machine-learning or deep-learning models for chicken sound analysis;
  • Provided biologically relevant insights linking sound production to respiratory physiology, pathology, or welfare.
Studies were excluded if they:
  • Focused solely on non-audio sensing modalities (e.g., vision-only systems);
  • Addressed poultry production without respiratory, acoustic, or AI relevance;
  • Were non-peer-reviewed sources lacking methodological transparency.
For visualization purposes, audio recordings were obtained from the Mendeley Data poultry vocalization signal dataset for early disease detection [43]. Six representative samples were selected, including two healthy broiler vocalizations, two environmental noise recordings, and two unhealthy broiler sounds (the first two audio files from each folder were used). From each audio file, the first 3 s were extracted and resampled to 22.05 kHz. Two time–frequency representations were generated: (i) a standard spectrogram using the Short-Time Fourier Transform (STFT) and (ii) a Mel-spectrogram. The STFT was computed using a Hamming window with a window size of 2048 samples and a hop length of 512 samples. The magnitude spectra were converted to decibel (dB) scale using logarithmic amplitude compression. The Mel-spectrogram was computed using 128 Mel filter banks applied to the power spectrum obtained from the STFT, followed by conversion to the dB scale. These representations were used to visualize spectral differences between healthy, noisy, and unhealthy broiler sounds and to illustrate why Mel-based features dominate modern poultry audio analysis.
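A preprocessing pipeline of this kind can be sketched in plain NumPy. The parameters below mirror those stated in the text (Hamming window of 2048 samples, hop of 512, 128 Mel bands, 22.05 kHz); a synthetic tone stands in for the dataset audio, so this is an illustrative sketch rather than the exact pipeline used in the review.

```python
import numpy as np

def stft_db(x, n_fft=2048, hop=512):
    """Magnitude spectrogram in dB: Hamming-windowed STFT (window 2048, hop 512)."""
    win = np.hamming(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))           # shape: (frames, n_fft//2 + 1)
    return 20.0 * np.log10(np.maximum(mag, 1e-10))      # logarithmic amplitude compression

def mel_filterbank(n_mels=128, n_fft=2048, sr=22050):
    """Triangular Mel filters mapped onto the rfft frequency bins."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv(np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)   # rising slope
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)   # falling slope
    return fb

# Demo on a synthetic 3 s clip at 22.05 kHz (a real study would load dataset audio).
sr = 22050
t = np.arange(3 * sr) / sr
x = np.sin(2 * np.pi * 1500 * t)                        # tonal "vocalization" stand-in
spec_db = stft_db(x)                                    # STFT spectrogram in dB
power = (10.0 ** (spec_db / 20.0)) ** 2                 # back to power spectrum
mel_db = 10.0 * np.log10(np.maximum(power @ mel_filterbank().T, 1e-10))
```

The Mel representation compresses 1025 linear frequency bins into 128 perceptually spaced bands, which is the dimensionality reduction discussed in Section 6.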

3. Respiratory Health and Welfare Challenges in Broiler Chickens

Respiratory health challenges in broiler chickens are predominantly caused by viral pathogens and are frequently compounded by secondary bacterial infections, making diagnosis and control particularly complex. The most prevalent viral agents include IBV, Avian Influenza Virus (AIV; especially H9N2), and NDV, which commonly co-circulate in broiler flocks and contribute to high morbidity and mortality [1,13,15,44,45]. Other important viruses such as ILTV, aMPV, and Avian Pneumovirus (APV) further intensify respiratory disease burdens [3,7,19,46]. Bacterial pathogens, including Mycoplasma gallisepticum, Escherichia coli, Haemophilus paragallinarum, and Chlamydia, often exacerbate viral infections, leading to impaired respiratory function, reduced feed intake, increased mortality, and substantial economic losses [3,7,13]. Given the multifactorial nature of these syndromes, continuous surveillance, vaccination, and integrated control strategies are essential for effective management [3,44,45].
Respiratory diseases in broilers are associated with distinct abnormal sound manifestations that serve as valuable acoustic indicators of health status. Diseased birds typically produce pathological coughs, which differ markedly from normal short, sharp cheeps [47,48]. Sneezes are also widely used as primary indicators in automated respiratory monitoring systems [49]. In addition, snoring- or purring-like sounds—continuous, low-frequency, rumbling vocalizations—are commonly observed during rest and are linked to upper respiratory tract obstruction [25,47,48]. As respiratory disease progresses, birds may also exhibit labored breathing and tracheal rales, clinically described as harsh airway sounds accompanied by sneezing and nasal discharge [3,50,51]. These abnormal vocalizations are consistently exploited as audio biomarkers for detecting infectious bronchitis, Newcastle disease, and broader respiratory syndromes using machine-learning–based systems trained specifically on cough, sneeze, and snore/purr sounds [25,26,31,47,48,52].
Welfare-related stressors further influence vocalization patterns and the prevalence of respiratory sounds throughout the production cycle. Stocking density and social conditions have clearer acoustic effects, with isolated birds producing higher-energy alarm calls indicative of stress, while birds in larger groups show reduced vocal energy, reflecting improved welfare [53]. Even though heat stress appears to have limited direct effects on primary vocalization types, such as distress calls and peeps, it alters activity levels and behavior that may indirectly affect vocal output [54,55,56]. Similarly, direct evidence linking elevated ammonia (NH3) levels to specific vocal changes remains limited, although NH3-induced respiratory discomfort likely contributes to abnormal sound production. In addition to environmental factors, the age of broilers itself influences the sounds they produce. Respiratory sounds are generally infrequent during early life stages but increase markedly after three to four weeks of age, coinciding with higher pathogen exposure and physiological stress [31,48,57,58]. Studies consistently report higher detection rates of coughs, sneezes, and snore-like sounds in broilers older than 26 days, particularly during the finishing phase, highlighting the importance of age-aware acoustic monitoring strategies [57,58]. Figure 1 and Table 1 outline the major respiratory and welfare challenges in broiler chickens and their acoustic relevance.

4. Acoustic Characteristics of Broiler Chicken Vocalizations

Broiler chicken vocalizations, including both normal calls and abnormal respiratory sounds, are produced by the syrinx, an avian vocal organ located at the tracheobronchial junction within the interclavicular air sac [76,77]. In chickens, which are non-songbirds, sound generation primarily involves vibration of the lateral tympaniform membranes and associated labia that function analogously to mammalian vocal folds [77,78]. During expiration, increased air-sac pressure drives airflow through the syrinx, causing membrane vibration via a myoelastic–aerodynamic mechanism, with pitch governed by tissue tension and loudness by vibration amplitude [77,79]. Although most sounds are expiratory, inspiratory phonation has also been reported in birds [76,77,80]. Importantly, both normal vocalizations and pathological respiratory sounds such as coughs and sneezes originate from the same syringeal mechanism, with the larynx, trachea, and oral cavity acting as a vocal tract filter that shapes the final acoustic signal [76,78,80]. The principal vocal structures involved in broiler sound production and their functional roles are summarized in Table 2.
Coughs and sneezes differ acoustically from normal broiler vocalizations in both temporal and spectral characteristics. Coughs and sneezes show sharper onset, higher peak amplitude, and broader frequency bandwidths, reflecting sudden expulsions of air caused by respiratory irritation or obstruction [85,86]. Sneezes are typically brief, high-energy, impulsive sounds with distinct broadband spectral signatures, making them detectable even under noisy farm conditions [49]. In contrast, normal calls, including cheeps, squawks, and distress vocalizations, are generally longer in duration and more tonal, exhibiting structured harmonic patterns and regular frequency modulation [25,52]. These consistent acoustic differences enable machine-learning systems to distinguish pathological respiratory sounds from normal vocal behavior with high reliability [25,49,52].
Stress and pain also modulate broiler vocal behavior by altering call rate, intensity, and spectral structure. Distress calls associated with negative affective states are typically repetitive, high in energy, and exhibit increased tonality with reduced spectral entropy, correlating with key welfare indicators such as growth performance and mortality [67]. Acute stressors may transiently suppress vocal output, whereas chronic stressors such as food deprivation increase call frequency and modify spectral features including centroid and bandwidth [68,87]. Age further modulates these responses, with younger birds showing more pronounced acoustic changes than older individuals [88]. Social context plays an important buffering role, as maternal contact and group housing reduce high-intensity distress calling, highlighting the context-dependent nature of vocal responses to stress and pain [89].
Environmental noise generated by ventilation fans, feeders, heaters, and other mechanical equipment presents a major challenge for accurate detection of broiler respiratory sounds. These noise sources often overlap spectrally with target vocalizations, reducing detection sensitivity, particularly for short, low-energy events such as sneezes [49]. To mitigate background noise in commercial farm environments, advanced signal-processing techniques such as spectral subtraction, wavelet denoising, Wiener filtering, and multi-taper spectral analysis have been widely applied to improve signal-to-noise ratio [47,90]. Wavelet-based methods combined with pulse extraction are particularly effective in suppressing both continuous and transient noise, enabling clearer isolation of respiratory sounds [90]. At the model level, the introduction of a custom Burn Layer in CNNs—injecting controlled random noise during training—has increased robustness to input variability and reduced overfitting while maintaining high sensitivity with fewer parameters [33]. Multi-domain feature extraction, integrating time- and frequency-based features, MFCCs, and sparse representations, combined with feature selection and linear–nonlinear fusion strategies, further enhances classification performance and generalization [91,92]. Transfer learning approaches such as improved TrAdaBoost address age-related variability in broiler vocalizations, improving cross-dataset applicability [26], while confidence-interval-based random forest methods enable more reliable recognition of overlapping sounds in complex acoustic environments [93]. Together, these innovations substantially improve the robustness and deployment readiness of broiler sound classification systems in real-world noisy conditions [26,33,90,91,92,93]. Nonetheless, farm soundscapes remain acoustically complex, underscoring the need for continued development of noise-robust algorithms for reliable respiratory health monitoring [49,94].
Although coughs, sneezes, distress calls, and other respiratory-related sounds in broiler chickens exhibit identifiable acoustic patterns, their practical use in AI systems is constrained by substantial overlap with non-pathological sounds such as wing flapping, pecking, and environmental noise. Most studies characterize these sounds under controlled or semi-controlled conditions, which limits their external validity in commercial farms where background noise, bird density, and ventilation systems dominate the acoustic scene. Importantly, the acoustic manifestation of respiratory distress is not static but varies with age, growth rate, and environmental stressors, making fixed-rule sound definitions insufficient. Consequently, respiratory sound characterization should be viewed as a probabilistic signal embedded within complex soundscapes rather than as isolated acoustic events, reinforcing the need for robust feature learning and domain-adaptive AI models rather than reliance on handcrafted acoustic thresholds.
Different poultry sound types provide information on respiratory health, stress, growth, and welfare at varying levels of evidence. Figure 2 summarizes the relationships between major vocalization categories and their documented applications in health and welfare monitoring.
Table 3 describes the acoustic characteristics of key sound categories relevant to broiler respiratory health and welfare, including coughs, sneezes, distress calls, normal vocalizations, and silence.

5. Audio Data Acquisition in Broiler Production Systems

Audio monitoring in broiler houses predominantly relies on single omnidirectional microphones, reflecting a balance between practicality, coverage, and minimal disturbance to birds. Most studies report the use of one centrally placed microphone rather than directional devices or microphone arrays. For example, microphones positioned approximately 40 cm above the birds’ backs in the center of commercial houses have been used to record continuous audio at standard sampling rates, capturing vocalizations alongside environmental sounds such as fans and feeders [95,102]. In more task-specific applications, localized microphones have been attached to feeders to record pecking sounds, further supporting the preference for simple single-point installations over complex arrays in commercial settings [103]. Overall, omnidirectional microphones provide sufficient acoustic information for health and behavior monitoring while remaining easy to deploy and maintain [58,95,103].
Common sampling rates for animal and poultry bioacoustic monitoring typically range from 16 kHz to 48 kHz, selected according to the frequency range of target vocalizations and practical constraints. For broiler monitoring, 32 kHz and 44.1 kHz are frequently used, as they capture biologically relevant vocal frequencies while balancing data size and recording duration [95,102,104,105]. Higher rates (44.1–48 kHz) enable more detailed analysis of subtle welfare-related cues but increase storage and processing demands, whereas lower rates may reduce detection accuracy [36,106]. Recording durations vary from minutes to continuous multi-day monitoring, with audio commonly segmented into shorter clips for analysis, typically using window sizes between 1 and 10 s or aggregated segments of 10 min to 1 h [58,96,107]. Time–frequency transformations such as Fast Fourier Transform and wavelet analysis are routinely applied using standard software platforms, e.g., MATLAB (R2018b, The MathWorks, Inc., Natick, MA, USA) and Adobe Audition CS6, to prepare signals for feature extraction [58,95,102]. Some studies integrate audio with video monitoring to link vocal patterns with observable behaviors or health status, enhancing interpretation and validation of sound-based indicators [31,95,108]. While higher stocking densities can increase overall sound energy and overlapping vocalizations due to elevated activity and stress, these conditions primarily affect signal complexity rather than feasibility of data acquisition [109,110].
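The clip segmentation described above can be illustrated with a minimal NumPy sketch that splits a continuous recording into fixed-length analysis windows. The 32 kHz sampling rate and 5 s window are example values drawn from the ranges reported in the literature, not a prescribed configuration.

```python
import numpy as np

def segment(signal, sr, win_s=5.0, hop_s=5.0):
    """Split a continuous recording into fixed-length clips.

    Non-overlapping by default (hop equals window); a smaller hop_s
    would produce overlapping windows.
    """
    win, hop = int(win_s * sr), int(hop_s * sr)
    n = 1 + max(0, len(signal) - win) // hop
    return np.stack([signal[i * hop:i * hop + win] for i in range(n)])

sr = 32000                                           # example broiler-monitoring rate
one_minute = np.random.default_rng(0).standard_normal(60 * sr)
clips = segment(one_minute, sr, win_s=5.0)           # 12 clips of 5 s each
```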
Despite advances in recording hardware, annotation remains a major bottleneck in broiler audio research. Overlapping vocalizations—such as simultaneous coughs, purrs, and movement sounds—reduce labeling accuracy and complicate sound event detection. Rare events, including coughs or sneezes during early disease stages, further limit labeled data availability; active learning strategies that prioritize informative samples have been proposed to address this imbalance [111]. Additional challenges include annotator disagreement and background noise, which can bias model evaluation if uncertainty is not explicitly managed [111]. However, emerging onset-based detection methods and exploratory clustering approaches show promise for improving robustness in complex, noisy broiler house environments [112,113]. For example, machine-learning approaches combining random forests with confidence-based classification have achieved high accuracy in identifying overlapping sound degrees, reaching over 97% in some cases [93]. Table 4 provides an overview of representative poultry sound recording studies, highlighting recording environments, microphone setups, annotation strategies, and major methodological limitations.
The quality and structure of recorded audio fundamentally determine the effectiveness of downstream feature extraction. Microphone placement, sampling rate, background noise, and annotation reliability jointly constrain the spectral resolution, temporal precision, and signal-to-noise characteristics of the dataset. These factors directly influence which acoustic features can be meaningfully extracted and how reliably biological information—such as respiratory patterns or stress-related vocalizations—can be represented for machine learning.

6. Audio Feature Extraction and Representation Techniques

Feature extraction for broiler audio classification typically integrates multiple domains to capture comprehensive sound characteristics. Standard pipelines extract around 60-dimensional feature vectors per frame, including time-domain features (e.g., energy, zero-crossing rate), frequency-domain descriptors (e.g., spectral centroid, bandwidth), MFCCs, and sparse representation features [31,91,101]. Preprocessing usually involves noise reduction (e.g., Wiener filtering or wavelet denoising), followed by sub-frame segmentation and endpoint detection [47,90]. MFCCs remain the most widely used features due to their ability to capture the spectral envelope of broiler vocalizations and respiratory sounds, and are commonly combined with energy-based and wavelet descriptors to improve discrimination between normal and abnormal events such as coughs and sneezes [31,48,91,101]. Feature normalization and selection methods, including random forest importance ranking and linear–nonlinear fusion, are then applied to reduce dimensionality and improve classification performance [91,92].
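A minimal sketch of such a multi-domain frame descriptor is shown below, assuming NumPy and a synthetic tonal frame. The four features computed (energy, zero-crossing rate, spectral centroid, spectral bandwidth) are a small illustrative subset of the roughly 60-dimensional vectors used in the cited pipelines.

```python
import numpy as np

def frame_features(frame, sr):
    """Toy multi-domain descriptor for one audio frame:
    time-domain energy and zero-crossing rate, plus
    frequency-domain spectral centroid and bandwidth."""
    energy = float(np.mean(frame ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
    mag = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    p = mag / (mag.sum() + 1e-12)                     # normalized spectral weights
    centroid = float(np.sum(freqs * p))               # "center of mass" frequency
    bandwidth = float(np.sqrt(np.sum(p * (freqs - centroid) ** 2)))
    return np.array([energy, zcr, centroid, bandwidth])

sr = 22050
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 2000 * t)                   # tonal call stand-in
feats = frame_features(tone, sr)                      # centroid sits near 2000 Hz
```

In practice such per-frame vectors would be concatenated with MFCCs, wavelet, and sparse-representation features before normalization and selection.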
Feature representation serves as the functional interface between raw audio and AI model performance. The choice of acoustic descriptors—whether handcrafted features such as MFCCs or learned representations such as spectrogram embeddings—determines the type of information available to classification algorithms and strongly shapes model complexity, generalization capacity, and computational demands.
With the expansion of deep learning, spectrogram and Mel-spectrogram representations have become dominant, as they preserve both temporal and spectral dynamics while enabling image-based learning. CNNs trained on these representations consistently outperform classical methods, achieving accuracies above 90% for cough, distress, and welfare-related sound classification [33,117,119,120,121]. Mel-spectrograms are particularly effective in noisy commercial environments because they emphasize perceptually relevant frequency bands and suppress irrelevant spectral detail [31,120,122]. Hybrid approaches that combine spectrogram-based inputs with MFCCs or time-domain statistics further improve robustness and generalization [119,123].
Figure 3 presents representative STFT spectrograms and Mel-spectrograms for healthy, environmental noise, and unhealthy broiler sounds. Healthy vocalizations show relatively stable harmonic energy primarily concentrated in the low-to-mid frequency range (approximately 500–3000 Hz), with clear temporal structure corresponding to regular vocal activity. Noise recordings exhibit more diffuse and broadband spectral energy with reduced temporal regularity, reflecting mechanical and environmental background sources. In contrast, unhealthy sounds display irregular, burst-like and vertically oriented spectral patterns, with energy distributed across a wider frequency range, which is characteristic of coughing and respiratory distress events.
Compared with standard spectrograms, Mel-spectrograms provide a more compact and perceptually meaningful representation by emphasizing lower frequency components and smoothing high-frequency detail. This highlights discriminative acoustic patterns while reducing dimensionality, explaining why Mel-spectrograms are widely adopted as input features in modern deep learning-based broiler audio monitoring systems.
Early and late fusion are two main strategies for combining multiple feature sets or modalities in multimodal audio classification. Early fusion concatenates features from different sources into a single feature vector before classification, enabling the model to learn joint representations and exploit cross-modal correlations, which often leads to higher accuracy when sufficient training data is available [124,125,126]. However, this approach increases feature dimensionality and can cause overfitting in small or noisy datasets [124,127]. In contrast, late fusion combines the outputs of separate unimodal classifiers, typically through ensemble voting or meta-classifiers, making it more robust to limited data, modality-specific noise, and missing inputs [127,128]. Comparative studies show that late fusion can outperform early fusion in practical settings, for example, achieving higher accuracy (0.876 vs. 0.828) and F1-score in aggression detection tasks, while early fusion showed higher precision [128]. Similar trends have been reported in bioacoustic applications, where late fusion improved bird sound classification and acoustic scene recognition by around 10% compared to single models, demonstrating better generalization and stability [124,129,130].
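The difference between the two fusion strategies can be illustrated with a toy NumPy example using synthetic two-modality data and a simple nearest-centroid classifier. The data distributions, class separation, and voting rule are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, cls):
    """Two toy 'modalities' (e.g., MFCC-like and spectrogram-summary-like
    vectors) for one class; cls shifts the mean of every dimension."""
    a = rng.normal(cls, 1.0, size=(n, 13))            # modality A features
    b = rng.normal(cls, 1.0, size=(n, 8))             # modality B features
    return a, b

a0, b0 = make_data(200, 0.0)                          # class 0: normal sounds
a1, b1 = make_data(200, 1.5)                          # class 1: respiratory events

def centroid_clf(train0, train1):
    """Nearest-centroid classifier: predict 1 if closer to class-1 centroid."""
    c0, c1 = train0.mean(0), train1.mean(0)
    return lambda x: (np.linalg.norm(x - c1, axis=1) <
                      np.linalg.norm(x - c0, axis=1)).astype(int)

# Early fusion: concatenate feature vectors, then train a single classifier.
early = centroid_clf(np.hstack([a0, b0]), np.hstack([a1, b1]))

# Late fusion: one classifier per modality, decisions combined by a simple vote
# (here, positive if either modality votes positive).
clf_a = centroid_clf(a0, a1)
clf_b = centroid_clf(b0, b1)
late = lambda a, b: ((clf_a(a) + clf_b(b)) >= 1).astype(int)

ta, tb = make_data(100, 1.5)                          # held-out positive examples
acc_early = early(np.hstack([ta, tb])).mean()
acc_late = late(ta, tb).mean()
```

With well-separated synthetic classes both strategies score highly; the practical trade-offs (dimensionality versus robustness to missing or noisy modalities) appear only with realistic data, as the cited comparisons show.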

6.1. Core Acoustic Features and Their Mathematical Definitions

6.1.1. Short-Time Fourier Transform (STFT)

The Short-Time Fourier Transform (STFT) is used to analyze non-stationary animal sounds by representing signals in both time and frequency domains. For a signal x(t), STFT is defined as:
$$\mathrm{STFT}_x(\tau, \omega) = \int_{-\infty}^{\infty} x(t)\, w(t - \tau)\, e^{-j\omega t}\, dt$$
where $w(t - \tau)$ is a window function centered at time $\tau$, and $\omega$ is the angular frequency. STFT enables time-localized spectral analysis and is widely used to generate spectrograms for detecting coughs, sneezes, and distress calls in poultry audio monitoring [131,132,133,134].

6.1.2. Mel-Frequency Cepstral Coefficients (MFCCs)

MFCCs capture perceptually relevant spectral features of animal vocalizations. After computing the Fourier spectrum, energies are passed through Mel-scaled filters defined as:
$$m = 2595\, \log_{10}\!\left(1 + \frac{f}{700}\right)$$
The MFCCs are then obtained using the discrete cosine transform:
$$c_k = \sum_{m=1}^{M} \log(S_m)\, \cos\!\left[\frac{\pi k}{M}\left(m - \frac{1}{2}\right)\right]$$
where $S_m$ is the energy of the m-th Mel filter, $M$ is the number of filters, and $k$ is the coefficient index. MFCCs are extensively used for poultry health, stress, and behavior classification using both classical and deep learning models [135,136,137,138].
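The discrete cosine transform step can be implemented directly from the definition above. The sketch below applies it to a synthetic set of 26 Mel-band energies with a smooth exponential envelope (an illustrative assumption, not dataset values).

```python
import numpy as np

def mfcc_from_mel_energies(S, n_coeff=13):
    """MFCCs via the DCT of log Mel-filter energies:
    c_k = sum_{m=1..M} log(S_m) * cos(pi*k/M * (m - 1/2))."""
    M = len(S)
    m = np.arange(1, M + 1)
    log_s = np.log(np.maximum(S, 1e-12))              # guard against log(0)
    return np.array([np.sum(log_s * np.cos(np.pi * k / M * (m - 0.5)))
                     for k in range(n_coeff)])

# 26 synthetic Mel-band energies with a smooth spectral envelope
S = np.exp(-np.linspace(0, 3, 26))
c = mfcc_from_mel_energies(S)
# c[0] equals the sum of the log energies (cos(0) = 1 for every term);
# higher coefficients capture progressively finer envelope shape.
```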

6.1.3. Spectral Entropy

Spectral entropy measures the randomness of energy distribution across the frequency spectrum and is defined as:
$$H = -\sum_{i} P(f_i)\, \log P(f_i)$$
where $P(f_i)$ is the normalized power at frequency bin $f_i$. Lower entropy indicates structured tonal sounds, while higher entropy reflects noisy or complex signals. In broiler monitoring, spectral entropy correlates with distress and welfare status and is useful for real-time assessment of flock conditions [67,139,140,141].
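A direct implementation follows from the definition: normalize the power spectrum into a probability distribution and compute its Shannon entropy. The sketch below additionally divides by $\log$ of the number of bins so the result lies in [0, 1], a common but optional convention:

```python
import numpy as np

def spectral_entropy(x, normalize=True):
    """H = -sum_i P(f_i) log P(f_i) over the normalized power spectrum.
    With normalize=True, H is divided by log(n_bins) so 0 <= H <= 1."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    P = psd / psd.sum()
    P = P[P > 0]                        # drop zero bins to avoid log(0)
    H = -np.sum(P * np.log(P))
    return H / np.log(len(psd)) if normalize else H
```

On this scale, a pure tone (energy concentrated in few bins) scores much lower than broadband noise (energy spread across the spectrum), matching the tonal-versus-noisy interpretation above.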

6.1.4. Zero-Crossing Rate (ZCR)

Zero-crossing rate quantifies how frequently a signal crosses the zero-amplitude axis. For a discrete signal x[n]:
$$\mathrm{ZCR} = \frac{1}{2N} \sum_{n=1}^{N} \left| \mathrm{sgn}(x[n]) - \mathrm{sgn}(x[n-1]) \right|$$
ZCR reflects signal noisiness and frequency content and is used in poultry audio analysis to separate vocalizations from background noise and distinguish behavioral sounds such as feeding and distress [36,142,143,144,145].
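The definition translates into a two-line numpy computation: take the sign of each sample, difference adjacent signs, and normalize. For a sinusoid, the result approximates $2 f_0 / f_s$, which is why ZCR doubles as a crude pitch and noisiness indicator:

```python
import numpy as np

def zero_crossing_rate(x):
    """ZCR = (1 / 2N) * sum_n |sgn(x[n]) - sgn(x[n-1])| for a discrete frame x."""
    s = np.sign(x)
    return np.sum(np.abs(np.diff(s))) / (2.0 * len(x))
```

Broadband noise yields a ZCR near 0.5 (signs flip about half the time), far above that of a low-frequency vocalization, which is the property exploited to separate calls from background noise.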
Feature representation plays a central role in determining the accuracy, robustness, and deployability of audio-based broiler monitoring systems. Handcrafted features such as Mel-frequency cepstral coefficients (MFCCs), energy-based descriptors, and wavelet measures offer computational efficiency and partial interpretability, making them attractive for real-time and edge deployment. However, these features often fail to capture subtle temporal dynamics associated with early-stage respiratory distress and complex welfare states. Spectrogram-based representations provide richer time–frequency structure and, when combined with deep learning models, consistently improve detection performance in noisy commercial environments. Emerging self-supervised and transformer-based representation learning approaches show promise in extracting higher-level and potentially farm-invariant acoustic features, but their effectiveness remains constrained by the scarcity of large, diverse, and standardized poultry audio datasets. Table 5 summarizes the commonly used acoustic features in broiler sound classification.
The selection of feature representations ultimately defines the space of AI architectures that can be effectively employed in broiler audio analysis, as different models vary in their ability to exploit temporal structure, spectral patterns, and cross-feature dependencies. Model robustness and generalization capacity, therefore, depend not only on algorithmic design but also on how acoustic information is encoded at the input level.

7. Machine Learning and Deep Learning Models

Early broiler sound classification studies primarily relied on classical machine learning models such as k-Nearest Neighbors (kNN), Random Forest (RF), Decision Trees (DT), and ensemble approaches including TrAdaBoost. These models typically operate on handcrafted features derived from time, frequency, MFCC, and sparse representations of audio signals and can achieve classification accuracies exceeding 90% with careful feature engineering and tuning [26,31,36,91,101]. Several studies show that classical machine learning (ML) methods can achieve performance comparable to deep learning on small or well-structured datasets. For example, on the University of California, Irvine—Human Activity Recognition dataset, Linear Support Vector Classifier (SVC) achieved 96% accuracy, similar to CNN performance, while Random Forest reached 92% [147]. In small-sample image classification (COREL-1000), SVM slightly outperformed CNN (0.86 vs. 0.83) [148], and in low-shot Natural Language Processing (NLP) tasks, the performance gap between classical models and deep learning shrank to less than 2% when around 1000 labeled samples per class were available [149]. In audio-related tasks, classical models using hand-crafted features (e.g., MFCCs with SVM or Random Forest) have also matched or approached deep learning accuracy in constrained datasets [150]. Importantly, classical ML offers substantially lower computational burden, faster training, and greater interpretability, making it more suitable for resource-limited and explainability-critical applications, whereas deep learning generally requires larger datasets and higher computational resources despite its superior scalability on large, complex data [151,152,153]. RF models are frequently favored for their robustness and interpretability as they aggregate many decorrelated decision trees, which reduces overfitting and improves robustness, especially in high-dimensional or partially missing data [154,155,156]. 
For example, on large structured datasets (e.g., Kaggle tabular data with over one million samples), Random Forest outperformed single decision trees in accuracy and generalization [157]. In terms of interpretability, Random Forest provides feature importance measures and supports rule-extraction and surrogate tree methods, allowing predictions to be explained through human-readable rules or decision paths, making it suitable for domains requiring both accuracy and transparency such as healthcare, bioinformatics, and audio classification [154,155,158]. But these approaches are limited by their reliance on manual feature engineering, sensitivity to noise, and reduced generalization across broiler ages, housing systems, and farms [26,47,70]. They also struggle with overlapping vocalizations and complex soundscapes, restricting scalability for commercial deployment [36,93].
Deep learning models address many of these limitations by learning discriminative representations directly from data. CNN-based systems are often combined with Recurrent Neural Networks (RNNs), including LSTM and GRU (Gated Recurrent Unit) architectures, to capture the temporal structure of respiratory sounds and event boundaries [159,160,161]. Temporal Convolutional Networks (TCNs) have recently emerged as computationally efficient alternatives with competitive performance [162,163]. Transformer-based architectures, such as Audio Spectrogram Transformers (AST) and wav2vec-based models, further improve performance by modeling long-range temporal and spectral dependencies through self-attention [164,165,166]. These models achieve state-of-the-art accuracy and high mean average precision (up to 0.97) for broiler stress and welfare sound classification, particularly when incorporating longer input windows and metadata such as bird age [70,167].
Across model families, class imbalance—especially the rarity of cough and sneeze events—remains a major challenge. This is addressed using oversampling, e.g., Synthetic Minority Oversampling Technique (SMOTE), Adaptive Synthetic Sampling (ADASYN), cost-sensitive loss functions, ensemble learning, and Generative Adversarial Networks (GAN) based data augmentation [168,169,170,171,172]. While SMOTE and its variants are widely used and effective for general imbalanced datasets, they perform poorly when training data is extremely scarce, as they rely on local interpolation that cannot capture true data structure [171,173]. In such small-sample scenarios, GAN-based methods like Generative Adversarial Network Synthesis for Oversampling (GANSO) and Markov Random Fields (MRF) based oversampling have been shown to generate more realistic synthetic samples and outperform SMOTE by better preserving underlying data distributions [174]. However, these methods are computationally more complex, making SMOTE preferable for moderate datasets, while GAN/MRF approaches are more suitable for very limited data conditions [171,174,175,176]. Transfer learning and domain adaptation further enhance robustness across production stages and environments, positioning deep learning—particularly transformer-based models—as the most promising direction for scalable, real-time broiler audio monitoring systems [26,139,164,166].
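The core idea behind SMOTE-style oversampling is local interpolation: each synthetic minority sample is placed on the line segment between a real minority point and one of its k nearest minority neighbors. The sketch below illustrates that mechanism in plain numpy (it is a simplification for exposition, not the reference imbalanced-learn implementation, and the function name is ours):

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, seed=0):
    """SMOTE-style sketch: synthesize n_new minority samples by random
    interpolation between a minority point and a random one of its
    k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    # Pairwise distances within the minority class; exclude self-matches
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbors per point
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(min(k, len(X_min) - 1))]
        lam = rng.random()                      # interpolation factor in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)
```

Because every synthetic point lies between two real minority points, the method cannot extrapolate beyond the observed minority region, which is precisely the limitation noted above for extremely scarce classes such as coughs and sneezes.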
Model architecture critically influences the generalization and practical applicability of broiler audio analysis systems. Classical machine learning models provide strong and interpretable baselines under limited data conditions, but their performance degrades under domain shifts across farms, bird ages, and recording setups. Deep learning models, particularly convolutional and attention-based architectures, better capture hierarchical and temporal patterns in vocalizations, leading to superior accuracy in complex acoustic environments. However, these gains come at the cost of increased computational burden, reduced transparency, and challenges in explainability and reproducibility. Overall, the primary limitation in current broiler audio-AI research is not peak model accuracy but the lack of rigorous cross-domain evaluation, which makes it difficult to distinguish genuine biological signal learning from overfitting to farm-specific acoustic signatures and highlights the need for standardized benchmarks and deployment-oriented validation. Table 6 summarizes commonly applied AI models to broiler sound classification.
Beyond algorithmic performance, the practical utility of audio-based AI systems is determined by their robustness, temporal resolution, and ability to generalize across heterogeneous farm conditions, which ultimately governs whether models can move beyond laboratory benchmarks toward real-time disease surveillance, early warning, and decision-support applications in commercial broiler production.

8. Applications of Audio-AI in Respiratory Health Monitoring

Audio-based cough detection has emerged as a powerful, non-invasive tool for early disease warning in broiler production. Systems using MFCCs, sparse representations, and time–frequency features achieve classification accuracies of approximately 90–91%, while flock-level health prediction based on cough-rate estimation reaches nearly 99% accuracy under controlled conditions [31]. Transfer learning improves robustness across broiler age groups, maintaining accuracies above 80% in variable production environments [26]. High recall and precision (>90%) reported in commercial-scale facilities demonstrate feasibility under noisy, real-world conditions [177].
Acoustic monitoring enables detection of respiratory disease before visible clinical signs appear, providing a critical window for early intervention. Infectious bronchitis and Newcastle disease have been detected within days post-infection using wavelet entropy and MFCC-based approaches, with accuracies of 80–83% [30,52]. Deep learning methods further improve early-stage detection, exceeding 90% accuracy in some studies [30]. Continuous monitoring of cough frequency and rale-like sounds correlates well with disease progression and distinguishes healthy from infected birds in real time [23,24,31]. Despite these advances, generalization across breeds, farms, and disease types remains a key challenge [25,75].
In commercial settings, audio-AI systems are increasingly integrated with complementary sensing modalities. Vision-based platforms detect posture and mobility changes with accuracy of up to 97.8% [178], while radar-based sensors provide contactless monitoring of respiratory and cardiac activity with reported accuracies around 96% [179]. These systems are often embedded in IoT frameworks, incorporating environmental parameters such as temperature, humidity, and gas concentrations [180]. However, false alarms driven by environmental noise, overlapping sounds, and age-related vocal variability remain a major limitation [48,181]. Improving specificity and reducing unnecessary alerts are therefore essential for widespread adoption [21,182].
Audio-based AI systems for respiratory health detection in broiler chickens demonstrate strong potential for early, non-invasive disease surveillance, often identifying abnormal respiratory events before overt clinical signs appear. Nevertheless, most reported performances are derived from binary or narrowly defined classification tasks conducted within single experimental settings. Such designs limit the systems’ ability to generalize across heterogeneous production environments, pathogen profiles, and management practices. Moreover, the majority of studies focus on detection accuracy while overlooking false-positive burdens, which can undermine farmer trust and practical usability. From a translational standpoint, respiratory sound detection should be integrated with contextual metadata—such as age, temperature, ventilation rate, and stocking density—to move from event detection toward actionable health decision support. Table 7 summarizes sound-based AI approaches for poultry respiratory disease detection and early warning.

9. Audio-Based Welfare and Stress Monitoring

Stress and welfare-related conditions significantly alter broiler vocalization patterns by affecting call rate, frequency, and acoustic structure. Acute stressors such as food or water deprivation increase vocal activity and modify spectral features, including centroid and bandwidth, while mitigation strategies such as hydrated gels reduce these responses [68]. Prolonged or intense stress is associated with increased high-energy distress calls characterized by reduced spectral entropy, which correlates with poorer welfare outcomes, including reduced growth and increased mortality [67]. Vocal responses are strongly age-dependent and influenced by circadian rhythms, complicating interpretation without age-aware models [55,56,88].
Acoustic analysis has proven effective for detecting distress, aggression, and discomfort. Distress calls serve as reliable biomarkers of negative welfare states, with spectral entropy providing a quantitative proxy for stress intensity [67]. Deep learning models, including CNNs and transformer-based architectures, achieve high accuracy and mean average precision (up to 97%) in classifying stress-related vocalizations and differentiating between stressors and age groups [36,70,96]. Classical models such as SVMs also show moderate to high performance, although results are often age-dependent [184]. Emerging large-scale audio models capable of decoding emotional and physiological cues further highlight the potential of sound-based, non-invasive welfare monitoring, although most validation remains limited to controlled or semi-commercial conditions [98,118,139,185].
Environmental conditions strongly shape welfare-related vocalizations. Suboptimal environments consistently increase distress calling and are associated with poorer growth and welfare indicators [67]. Reduced stocking density and increased environmental complexity generally lower fear and anxiety responses, while the effects of enrichment, heat stress, and housing design vary with age and stimulus type [71,186,187,188,189]. Vocal behavior is also influenced by genotype and breed, suggesting the need for adaptive, population-aware models [58,107,190,191,192]. Together, these findings support acoustic monitoring as a sensitive, age-aware tool for broiler welfare assessment under diverse production conditions.
While vocalizations and activity-related sounds offer valuable insights into broiler welfare, their interpretation through AI models remains inherently ambiguous due to the multifactorial nature of stress and discomfort. Similar acoustic patterns may arise from thermal stress, social interactions, or environmental disturbances, complicating one-to-one mappings between sound events and welfare states. Current AI approaches often infer welfare indirectly by associating sound frequency or intensity with predefined stress labels, which risks oversimplification. For welfare-sensitive applications, model transparency and explainability become particularly critical, as automated alerts may influence management interventions. Therefore, audio-based welfare assessment should be framed as a supportive, probabilistic indicator rather than a deterministic diagnostic tool, ideally complemented by multimodal sensing and expert validation. Table 8 links welfare indicators with characteristic sound patterns and AI-based monitoring approaches.
Figure 4 illustrates the dominant feature–model–task relationships reported in poultry health and welfare monitoring literature.

10. Evaluation Metrics and Real-World Deployment Performance

A diverse set of metrics is used to evaluate broiler sound classification systems, reflecting variation in task objectives and model designs. Commonly reported metrics include classification accuracy, recognition accuracy, F1-score, mean average precision (mAP), signal-to-noise ratio (SNR), and root mean square error (RMSE). Reported accuracies typically range from 88% to over 94%, with optimized classical models such as kNN and Random Forest frequently exceeding 90% accuracy [31,91,101]. Recognition accuracy can reach up to 99% when majority voting is applied to frame-level predictions [91,101], while deep learning–based multi-class stress detection reports mAP values as high as 0.97, indicating a strong balance between precision and recall [70]. Signal-quality metrics such as SNR and RMSE are widely used to assess preprocessing effectiveness, as improved noise suppression directly enhances downstream classification performance [47,90]. Additional indicators, including cough rate and confidence intervals for overlapping sound recognition, have been proposed to quantify health-related vocal activity in complex acoustic environments [31,93].
Despite these high reported values, accuracy alone is insufficient for real-world deployment in commercial poultry houses. Farm environments are characterized by high background noise, environmental variability, sensor failures, and data irregularities such as missing values and outliers, all of which can degrade system reliability [193]. Moreover, accuracy does not reflect performance for rare but welfare-critical events, alert timeliness, robustness across production cycles, or system usability—factors essential for farmer trust and effective decision-making [193,194,195,196]. Consequently, broader evaluation frameworks incorporating robustness, scalability, and operational relevance are required for practical deployment.
Rare event detection, including abnormal coughing or distress calls, is more appropriately evaluated using recall, F1-score, and event-based mAP rather than frame-level accuracy. Recall is particularly critical in health monitoring, as missed detections can have serious welfare and economic consequences [48]. F1-scores of approximately 94% have been reported for rare cough detection using wavelet-based features and hidden Markov models under controlled conditions [48]. Event-based mAP provides a more realistic assessment of system effectiveness in noisy and overlapping acoustic settings [197]. Noise reduction quality remains a key determinant of rare-event detection performance, directly influencing recall and F1-score [90].
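The gap between frame-level accuracy and event-level recall is easy to demonstrate numerically. The helper below computes precision, recall, and F1 from event counts; in the hypothetical scenario of 10 cough frames among 1000 (illustrative numbers, not from a cited study), a detector that never fires still scores 99% accuracy while its recall is zero:

```python
import numpy as np

def event_detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from event-level counts, the metrics that
    matter for rare cough/distress events (frame accuracy can hide misses)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def frame_accuracy(y_true, y_pred):
    """Fraction of frames labeled correctly, regardless of class rarity."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())
```

For example, with 10 true cough frames in 1000 and no detections, `frame_accuracy` returns 0.99 while `event_detection_metrics(0, 0, 10)` returns zero recall, which is why recall and F1, not accuracy, should gate deployment decisions for rare events.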
Real-time deployment introduces additional constraints related to hardware capacity, computational efficiency, and latency. Lightweight deep learning architectures optimized for edge deployment have demonstrated real-time inference with latencies below 200 ms on embedded systems, confirming feasibility for on-site monitoring in commercial poultry environments [195,198,199]. Low-latency processing is essential for timely intervention following health or welfare anomalies [200,201]. However, environmental noise, connectivity limitations, and hardware–software integration challenges continue to limit large-scale adoption, highlighting the need for careful co-design of models, hardware, and data pipelines. Table 9 summarizes commonly reported evaluation metrics and their suitability for commercial deployment.

11. Key AI Challenges and Limitations in Broiler Audio Monitoring

A major limitation of current audio-based AI systems for poultry health and welfare monitoring is pervasive dataset bias, which directly undermines robustness and real-world applicability. Most datasets lack standardization in recording protocols, annotation criteria, and evaluation metrics, making cross-study comparison difficult and often misleading [36,139]. Data are frequently collected from limited acoustic environments, breeds, housing systems, and management conditions, resulting in models that perform well in controlled or single-farm settings but fail to generalize elsewhere [139,206]. Class imbalance further exacerbates this issue, as health- or stress-related sounds such as coughs, sneezes, and distress calls occur far less frequently than background noise, biasing models toward majority classes and inflating headline accuracy metrics [172,207]. Additionally, many datasets underrepresent the acoustic complexity of commercial farms, where overlapping sounds from ventilation systems, feeders, and animal activity differ substantially from experimental conditions [139]. Together, these factors indicate that reported performance often reflects dataset-specific characteristics rather than true predictive capability.
Closely related to dataset bias is the widespread reliance on single-farm or single-flock data, which limits deployment readiness. Models trained on a single production site tend to learn farm-specific acoustic signatures related to microphone placement, housing reverberation, and management routines, leading to substantial performance degradation when evaluated across farms [206,208,209]. In poultry systems, domain shift is further driven by age-dependent vocal behavior, housing design, ventilation regimes, and potential breed-specific physiological differences [56,58,139]. Vocalization frequency and structure change markedly as broilers grow, meaning age imbalance can inadvertently confound health and welfare predictions [58,108]. Addressing these challenges requires multi-farm datasets, explicit cross-environment validation, and the application of domain adaptation, data augmentation, and noise-robust training strategies to improve generalization across production contexts [25,36,210].
A fundamental structural limitation in the field is the absence of standardized benchmark datasets and shared evaluation frameworks for poultry audio AI. Unlike computer vision or human speech recognition, poultry audio research remains fragmented, with most studies relying on proprietary or newly collected datasets tailored to specific experiments [25,36,96]. This lack of shared benchmarks hampers reproducibility, prevents fair comparison between algorithms, and slows cumulative methodological progress [139,211]. Although some publicly available datasets and open-source tools exist, their scope is limited, and poultry-specific large-scale benchmarks remain scarce [25,36,96]. Inconsistent public release of datasets and code further restricts transparency and independent validation [33,36]. Establishing open, multi-environment benchmark datasets with harmonized evaluation protocols is therefore critical to advancing reproducibility, comparability, and real-world trust in poultry audio AI systems [211,212,213].
Beyond data-related issues, interpretability and deployment constraints pose significant barriers to responsible adoption, particularly for welfare-critical decisions. Interpretable models are essential for building trust among farmers, veterinarians, and regulators, as they enable AI outputs to be linked to biologically meaningful vocal features and known behavioral or physiological mechanisms [36,214]. In contrast, black-box models—despite high predictive accuracy—introduce risks related to accountability, undetected errors, and inappropriate interventions in variable farm environments [206,215,216,217,218]. These concerns are compounded by edge deployment constraints, as commercial poultry houses require low-power, continuously operating systems. Model compression techniques such as pruning, quantization, and knowledge distillation enable edge deployment but often involve trade-offs between accuracy and efficiency [219,220,221,222,223,224,225,226].
Future poultry audio AI systems must therefore balance interpretability, robustness, and computational feasibility to ensure ethical, transparent, and scalable deployment within precision livestock farming.

12. Research Gaps and Future Directions

Building on the systemic limitations discussed in Section 11, several task-specific research gaps remain in broiler audio-AI. In particular, subtle respiratory sounds such as sneezing, rales, and low-intensity distress calls remain under-explored because they are infrequent, weak in amplitude, and difficult to separate from background noise in commercial farm environments [48,49]. Their rarity (e.g., only 0.24% of recorded sounds in one study) leads to limited labeled data and reduced detection sensitivity, even when precision is high [49]. Consequently, most studies prioritize more salient vocalizations such as coughing, which are easier to detect and more directly linked to respiratory disease [26,31,48]. The difficulty of separating subtle respiratory events from background noise, combined with age- and farm-dependent vocal variability, further increases the need for large, diverse datasets and advanced signal processing techniques, limiting broader investigation of these sounds [48,49].
Most existing broiler audio-AI systems also rely on binary rather than multi-class classification due to practical constraints in data availability, labeling effort, and computational complexity. Binary models targeting a single event (e.g., cough vs. non-cough) simplify annotation and training while maintaining higher robustness and accuracy under variable farm conditions [31,117]. In contrast, reliable multi-class classification requires extensive, well-annotated datasets covering diverse vocalizations and noise sources, which are costly to produce and difficult to generalize across broiler ages and environments [26,31]. Although recent studies demonstrate the feasibility of multi-class stress and sound-type classification, these models often face trade-offs in complexity, generalization, and suitability for low-power edge deployment [33,117], with cross-farm transferability remaining weak due to differences in acoustics, management practices, and housing systems [36,139,206].
Furthermore, most studies report only single-run performance metrics without confidence intervals or repeated evaluations. This prevents rigorous assessment of model stability and reproducibility, as performance variance across random initializations, train–test splits, or cross-validation folds is rarely quantified. Consequently, reported accuracies may overestimate real-world performance stability. Limiting factors in poultry audio-based AI monitoring are summarized in Figure 5.
Emerging learning paradigms offer promising directions to address current limitations in poultry audio-AI systems. Self-supervised learning enables models to learn robust audio representations from large volumes of unlabeled data, reducing dependence on manual annotation and improving downstream performance under limited labeled conditions [227,228]. Multimodal learning further enhances robustness by fusing audio with video and environmental sensor data, mitigating noise, variability, and farm-specific effects while improving generalization [229]. Such multimodal AI systems capture the multidimensional nature of animal welfare more effectively than unimodal approaches, providing deeper insights into behavior, health status, and environmental stressors [139,230,231]. Preliminary exploratory studies report that feature-level fusion strategies demonstrate superior robustness and scalability in real-world poultry farm conditions, while livestock studies report welfare and disease prediction accuracies exceeding 90%, supporting proactive intervention and improved management efficiency [230,231,232]. IoT-based multi-sensor fusion also improves anomaly detection and predictive analytics, with reported gains of 25% and 30% in health metric accuracy and sensor noise reduction, respectively [232]. Combined with domain adaptation, these approaches show strong potential for improving cross-farm performance [26,70]. Future research should adopt repeated experimental protocols and report mean performance with standard deviation or confidence intervals to ensure robust comparison, reproducibility, and deployment reliability. At the deployment level, edge-AI enables real-time, scalable application through low-latency on-site inference using efficient TinyML and computer vision models, supporting practical health, welfare, and environmental monitoring in commercial broiler houses [202,233,234,235].
Collectively, advances in self-supervised, multimodal, and edge-AI frameworks are expected to drive the next generation of farm-ready audio-AI systems for broiler production (Figure 6).

13. Conclusions

Audio-AI has already demonstrated strong capabilities in broiler monitoring by accurately detecting and classifying vocalizations related to health and welfare, such as coughs and distress calls, achieving classification accuracies often above 90% using machine learning models like Random Forest, SVM, and CNNs. These systems enable non-invasive, real-time health assessment and early disease detection, providing actionable insights for timely intervention and improved animal welfare. However, large-scale adoption faces limitations including variability in vocalizations across ages and environments, lack of standardized datasets and evaluation protocols, computational constraints for continuous monitoring, and economic barriers such as high initial costs and uncertain returns, especially in resource-limited settings. Future AI systems can improve respiratory health and welfare by integrating multimodal sensor data (audio, video, environmental), employing transfer learning and domain adaptation to enhance generalization, deploying edge-AI for low-latency and scalable monitoring, and emphasizing explainability to build trust among stakeholders. This field is critical for sustainable poultry production because it supports early disease detection, reduces labor and resource use, enhances animal welfare, and lowers environmental impacts through precision management, thereby contributing to ethical, efficient, and resilient food systems. Continued research and development focused on robustness, cost-effectiveness, and standardization will be essential to realize the full potential of audio-AI in commercial broiler houses.

Author Contributions

Conceptualization, M.S., H.-S.M., Y.-H.K. and C.-J.Y.; writing—original draft preparation, M.S., H.-S.M., Y.-H.K., E.B.L. and M.K.H.; writing—review and editing, M.S., H.-S.M., E.B.L., M.K.H., J.-G.K., Y.-H.K., A.M., H.-R.P. and C.-J.Y.; visualization, M.S. and H.-S.M.; supervision, C.-J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were generated during this study.

Acknowledgments

This work was supported by Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) and Korea Smart Farm R&D Foundation through Smart Farm Innovation Technology Development Program, funded by Ministry of Agriculture, Food and Rural Affairs (MAFRA) and Ministry of Science and ICT (MSIT) and Rural Development Administration (RDA) (RS-2025-02216184).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
IBV: Infectious Bronchitis Virus
NDV: Newcastle Disease Virus
aMPV: Avian Metapneumovirus
ILTV: Infectious Laryngotracheitis Virus
PCR: Polymerase Chain Reaction
LAMP: Loop-Mediated Isothermal Amplification
PLF: Precision Livestock Farming
MFCCs: Mel-Frequency Cepstral Coefficients
CNN: Convolutional Neural Network
LSTM: Long Short-Term Memory Network
STFT: Short-Time Fourier Transform
AIV: Avian Influenza Virus
APV: Avian Pneumovirus
NH3: Ammonia
FCR: Feed Conversion Ratio
ANN: Artificial Neural Network
ML: Machine Learning
dB: Decibel
PN: Pleasure Notes
CO2: Carbon Dioxide
ZCR: Zero-Crossing Rate
kNN: k-Nearest Neighbors
RF: Random Forest
DT: Decision Trees
SVC: Support Vector Classifier
SVM: Support Vector Machine
NLP: Natural Language Processing
RNN: Recurrent Neural Networks
GRU: Gated Recurrent Unit
TCN: Temporal Convolutional Network
AST: Audio Spectrogram Transformers
SMOTE: Synthetic Minority Oversampling Technique
ADASYN: Adaptive Synthetic Sampling
GAN: Generative Adversarial Networks
GANSO: Generative Adversarial Network Synthesis for Oversampling
MRF: Markov Random Fields
ELM: Extreme Learning Machine
DL: Deep Learning
WMFCC: Wavelet-based Mel-Frequency Cepstral Coefficients
HMM: Hidden Markov Model
MCC: Matthews Correlation Coefficient
mAP: Mean Average Precision
ND: Newcastle Disease
IB: Infectious Bronchitis
FFT: Fast Fourier Transform
DWT: Discrete Wavelet Transform
TDOA: Time Difference of Arrival
SNR: Signal-to-Noise Ratio
RMSE: Root Mean Square Error

References

  1. Regragui, R.; Arbani, O.; Touil, N.; Bouzoubaa, K.; Oukessou, M.; El Houadfi, M.; Fellahi, S. Surveillance and Coinfection Dynamics of Infectious Bronchitis Virus and Avian Influenza H9N2 in Moroccan Broiler Farms (2021–2023): Phylogenetic Insights and Impact on Poultry Health. Viruses 2025, 17, 786. [Google Scholar] [CrossRef] [PubMed]
  2. Roussan, D.A.; Haddad, R.; Khawaldeh, G. Molecular Survey of Avian Respiratory Pathogens in Commercial Broiler Chicken Flocks with Respiratory Diseases in Jordan. Poult. Sci. 2008, 87, 444–448. [Google Scholar] [CrossRef]
  3. Liu, H.; Pan, S.; Wang, C.; Yang, W.; Wei, X.; He, Y.; Xu, T.; Shi, K.; Si, H. Review of Respiratory Syndromes in Poultry: Pathogens, Prevention, and Control Measures. Vet. Res. 2025, 56, 101. [Google Scholar] [CrossRef] [PubMed]
  4. Abdelaziz, A.M.; Mohamed, M.H.A.; Fayez, M.M.; Al-Marri, T.; Qasim, I.; Al-Amer, A.A. Molecular Survey and Interaction of Common Respiratory Pathogens in Chicken Flocks (Field Perspective). Vet. World 2019, 12, 1975–1986. [Google Scholar] [CrossRef]
  5. Gickel, J.; Hankel, J.; El-Wahab, A.A.; Berenike Arnold, C.; Visscher, C. The Effect of Pathogen-Induced Diseases on the Carbon Footprint of Broiler Chickens. Poult. Sci. 2025, 104, 105817. [Google Scholar] [CrossRef]
  6. Pan, Q.; Liu, A.; Zhang, F.; Ling, Y.; Ou, C.; Hou, N.; He, C. Co-Infection of Broilers with Ornithobacterium rhinotracheale and H9N2 Avian Influenza Virus. BMC Vet. Res. 2012, 8, 104. [Google Scholar] [CrossRef]
  7. Yehia, N.; Salem, H.M.; Mahmmod, Y.; Said, D.; Samir, M.; Mawgod, S.A.; Sorour, H.K.; AbdelRahman, M.A.A.; Selim, S.; Saad, A.M.; et al. Common Viral and Bacterial Avian Respiratory Infections: An Updated Review. Poult. Sci. 2023, 102, 102553. [Google Scholar] [CrossRef]
  8. Kamaruzaman, I.N.A.; Ng, K.Y.; Hamdan, R.H.; Shaharulnizim, N.; Zalati, C.W.S.C.W.; Mohamed, M.; Nordin, M.L.; Rajdi, N.Z.I.M.; Abu-Bakar, L.; Reduan, M.F.H. Complex Chronic Respiratory Disease Concurrent with Coccidiosis in Broiler Chickens in Malaysia: A Case Report. J. Adv. Vet. Anim. Res. 2021, 8, 576–580. [Google Scholar] [CrossRef] [PubMed]
  9. Bashashati, M.; Shojaei, M.; Sabouri, F. Pathogenic Bacteria Associated with Outbreaks of Respiratory Disease in Iranian Broiler Farms. Vet. Med. Sci. 2023, 9, 1675–1684. [Google Scholar] [CrossRef]
  10. Ellakany, H.F.; Elbestawy, A.R.; Abd-Elhamid, H.S.; Gado, A.R.; Nassar, A.A.; Abdel-Latif, M.A.; Ghanima, I.I.A.; Abd El-Hack, M.E.; Swelum, A.A.; Saadeldin, I.M.; et al. Effect of Experimental Ornithobacterium rhinotracheale Infection along with Live Infectious Bronchitis Vaccination in Broiler Chickens. Poult. Sci. 2019, 98, 105–111. [Google Scholar] [CrossRef]
  11. Sajnani, M.R.; Sudarsanam, D.; Pandit, R.J.; Oza, T.; Hinsu, A.T.; Jakhesara, S.J.; Solosanc, S.; Joshi, C.G.; Bhatt, V.D. Metagenomic Data of DNA Viruses of Poultry Affected with Respiratory Tract Infection. Data Brief 2018, 16, 157–160. [Google Scholar] [CrossRef]
  12. Mehrabadi, M.H.F.; Ghalyanchilangeroudi, A.; Tehrani, F.; Hajloo, S.A.; Bashashati, M.; Bahonar, A.R.; Pourjafar, H.; Ansari, F. Assessing the Economic Burden of Multi-Causal Respiratory Diseases in Broiler Farms in Iran. Trop. Anim. Health Prod. 2022, 54, 117. [Google Scholar] [CrossRef]
  13. Hassan, K.E.; Shany, S.A.S.; Ali, A.; Dahshan, A.-H.M.; El-Sawah, A.A.; El-Kady, M.F. Prevalence of Avian Respiratory Viruses in Broiler Flocks in Egypt. Poult. Sci. 2016, 95, 1271–1280. [Google Scholar] [CrossRef]
  14. Maletić, J.; Jezdimirović, N.; Spalević, L.; Milovanović, B.; Vasić, A.; Kureljušić, J.; Kureljušić, B. Pathological and Molecular Investigation of Infectious Bronchitis in Broilers: Analyzing the Impact of Biosecurity Lapses. Front. Vet. Sci. 2025, 12, 1548248. [Google Scholar] [CrossRef]
  15. Uddin, M.M.; Hasan, A.; Hossain, I.; Helal, S.B.; Begum, J.A.; Dauphin, G.; Chowdhury, E.H.; Parvin, R. Molecular Detection and Epidemiological Distribution of Poultry Respiratory Viral Pathogens in Commercial Chicken Flocks in Bangladesh. Poult. Sci. 2025, 104, 104679. [Google Scholar] [CrossRef]
  16. Parvin, R.; Kabiraj, C.K.; Hossain, I.; Hassan, A.; Begum, J.A.; Nooruzzaman, M.; Islam, M.T.; Chowdhury, E.H. Investigation of Respiratory Disease Outbreaks of Poultry in Bangladesh Using Two Real-Time PCR-Based Simultaneous Detection Assays. Front. Vet. Sci. 2022, 9, 1036757. [Google Scholar] [CrossRef]
  17. El-Tholoth, M.; Bau, H.H. Molecular Detection of Respiratory Tract Viruses in Chickens at the Point of Need by Loop-Mediated Isothermal Amplification (LAMP). Viruses 2024, 16, 1248. [Google Scholar] [CrossRef] [PubMed]
  18. Enyetornye, B.; Yondo, A.; Velayudhan, B.T. Point-of-Care Diagnostic Testing for Emerging and Existing Poultry Viral Respiratory Pathogens Using Loop-Mediated Isothermal Amplification. Pathogens 2025, 14, 657. [Google Scholar] [CrossRef] [PubMed]
  19. Jbenyeni, A.; Croville, G.; Cazaban, C.; Guérin, J.-L. Predominance of Low Pathogenic Avian Influenza Virus H9N2 in the Respiratory Co-Infections in Broilers in Tunisia: A Longitudinal Field Study, 2018–2020. Vet. Res. 2023, 54, 88. [Google Scholar] [CrossRef] [PubMed]
  20. Kamel, M.S.; Davidson, J.L.; Verma, M.S. Strategies for Bovine Respiratory Disease (BRD) Diagnosis and Prognosis: A Comprehensive Overview. Animals 2024, 14, 627. [Google Scholar] [CrossRef]
  21. He, P.; Chen, Z.; Yu, H.; Hayat, K.; He, Y.; Pan, J.; Lin, H. Research Progress in the Early Warning of Chicken Diseases by Monitoring Clinical Symptoms. Appl. Sci. 2022, 12, 5601. [Google Scholar] [CrossRef]
  22. Juge, A.E.; Cooke, R.F.; Ceja, G.; Matt, M.; Daigle, C.L. Comparison of Physiological Markers, Behavior Monitoring, and Clinical Illness Scoring as Indicators of an Inflammatory Response in Beef Cattle. PLoS ONE 2024, 19, e0302172. [Google Scholar] [CrossRef] [PubMed]
  23. Carroll, B.T.; Anderson, D.V.; Daley, W.; Harbert, S.; Britton, D.F.; Jackwood, M.W. Detecting Symptoms of Diseases in Poultry through Audio Signal Processing. In Proceedings of the 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Atlanta, GA, USA, 3–5 December 2014; pp. 1132–1135. [Google Scholar]
  24. Rizwan, M.; Carroll, B.T.; Anderson, D.V.; Daley, W.; Harbert, S.; Britton, D.F.; Jackwood, M.W. Identifying Rale Sounds in Chickens Using Audio Signals for Early Disease Detection in Poultry. In Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Washington, DC, USA, 7–9 December 2016; pp. 55–59. [Google Scholar]
  25. Adebayo, S.; Aworinde, H.O.; Akinwunmi, A.O.; Alabi, O.M.; Ayandiji, A.; Sakpere, A.B.; Adeyemo, A.; Oyebamiji, A.K.; Olaide, O.; Kizito, E. Enhancing Poultry Health Management through Machine Learning-Based Analysis of Vocalization Signals Dataset. Data Brief 2023, 50, 109528. [Google Scholar] [CrossRef]
  26. Sun, Z.; Zhang, M.; Liu, J.; Wang, J.; Wu, Q.; Wang, G. Research on White Feather Broiler Health Monitoring Method Based on Sound Detection and Transfer Learning. Comput. Electron. Agric. 2023, 214, 108319. [Google Scholar] [CrossRef]
  27. Lagua, E.B.; Mun, H.-S.; Ampode, K.M.B.; Chem, V.; Kim, Y.-H.; Yang, C.-J. Artificial Intelligence for Automatic Monitoring of Respiratory Health Conditions in Smart Swine Farming. Animals 2023, 13, 1860. [Google Scholar] [CrossRef]
  28. Puig, A.; Ruiz, M.; Bassols, M.; Fraile, L.; Armengol, R. Technological Tools for the Early Detection of Bovine Respiratory Disease in Farms. Animals 2022, 12, 2623. [Google Scholar] [CrossRef]
  29. Huang, J.; Wang, W.; Zhang, T. Method for Detecting Avian Influenza Disease of Chickens Based on Sound Analysis. Biosyst. Eng. 2019, 180, 16–24. [Google Scholar] [CrossRef]
  30. Cuan, K.; Zhang, T.; Li, Z.; Huang, J.; Ding, Y.; Fang, C. Automatic Newcastle Disease Detection Using Sound Technology and Deep Learning Method. Comput. Electron. Agric. 2022, 194, 106740. [Google Scholar] [CrossRef]
  31. Sun, Z.; Tao, W.; Gao, M.; Zhang, M.; Song, S.; Wang, G. Broiler Health Monitoring Technology Based on Sound Features and Random Forest. Eng. Appl. Artif. Intell. 2024, 135, 108849. [Google Scholar] [CrossRef]
  32. Garrido, L.F.C.; Sato, S.T.M.; Costa, L.B.; Daros, R. Can We Reliably Detect Respiratory Diseases through Precision Farming? A Systematic Review. Anim. Open Access J. 2023, 13, 1273. [Google Scholar] [CrossRef] [PubMed]
  33. Hassan, E.; Elbedwehy, S.; Shams, M.Y.; Abd El-Hafeez, T.; El-Rashidy, N. Optimizing Poultry Audio Signal Classification with Deep Learning and Burn Layer Fusion. J. Big Data 2024, 11, 135. [Google Scholar] [CrossRef]
  34. Srivastava, A.; Jain, S.; Miranda, R.; Patil, S.; Pandya, S.; Kotecha, K. Deep Learning Based Respiratory Sound Analysis for Detection of Chronic Obstructive Pulmonary Disease. PeerJ Comput. Sci. 2021, 7, e369. [Google Scholar] [CrossRef]
  35. Alghamdi, N.S.; Zakariah, M.; Karamti, H. A Deep CNN-Based Acoustic Model for the Identification of Lung Diseases Utilizing Extracted MFCC Features from Respiratory Sounds. Multimed. Tools Appl. 2024, 83, 82871–82903. [Google Scholar] [CrossRef]
  36. Manikandan, V.; Neethirajan, S. Decoding Poultry Welfare from Sound—A Machine Learning Framework for Non-Invasive Acoustic Monitoring. Sensors 2025, 25, 2912. [Google Scholar] [CrossRef] [PubMed]
  37. Sfayyih, A.H.; Sulaiman, N.; Sabry, A.H. A Review on Lung Disease Recognition by Acoustic Signal Analysis with Deep Learning Networks. J. Big Data 2023, 10, 101. [Google Scholar] [CrossRef] [PubMed]
  38. Karaarslan, O.; Belcastro, K.D.; Ergen, O. Respiratory Sound-Base Disease Classification and Characterization with Deep/Machine Learning Techniques. Biomed. Signal Process. Control 2024, 87, 105570. [Google Scholar] [CrossRef]
  39. Kapetanidis, P.; Kalioras, F.; Tsakonas, C.; Tzamalis, P.; Kontogiannis, G.; Karamanidou, T.; Stavropoulos, T.G.; Nikoletseas, S. Respiratory Diseases Diagnosis Using Audio Analysis and Artificial Intelligence: A Systematic Review. Sensors 2024, 24, 1173. [Google Scholar] [CrossRef]
  40. Sabry, A.H.; Dallal Bashi, O.I.; Nik Ali, N.H.; Mahmood Al Kubaisi, Y. Lung Disease Recognition Methods Using Audio-Based Analysis with Machine Learning. Heliyon 2024, 10, e26218. [Google Scholar] [CrossRef]
  41. Rivas-Navarrete, J.A.; Pérez-Espinosa, H.; Padilla-Ortiz, A.L.; Rodríguez-González, A.Y.; García-Cambero, D.C. Edge Computing System for Automatic Detection of Chronic Respiratory Diseases Using Audio Analysis. J. Med. Syst. 2025, 49, 33. [Google Scholar] [CrossRef]
  42. Landry, V.; Matschek, J.; Pang, R.; Munipalle, M.; Tan, K.; Boruff, J.; Li-Jessen, N.Y.K. Audio-Based Digital Biomarkers in Diagnosing and Managing Respiratory Diseases: A Systematic Review and Bibliometric Analysis. Eur. Respir. Rev. 2025, 34, 240246. [Google Scholar] [CrossRef]
  43. Aworinde, H.; Adebayo, S.; Akinwunmi, A.; Alabi, O.; Ayandiji, A.; Oke, O.; Oyebamiji, A.; Adeyemo, A.; Sakpere, A.; Echetama, K. Poultry Vocalization Signal Dataset for Early Disease Detection. Mendeley Data 2023, 1. [Google Scholar] [CrossRef]
  44. Al-Natour, M.Q.; Rohaim, M.A.; El Naggar, R.F.; Abdelsabour, M.A.; Afify, A.F.; Madbouly, Y.M.; Munir, M. Respiratory Disease Complex Due to Mixed Viral Infections in Chicken in Jordan. Poult. Sci. 2024, 103, 103565. [Google Scholar] [CrossRef] [PubMed]
  45. Mehrabadi, M.F.; Moluoki, A.; Bashashati, M.; Rabiee, M.H.; Ghalyanchilangeroudi, A.; Shoushtari, A.; Motamed, N. Prevalence and Risk Factors of Viral Diseases in Broiler Farms Contracted to Multifactorial Respiratory Syndrome. J. Hell. Vet. Med. Soc. 2025, 76, 8637–8646. [Google Scholar] [CrossRef]
  46. Villegas, P. Viral Diseases of the Respiratory System. Poult. Sci. 1998, 77, 1143–1145. [Google Scholar] [CrossRef]
  47. Sun, Z.; Gao, M.; Wang, G.; Lv, B.; He, C.; Teng, Y.; Sun, Z.; Gao, M.; Wang, G.; Lv, B.; et al. Research on Evaluating the Filtering Method for Broiler Sound Signal from Multiple Perspectives. Animals 2021, 11, 2238. [Google Scholar] [CrossRef]
  48. Liu, L.; Li, B.; Zhao, R.; Yao, W.; Shen, M.; Yang, J. A Novel Method for Broiler Abnormal Sound Detection Using WMFCC and HMM. J. Sens. 2020, 2020, 2985478. [Google Scholar] [CrossRef]
  49. Carpentier, L.; Vranken, E.; Berckmans, D.; Paeshuyse, J.; Norton, T. Development of Sound-Based Poultry Health Monitoring Tool for Automated Sneeze Detection. Comput. Electron. Agric. 2019, 162, 573–581. [Google Scholar] [CrossRef]
  50. Jin, Y.; Liu, Z.; Hu, C.; Dong, Z.; Rong, R.; Liu, H.; Liang, Z.; Liu, J.; Chen, L.; Huang, M.; et al. Study on the Flow Mechanism and Frequency Characteristics of Rales in Lower Respiratory Tract. Biomech. Model. Mechanobiol. 2024, 23, 227–239. [Google Scholar] [CrossRef]
  51. Mohamed, M.; Yassmina, B.; Rim, R.; Mouna, E.K.; Siham, F. First Detection of D181 Genotype of Infectious Bronchitis in Poultry Flocks of Morocco. Virol. J. 2024, 21, 267. [Google Scholar] [CrossRef]
  52. Mahdavian, A.; Minaei, S.; Marchetto, P.M.; Almasganj, F.; Rahimi, S.; Yang, C. Acoustic Features of Vocalization Signal in Poultry Health Monitoring. Appl. Acoust. 2021, 175, 107756. [Google Scholar] [CrossRef]
  53. Pereira, E.; Nääs, I.d.A.; Ivale, A.H.; Garcia, R.G.; Lima, N.D.d.S.; Pereira, D.F.; Pereira, E.; Nääs, I.d.A.; Ivale, A.H.; Garcia, R.G.; et al. Energy Assessment from Broiler Chicks’ Vocalization Might Help Improve Welfare and Production. Animals 2022, 13, 15. [Google Scholar] [CrossRef]
  54. Oso, O.M.; Mejia-Abaunza, N.; Bodempudi, V.U.C.; Chen, X.; Chen, C.; Aggrey, S.E.; Li, G. Automatic Analysis of High, Medium, and Low Activities of Broilers with Heat Stress Operations via Image Processing and Machine Learning. Poult. Sci. 2025, 104, 104954. [Google Scholar] [CrossRef] [PubMed]
  55. Thomas, P.; Grzywalski, T.; Hou, Y.; Soster de Carvalho, P.; De Gussem, M.; Antonissen, G.; Tuyttens, F.; De Poorter, E.; Devos, P.; Botteldooren, D. Broiler Chicken Vocalization Analysis during a Medium-Scale Heat Stress Experiment. In 2024 INTER-NOISE and NOISE-CON Congress and Conference Proceedings; Institute of Noise Control Engineering: Wakefield, MA, USA, 2024; Volume 270, pp. 9537–9548. [Google Scholar] [CrossRef]
  56. Soster de Carvalho, P.; Grzywalski, T.; Buyse, K.; Thomas, P.; Carvalho, C.L.; Khan, I.; Khalfi, B.; Tuyttens, F.; De Gussem, M.; Devos, P.; et al. Influence of Age, Time of Day, and Environmental Changes on Vocalization Patterns in Broiler Chickens. Poult. Sci. 2025, 104, 105298. [Google Scholar] [CrossRef] [PubMed]
  57. Tucciarone, C.M.; Franzo, G.; Lupini, C.; Alejo, C.T.; Listorti, V.; Mescolini, G.; Brandão, P.E.; Martini, M.; Catelli, E.; Cecchinato, M. Avian Metapneumovirus Circulation in Italian Broiler Farms. Poult. Sci. 2018, 97, 503–509. [Google Scholar] [CrossRef]
  58. Fontana, I.; Tullo, E.; Scrase, A.; Butterworth, A. Vocalisation Sound Pattern Identification in Young Broiler Chickens. Animal 2016, 10, 1567–1574. [Google Scholar] [CrossRef]
  59. Chen, J.; Jin, A.; Huang, L.; Zhao, Y.; Li, Y.; Zhang, H.; Yang, X.; Sun, Q. Dynamic Changes in Lung Microbiota of Broilers in Response to Aging and Ammonia Stress. Front. Microbiol. 2021, 12, 696913. [Google Scholar] [CrossRef]
  60. Shen, D.; Wang, K.; Fathi, M.A.; Li, Y.; Win-Shwe, T.-T.; Li, C. A Succession of Pulmonary Microbiota in Broilers during the Growth Cycle. Poult. Sci. 2023, 102, 102884. [Google Scholar] [CrossRef] [PubMed]
  61. Tickle, P.G.; Paxton, H.; Rankin, J.W.; Hutchinson, J.R.; Codd, J.R. Anatomical and Biomechanical Traits of Broiler Chickens across Ontogeny. Part I. Anatomy of the Musculoskeletal Respiratory Apparatus and Changes in Organ Size. PeerJ 2014, 2, e432. [Google Scholar] [CrossRef]
  62. Del Vesco, A.P.; Khatlab, A.S.; Goes, E.S.R.; Utsunomiya, K.S.; Vieira, J.S.; Oliveira Neto, A.R.; Gasparino, E. Age-Related Oxidative Stress and Antioxidant Capacity in Heat-Stressed Broilers. Animal 2017, 11, 1783–1790. [Google Scholar] [CrossRef]
  63. Wu, X.-Y.; Wang, F.-Y.; Chen, H.-X.; Dong, H.-L.; Zhao, Z.-Q.; Si, L.-F. Chronic Heat Stress Induces Lung Injury in Broiler Chickens by Disrupting the Pulmonary Blood-Air Barrier and Activating TLRs/NF-κB Signaling Pathway. Poult. Sci. 2023, 102, 103066. [Google Scholar] [CrossRef]
  64. Chen, H.; Wang, F.; Wu, X.; Yuan, S.; Dong, H.; Zhou, C.; Feng, S.; Zhao, Z.; Si, L. Chronic Heat Stress Induces Oxidative Stress and Induces Inflammatory Injury in Broiler Spleen via TLRs/MyD88/NF-κB Signaling Pathway in Broilers. Vet. Sci. 2024, 11, 293. [Google Scholar] [CrossRef]
  65. Ginovart-Panisello, G.J.; Iriondo Sanz, I.; Panisello Monjo, T.; Riva, S.; Garriga Dicuzzo, T.; Abancens Escuer, E.; Alsina-Pagès, R.M. Trend and Representativeness of Acoustic Features of Broiler Chicken Vocalisations Related to CO2. Appl. Sci. 2022, 12, 10480. [Google Scholar] [CrossRef]
  66. Ginovart-Panisello, G.J.; Alsina-Pagès, R.M.; Sanz, I.I.; Monjo, T.P.; Prat, M.C. Acoustic Description of the Soundscape of a Real-Life Intensive Farm and Its Impact on Animal Welfare: A Preliminary Analysis of Farm Sounds and Bird Vocalisations. Sensors 2020, 20, 4732. [Google Scholar] [CrossRef]
  67. Herborn, K.A.; McElligott, A.G.; Mitchell, M.A.; Sandilands, V.; Bradshaw, B.; Asher, L. Spectral Entropy of Early-Life Distress Calls as an Iceberg Indicator of Chicken Welfare. J. R. Soc. Interface 2020, 17, 20200086. [Google Scholar] [CrossRef] [PubMed]
  68. Ginovart-Panisello, G.J.; Iriondo, I.; Monjo, T.P.; Riva, S.; Garcia, R.; Valls, J.; Alsina-Pagès, R.M. Acoustic Detection of the Effects of Prolonged Fasting on Newly Hatched Broiler Chickens. Comput. Electron. Agric. 2024, 219, 108763. [Google Scholar] [CrossRef]
  69. Hartcher, K.M.; Lum, H.K. Genetic Selection of Broilers and Welfare Consequences: A Review. Worlds Poult. Sci. J. 2020, 76, 154–167. [Google Scholar] [CrossRef]
  70. Lev-ron, T.; Yitzhaky, Y.; Halachmi, I.; Druyan, S. Classifying Vocal Responses of Broilers to Environmental Stressors via Artificial Neural Network. Animal 2025, 19, 101378. [Google Scholar] [CrossRef] [PubMed]
  71. Tainika, B.; Şekeroğlu, A.; Akyol, A.; Waithaka Ng’ang’a, Z. Welfare Issues in Broiler Chickens: Overview. Worlds Poult. Sci. J. 2023, 79, 285–329. [Google Scholar] [CrossRef]
  72. Kwon, B.-Y.; Park, J.; Kim, D.-H.; Lee, K.-W. Assessment of Welfare Problems in Broilers: Focus on Musculoskeletal Problems Associated with Their Rapid Growth. Animals 2024, 14, 1116. [Google Scholar] [CrossRef]
  73. Korver, D.R. Review: Current Challenges in Poultry Nutrition, Health, and Welfare. Animal 2023, 17, 100755. [Google Scholar] [CrossRef]
  74. EFSA AHAW Panel (EFSA Panel on Animal Health and Welfare); Nielsen, S.S.; Alvarez, J.; Bicout, D.J.; Calistri, P.; Canali, E.; Drewe, J.A.; Garin-Bastuji, B.; Gonzales Rojas, J.L.; Schmidt, C.G.; et al. Welfare of Broilers on Farm. EFSA J. 2023, 21, e07788. [Google Scholar] [CrossRef]
  75. de Jong, I.C.; Kar, S.K.; Kaspers, B. Frontiers in Broiler Chicken Welfare: Adopting Early Detection of Intestinal Integrity Loss in Broiler Welfare Assessment Protocols. Front. Vet. Sci. 2025, 12, 1593737. [Google Scholar] [CrossRef]
  76. Goller, F. Respiratory Contributions to Birdsong—Evolutionary Considerations and Open Questions. Philos. Trans. R. Soc. B Biol. Sci. 2025, 380, 20230431. [Google Scholar] [CrossRef] [PubMed]
  77. Casteleyn, C.; Cornillie, P.; Van Cruchten, S.; Van den Broeck, W.; Van Ginneken, C.; Simoens, P. Anatomy of the Upper Respiratory Tract in Domestic Birds, with Emphasis on Vocalization. Anat. Histol. Embryol. 2018, 47, 100–109. [Google Scholar] [CrossRef]
  78. Gaunt, A.S.; Gaunt, S.L.L.; Hector, D.H. Mechanics of the Syrinx in Gallus Gallus. I. A Comparison of Pressure Events in Chickens to Those in Oscines. Condor Ornithol. Appl. 1976, 78, 208–223. [Google Scholar] [CrossRef]
  79. Elemans, C.P.H.; Rasmussen, J.H.; Herbst, C.T.; Düring, D.N.; Zollinger, S.A.; Brumm, H.; Srivastava, K.; Svane, N.; Ding, M.; Larsen, O.N.; et al. Universal Mechanisms of Sound Production and Control in Birds and Mammals. Nat. Commun. 2015, 6, 8978. [Google Scholar] [CrossRef]
  80. Riede, T.; Goller, F. Peripheral Mechanisms for Vocal Production in Birds—Differences and Similarities to Human Speech and Singing. Brain Lang. 2010, 115, 69–80. [Google Scholar] [CrossRef] [PubMed]
  81. Nojiri, T.; Tobari, Y.; Furutera, T.; Ichimura, K.; Takechi, M. A Comparative Developmental Study of the Avian Syrinx: Insights into the Homology of the Sound-Producing Muscles in Birds. J. Anat. 2025, 246, 444–455. [Google Scholar] [CrossRef]
  82. Shawulu, J. Preliminary Qualitative Evaluation of the Anatomical Structures for Vocalization in the Chicken (Gallus gallus domestica). J. Vet. Biomed. Sci. 2021, 3, 13–18. [Google Scholar] [CrossRef]
  83. Oliveira, E.L.R.; Zuliani, F.; de Camargo, G.C.; Desantis, S.; Schimming, B.C. Morphology of the Syrinx of Three Species of Birds from Brazilian Cerrado (Psittacara Leucophthalmus, Rhynchotus Rufescens and Cariama Cristata): Gross Anatomy and Light Microscopy Study. Anat. Histol. Embryol. 2023, 52, 827–835. [Google Scholar] [CrossRef]
  84. Fournier, M.; Olson, R.; Van Wassenbergh, S.; Provini, P. The Avian Vocal System: 3D Reconstruction Reveals Upper Vocal Tract Elongation during Head Motion. J. Exp. Biol. 2024, 227, jeb247945. [Google Scholar] [CrossRef]
  85. Harris, C.L.; Gross, W.B.; Robeson, A. Vocal Acoustics of the Chicken. Poult. Sci. 1968, 47, 107–112. [Google Scholar] [CrossRef]
  86. Brackenbury, J.H. Respiratory Mechanics of Sound Production in Chickens and Geese. J. Exp. Biol. 1978, 72, 229–250. [Google Scholar] [CrossRef]
  87. van den Heuvel, H.; Youssef, A.; Grat, L.M.; Neethirajan, S. Quantifying the Effect of an Acute Stressor in Laying Hens Using Thermographic Imaging and Vocalisations. bioRxiv 2022. bioRxiv:2022.07.31.502171. [Google Scholar] [CrossRef]
  88. Neethirajan, S. Vocalization Patterns in Laying Hens—An Analysis of Stress-Induced Audio Responses. bioRxiv 2024. bioRxiv:2023.12.26.573338. [Google Scholar]
  89. Bermant, G. Intensity and Rate of Distress Calling in Chicks as a Function of Social Contact. Anim. Behav. 1963, 11, 514–517. [Google Scholar] [CrossRef]
  90. Tao, W.; Sun, Z.; Wang, G.; Xiao, S.; Liang, B.; Zhang, M.; Song, S. Broiler Sound Signal Filtering Method Based on Improved Wavelet Denoising and Effective Pulse Extraction. Comput. Electron. Agric. 2024, 221, 108948. [Google Scholar] [CrossRef]
  91. Tao, W.; Wang, G.; Sun, Z.; Xiao, S.; Wu, Q.; Zhang, M. Recognition Method for Broiler Sound Signals Based on Multi-Domain Sound Features and Classification Model. Sensors 2022, 22, 7935. [Google Scholar] [CrossRef] [PubMed]
  92. Tao, W.; Wang, G.; Sun, Z.; Xiao, S.; Pan, L.; Wu, Q.; Zhang, M. Feature Optimization Method for White Feather Broiler Health Monitoring Technology. Eng. Appl. Artif. Intell. 2023, 123, 106372. [Google Scholar] [CrossRef]
  93. Sun, Z.; Gao, M.; Zhang, M.; Lv, M.; Wang, G. Research on Recognition Method of Broiler Overlapping Sounds Based on Random Forest and Confidence Interval. Comput. Electron. Agric. 2023, 209, 107801. [Google Scholar] [CrossRef]
  94. Emmanouilidou, D.; McCollum, E.D.; Park, D.E.; Elhilali, M. Computerized Lung Sound Screening for Pediatric Auscultation in Noisy Field Environments. IEEE Trans. Biomed. Eng. 2018, 65, 1564–1574. [Google Scholar] [CrossRef]
  95. Yang, X.; Zhao, Y.; Qi, H.; Tabler, G.T. Characterizing Sounds of Different Sources in a Commercial Broiler House. Animals 2021, 11, 916. [Google Scholar] [CrossRef] [PubMed]
  96. Soster, P.d.C.; Grzywalski, T.; Hou, Y.; Thomas, P.; Dedeurwaerder, A.; De Gussem, M.; Tuyttens, F.; Devos, P.; Botteldooren, D.; Antonissen, G. Automated Detection of Broiler Vocalizations a Machine Learning Approach for Broiler Chicken Vocalization Monitoring. Poult. Sci. 2025, 104, 104962. [Google Scholar] [CrossRef] [PubMed]
  97. Du, X.; Carpentier, L.; Teng, G.; Liu, M.; Wang, C.; Norton, T. Assessment of Laying Hens’ Thermal Comfort Using Sound Technology. Sensors 2020, 20, 473. [Google Scholar] [CrossRef] [PubMed]
  98. Li, N.; Ren, Z.; Li, D.; Zeng, L. Review: Automated Techniques for Monitoring the Behaviour and Welfare of Broilers and Laying Hens: Towards the Goal of Precision Livestock Farming. Animal 2020, 14, 617–625. [Google Scholar] [CrossRef]
  99. Du, X.; Lao, F.; Teng, G. A Sound Source Localisation Analytical Method for Monitoring the Abnormal Night Vocalisations of Poultry. Sensors 2018, 18, 2906. [Google Scholar] [CrossRef]
  100. Mcloughlin, M.P.; Stewart, R.; McElligott, A.G. Automated Bioacoustics: Methods in Ecology and Conservation and Their Potential for Animal Welfare Monitoring. J. R. Soc. Interface 2019, 16, 20190225. [Google Scholar] [CrossRef]
  101. Sun, Z.; Zhang, M.; Liu, J.; Wu, Q.; Wang, J.; Wang, G. Research on Filtering and Classification Method for White-Feather Broiler Sound Signals Based on Sparse Representation. Eng. Appl. Artif. Intell. 2024, 127, 107348. [Google Scholar] [CrossRef]
  102. Yang, X.; Zhao, Y.; Qi, H.; Tabler, T. Characterizing Sounds under Commercial Broiler Environment. In 2021 ASABE Annual International Virtual Meeting; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2021; p. 1. [Google Scholar] [CrossRef]
  103. Aydin, A.; Berckmans, D. Using Sound Technology to Automatically Detect the Short-Term Feeding Behaviours of Broiler Chickens. Comput. Electron. Agric. 2016, 121, 25–31. [Google Scholar] [CrossRef]
  104. Darras, K.; Batáry, P.; Furnas, B.J.; Grass, I.; Mulyani, Y.A.; Tscharntke, T. Autonomous Sound Recording Outperforms Human Observation for Sampling Birds: A Systematic Map and User Guide. Ecol. Appl. 2019, 29, e01954. [Google Scholar] [CrossRef]
  105. Sugai, L.S.M.; Desjonquères, C.; Silva, T.S.F.; Llusia, D. A Roadmap for Survey Designs in Terrestrial Acoustic Monitoring. Remote Sens. Ecol. Conserv. 2020, 6, 220–235. [Google Scholar] [CrossRef]
  106. MacPhail, A.G.; Yip, D.A.; Knight, E.C.; Hedley, R.; Knaggs, M.; Shonfield, J.; Upham-Mills, E.; Bayne, E.M. Audio Data Compression Affects Acoustic Indices and Reduces Detections of Birds by Human Listening and Automated Recognisers. Bioacoustics 2024, 33, 74–90. [Google Scholar] [CrossRef]
  107. Fontana, I.; Tullo, E.; Butterworth, A. 5.3. The Use of Vocalisation Sounds to Assess Responses of Broiler Chicken to Environmental Variables. In Precision Livestock Farming Applications; Wageningen Academic: Wageningen, The Netherlands, 2015; pp. 187–198. ISBN 978-90-8686-815-5. [Google Scholar]
  108. Fontana, I.; Tullo, E.; Butterworth, A.; Guarino, M. An Innovative Approach to Predict the Growth in Intensive Poultry Farming. Comput. Electron. Agric. 2015, 119, 178–183. [Google Scholar] [CrossRef]
  109. Goo, D.; Kim, J.H.; Choi, H.S.; Park, G.H.; Han, G.P.; Kil, D.Y. Effect of Stocking Density and Sex on Growth Performance, Meat Quality, and Intestinal Barrier Function in Broiler Chickens. Poult. Sci. 2019, 98, 1153–1160. [Google Scholar] [CrossRef] [PubMed]
  110. Gao, X.; Gong, J.; Yang, B.; Liu, Y.; Xu, H.; Hao, Y.; Jing, J.; Feng, Z.; Li, L. Effect of Classical Music on Growth Performance, Stress Level, Antioxidant Index, Immune Function and Meat Quality in Broilers at Different Stocking Densities. Front. Vet. Sci. 2023, 10, 1227654. [Google Scholar] [CrossRef]
  111. van Osta, J.M.; Dreis, B.; Meyer, E.; Grogan, L.F.; Castley, J.G. An Active Learning Framework and Assessment of Inter-Annotator Agreement Facilitate Automated Recogniser Development for Vocalisations of a Rare Species, the Southern Black-Throated Finch (Poephila Cincta Cincta). Ecol. Inform. 2023, 77, 102233. [Google Scholar] [CrossRef]
  112. Turlington, K.; Suárez-Castro, A.F.; Teixeira, D.; Linke, S.; Sheldon, F. A Novel Protocol for Exploratory Analysis of Unknown Sound-Types in Large Acoustic Datasets. Methods Ecol. Evol. 2025, 16, 2487–2499. [Google Scholar] [CrossRef]
  113. Mahon, L.; Hoffman, B.; James, L.; Cusimano, M.; Hagiwara, M.; Woolley, S.C.; Effenberger, F.; Keen, S.; Liu, J.-Y.; Pietquin, O. Robust Detection of Overlapping Bioacoustic Sound Events. arXiv 2025, arXiv:2503.02389. [Google Scholar] [CrossRef]
  114. Fontana, I.; Tullo, E.; Carpentier, L.; Berckmans, D.; Butterworth, A.; Vranken, E.; Norton, T.; Berckmans, D.; Guarino, M. Sound Analysis to Model Weight of Broiler Chickens. Poult. Sci. 2017, 96, 3938–3943. [Google Scholar] [CrossRef]
  115. Ginovart-Panisello, G.J.; Alsina-Pagès, R.M. Preliminary Acoustic Analysis of Farm Management Noise and Its Impact on Broiler Welfare. Proceedings 2019, 42, 83. [Google Scholar] [CrossRef]
  116. Aydin, A.; Bahr, C.; Berckmans, D. A Real-Time Monitoring Tool to Automatically Measure the Feed Intakes of Multiple Broiler Chickens by Sound Analysis. Comput. Electron. Agric. 2015, 114, 1–6. [Google Scholar] [CrossRef]
  117. Amirivojdan, A.; Nasiri, A.; Zhou, S.; Zhao, Y.; Gan, H. ChickenSense: A Low-Cost Deep Learning-Based Solution for Poultry Feed Consumption Monitoring Using Sound Technology. AgriEngineering 2024, 6, 2115–2129. [Google Scholar] [CrossRef]
  118. Moreno-Blanco, M.; Vidaña-Vila, E.; Panisello, T.; Riva, S.; García, J.M.; Alsina-Pagès, R.M.; Ginovart-Panisello, G.J. Preliminary Results of an Acoustic Description of Broiler Chickens Vocalizations in a Commercial Hatchery. Smart Agric. Technol. 2025, 12, 101310. [Google Scholar] [CrossRef]
  119. Talha, M.; Ghafoor, H.; Nam, S.Y. A Unified Approach to Voice Classification: Leveraging Spectrograms, Mel Spectrograms, and Statistical Features. IEEE Access 2025, 13, 133827–133836. [Google Scholar] [CrossRef]
  120. Wolf-Monheim, F. Spectral and Rhythm Feature Performance Evaluation for Category and Class Level Audio Classification with Deep Convolutional Neural Networks. arXiv 2025, arXiv:2509.07756. [Google Scholar] [CrossRef]
  121. Wolf-Monheim, F. Spectral and Rhythm Features for Audio Classification with Deep Convolutional Neural Networks. arXiv 2024, arXiv:2410.06927. [Google Scholar] [CrossRef]
  122. Mannem, K.R.; Mengiste, E.; Hasan, S.; de Soto, B.G.; Sacks, R. Smart Audio Signal Classification for Tracking of Construction Tasks. Autom. Constr. 2024, 165, 105485. [Google Scholar] [CrossRef]
  123. Wu, X.; Zhou, S.; Chen, M.; Zhao, Y.; Wang, Y.; Zhao, X.; Li, D.; Pu, H. Combined Spectral and Speech Features for Pig Speech Recognition. PLoS ONE 2022, 17, e0276778. [Google Scholar] [CrossRef]
  124. Pereira, L.M.; Salazar, A.; Vergara, L. A Comparative Analysis of Early and Late Fusion for the Multimodal Two-Class Problem. IEEE Access 2023, 11, 84283–84300. [Google Scholar] [CrossRef]
  125. Mervitz, J.H.; de Villiers, J.P.; Jacobs, J.P.; Kloppers, M.H.O. Comparison of Early and Late Fusion Techniques for Movie Trailer Genre Labelling. In Proceedings of the 2020 IEEE 23rd International Conference on Information Fusion (FUSION), Rustenburg, South Africa, 6–9 July 2020; pp. 1–8. [Google Scholar]
  126. Nykoniuk, M.; Basystiuk, O.; Shakhovska, N.; Melnykova, N. Multimodal Data Fusion for Depression Detection Approach. Computation 2025, 13, 9. [Google Scholar] [CrossRef]
  127. Saeed, N.; Alam, M.; Nyberg, R.G. A Multimodal Deep Learning Approach for Gravel Road Condition Evaluation through Image and Audio Integration. Transp. Eng. 2024, 16, 100228. [Google Scholar] [CrossRef]
  128. Galanakis, I.; Soldatos, R.F.; Karanikolas, N.; Voulodimos, A.; Voyiatzis, I.; Samarakou, M. Early and Late Fusion for Multimodal Aggression Prediction in Dementia Patients: A Comparative Analysis. Appl. Sci. 2025, 15, 5823. [Google Scholar] [CrossRef]
  129. Alamir, M.A. A Novel Acoustic Scene Classification Model Using the Late Fusion of Convolutional Neural Networks and Different Ensemble Classifiers. Appl. Acoust. 2021, 175, 107829. [Google Scholar] [CrossRef]
  130. Xie, J.; Zhu, M. Handcrafted Features and Late Fusion with Deep Learning for Bird Sound Classification. Ecol. Inform. 2019, 52, 74–81. [Google Scholar] [CrossRef]
  131. Huang, J. An Analytical Method for Recognizing Cat Meowing States Utilizing Short-Time Fourier Transform and Vision Transformers. Membr. Technol. 2025, 2024, 132–147. [Google Scholar] [CrossRef]
  132. Lu, D.-Y.; Ding, J.-J. High-Order Synchrosqueezing Transforms and Its Application for Highly Accurate Animal Voice Analysis. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), Penghu, Taiwan, 15–17 September 2021; pp. 1–2. [Google Scholar]
  133. Jancovich, B.A.; Rogers, T.L. BASSA: New Software Tool Reveals Hidden Details in Visualisation of Low-Frequency Animal Sounds. Ecol. Evol. 2024, 14, e11636. [Google Scholar] [CrossRef]
  134. Banakar, A.; Sadeghi, M.; Shushtari, A. An Intelligent Device for Diagnosing Avian Diseases: Newcastle, Infectious Bronchitis, Avian Influenza. Comput. Electron. Agric. 2016, 127, 744–753. [Google Scholar] [CrossRef]
  135. Abdul, Z.K.; Al-Talabani, A.K. Mel Frequency Cepstral Coefficient and Its Applications: A Review. IEEE Access 2022, 10, 122136–122158. [Google Scholar] [CrossRef]
  136. Lee, C.-H.; Chou, C.-H.; Han, C.-C.; Huang, R.-Z. Automatic Recognition of Animal Vocalizations Using Averaged MFCC and Linear Discriminant Analysis. Pattern Recognit. Lett. 2006, 27, 93–101. [Google Scholar] [CrossRef]
  137. Bishop, J.C.; Falzon, G.; Trotter, M.; Kwan, P.; Meek, P.D. Livestock Vocalisation Classification in Farm Soundscapes. Comput. Electron. Agric. 2019, 162, 531–542. [Google Scholar] [CrossRef]
  138. Cheema, M.; Basker, V.R. Animal Sound Classification for Wildlife Surveillance. In Proceedings of the 2025 International Conference on Emerging Systems and Intelligent Computing (ESIC), Bhubaneswar, India, 8–9 February 2025; pp. 497–501. [Google Scholar]
  139. Manikandan, V.; Neethirajan, S. AI-Powered Vocalization Analysis in Poultry: Systematic Review of Health, Behavior, and Welfare Monitoring. Sensors 2025, 25, 4058. [Google Scholar] [CrossRef]
  140. Han, N.C.; Muniandy, S.V.; Dayou, J. Acoustic Classification of Australian Anurans Based on Hybrid Spectral-Entropy Approach. Appl. Acoust. 2011, 72, 639–645. [Google Scholar] [CrossRef]
  141. Rademan, M.W.; Versfeld, D.J.J.; du Preez, J.A. Soft-Output Signal Detection for Cetacean Vocalizations Using Spectral Entropy, k-Means Clustering and the Continuous Wavelet Transform. Ecol. Inform. 2023, 74, 101990. [Google Scholar] [CrossRef]
  142. Huang, J.; Zhang, T.; Cuan, K.; Fang, C. An Intelligent Method for Detecting Poultry Eating Behaviour Based on Vocalization Signals. Comput. Electron. Agric. 2021, 180, 105884. [Google Scholar] [CrossRef]
  143. Colonna, J.G.; Cristo, M.; Salvatierra, M.; Nakamura, E.F. An Incremental Technique for Real-Time Bioacoustic Signal Segmentation. Expert Syst. Appl. 2015, 42, 7367–7374. [Google Scholar] [CrossRef]
  144. Marck, A.; Vortman, Y.; Kolodny, O.; Lavner, Y. Identification, Analysis and Characterization of Base Units of Bird Vocal Communication: The White Spectacled Bulbul (Pycnonotus xanthopygos) as a Case Study. Front. Behav. Neurosci. 2022, 15, 812939. [Google Scholar] [CrossRef]
  145. Toledo-Pérez, D.C.; Rodríguez-Reséndiz, J.; Gómez-Loenzo, R.A. A Study of Computing Zero Crossing Methods and an Improved Proposal for EMG Signals. IEEE Access 2020, 8, 8783–8790. [Google Scholar] [CrossRef]
  146. Ginovart-Panisello, G.J.; Iriondo, I.; Panisello Monjo, T.; Riva, S.; Cancer, J.C.; Alsina-Pagès, R.M. Acoustic Detection of Vaccine Reactions in Hens for Assessing Anti-Inflammatory Product Efficacy. Appl. Sci. 2024, 14, 2156. [Google Scholar] [CrossRef]
  147. Hossain, M.M.; Han, T.A.; Ara, S.S.; Shamszaman, Z.U. Benchmarking Classical, Deep, and Generative Models for Human Activity Recognition. arXiv 2025, arXiv:2501.08471. [Google Scholar] [CrossRef]
  148. Wang, P.; Fan, E.; Wang, P. Comparative Analysis of Image Classification Algorithms Based on Traditional Machine Learning and Deep Learning. Pattern Recognit. Lett. 2021, 141, 61–67. [Google Scholar] [CrossRef]
  149. Usherwood, P.; Smit, S. Low-Shot Classification: A Comparison of Classical and Deep Transfer Machine Learning Approaches. arXiv 2019, arXiv:1907.07543. [Google Scholar] [CrossRef]
  150. Mishra, A.; Akhtar, R. Music Genre Classification Using Machine Learning Techniques. arXiv 2025, arXiv:2509.01762. [Google Scholar] [CrossRef]
  151. Zaman, K.; Sah, M.; Direkoglu, C.; Unoki, M. A Survey of Audio Classification Using Deep Learning. IEEE Access 2023, 11, 106620–106649. [Google Scholar] [CrossRef]
  152. Lhoest, L.; Lamrini, M.; Vandendriessche, J.; Wouters, N.; da Silva, B.; Chkouri, M.Y.; Touhafi, A. MosAIc: A Classical Machine Learning Multi-Classifier Based Approach against Deep Learning Classifiers for Embedded Sound Classification. Appl. Sci. 2021, 11, 8394. [Google Scholar] [CrossRef]
  153. Fabian, C.L.; Rosales, M.; Orillaza, N.; Hizon, J.R. Evaluating Classical Machine Learning Algorithms for Multimodal Human Activity Recognition. In Proceedings of the 2025 International Conference on Advanced Machine Learning and Data Science (AMLDS), Tokyo, Japan, 19–21 July 2025; pp. 428–432. [Google Scholar]
  154. Maissae, H.; Abdelaziz, B. Forest-ORE: Mining Optimal Rule Ensemble to Interpret Random Forest Models. arXiv 2024, arXiv:2403.17588. [Google Scholar] [CrossRef]
  155. Hu, J.; Szymczak, S. A Review on Longitudinal Data Analysis with Random Forest. Brief. Bioinform. 2023, 24, bbad002. [Google Scholar] [CrossRef]
  156. Ziegler, A.; König, I.R. Mining Data with Random Forests: Current Options for Real-World Applications. WIREs Data Min. Knowl. Discov. 2014, 4, 55–63. [Google Scholar] [CrossRef]
  157. Kinasih, A.N.S.; Handayani, A.N.; Ardiansah, J.T.; Damanhuri, N.S. Comparative Analysis of Decision Tree and Random Forest Classifiers for Structured Data Classification in Machine Learning. Sci. Inf. Technol. Lett. 2024, 5, 13–24. [Google Scholar] [CrossRef]
  158. Desta, M.; Mulu, C.; Mohammed, A. Understanding the Decision-Making Process of Music Genre Classification with Explainable Algorithms. Multimed. Res. 2025, 8, 1–10. [Google Scholar]
  159. Hsu, F.-S.; Huang, S.-R.; Huang, C.-W.; Huang, C.-J.; Cheng, Y.-R.; Chen, C.-C.; Hsiao, J.; Chen, C.-W.; Chen, L.-C.; Lai, Y.-C.; et al. Benchmarking of Eight Recurrent Neural Network Variants for Breath Phase and Adventitious Sound Detection on a Self-Developed Open-Access Lung Sound Database—HF_Lung_V1. PLoS ONE 2021, 16, e0254134. [Google Scholar] [CrossRef]
  160. Chu, Y.; Wang, Q.; Zhou, E.; Liu, Q.; Zheng, G. EZhouNet: A Framework Based on Graph Neural Network and Anchor Interval for the Respiratory Sound Event Detection. Biomed. Signal Process. Control 2026, 112, 108491. [Google Scholar] [CrossRef]
  161. Acharya, J.; Basu, A. Deep Neural Network for Respiratory Sound Classification in Wearable Devices Enabled by Patient Specific Model Tuning. IEEE Trans. Biomed. Circuits Syst. 2020, 14, 535–544. [Google Scholar] [CrossRef]
  162. Le, K.-N.T.; Byun, G.; Raza, S.M.; Le, D.-T.; Choo, H. Respiratory Anomaly and Disease Detection Using Multi-Level Temporal Convolutional Networks. IEEE J. Biomed. Health Inform. 2025, 29, 4834–4846. [Google Scholar] [CrossRef]
  163. Fernando, T.; Sridharan, S.; Denman, S.; Ghaemmaghami, H.; Fookes, C. Robust and Interpretable Temporal Convolution Network for Event Detection in Lung Sound Recordings. IEEE J. Biomed. Health Inform. 2022, 26, 2898–2908. [Google Scholar] [CrossRef]
  164. Xie, Y.; Wang, J.; Chen, C.; Yin, T.; Yang, S.; Li, Z.; Zhang, Y.; Ke, J.; Song, L.; Gan, L. Sound Identification of Abnormal Pig Vocalizations: Enhancing Livestock Welfare Monitoring on Smart Farms. Inf. Process. Manag. 2024, 61, 103770. [Google Scholar] [CrossRef]
  165. Gong, Y.; Chung, Y.-A.; Glass, J. AST: Audio Spectrogram Transformer. arXiv 2021, arXiv:2104.01778. [Google Scholar] [CrossRef]
  166. Manikandan, V.; Neethirajan, S. Decoding Poultry Vocalizations—Natural Language Processing and Transformer Models for Semantic and Emotional Analysis. bioRxiv 2024. bioRxiv:2024.12.18.629057. [Google Scholar]
  167. Choi, H.; Zhang, L.; Watkins, C. Dual Representations: A Novel Variant of Self-Supervised Audio Spectrogram Transformer with Multi-Layer Feature Fusion and Pooling Combinations for Sound Classification. Neurocomputing 2025, 623, 129415. [Google Scholar] [CrossRef]
  168. Carvalho, M.; Pinho, A.J.; Brás, S. Resampling Approaches to Handle Class Imbalance: A Review from a Data Perspective. J. Big Data 2025, 12, 71. [Google Scholar] [CrossRef]
  169. Desuky, A.S.; Hussain, S. An Improved Hybrid Approach for Handling Class Imbalance Problem. Arab. J. Sci. Eng. 2021, 46, 3853–3864. [Google Scholar] [CrossRef]
  170. Kaope, C.; Pristyanto, Y. The Effect of Class Imbalance Handling on Datasets Toward Classification Algorithm Performance. MATRIK J. Manaj. Tek. Inform. Dan Rekayasa Komput. 2023, 22, 227–238. [Google Scholar] [CrossRef]
  171. Khan, A.A.; Chaudhari, O.; Chandra, R. A Review of Ensemble Learning and Data Augmentation Models for Class Imbalanced Problems: Combination, Implementation and Evaluation. Expert Syst. Appl. 2023, 244, 122778. [Google Scholar] [CrossRef]
  172. Buda, M.; Maki, A.; Mazurowski, M.A. A Systematic Study of the Class Imbalance Problem in Convolutional Neural Networks. Neural Netw. 2018, 106, 249–259. [Google Scholar] [CrossRef]
  173. Xu, Z.; Shen, D.; Kou, Y.; Nie, T. A Synthetic Minority Oversampling Technique Based on Gaussian Mixture Model Filtering for Imbalanced Data Classification. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 3740–3753. [Google Scholar] [CrossRef] [PubMed]
  174. Salazar, A.; Vergara, L.; Safont, G. Generative Adversarial Networks and Markov Random Fields for Oversampling Very Small Training Sets. Expert Syst. Appl. 2021, 163, 113819. [Google Scholar] [CrossRef]
  175. Dablain, D.; Krawczyk, B.; Chawla, N.V. DeepSMOTE: Fusing Deep Learning and SMOTE for Imbalanced Data. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 6390–6404. [Google Scholar] [CrossRef]
  176. Udu, A.G.; Salman, M.T.; Ghalati, M.K.; Lecchini-Visintini, A.; Siddle, D.R.; Dong, H. Emerging SMOTE and GAN Variants for Data Augmentation in Imbalance Machine Learning Tasks: A Review. IEEE Access 2025, 13, 113838–113853. [Google Scholar] [CrossRef]
  177. Wang, X.; Yin, Y.; Dai, X.; Shen, W.; Kou, S.; Dai, B. Automatic Detection of Continuous Pig Cough in a Complex Piggery Environment. Biosyst. Eng. 2024, 238, 78–88. [Google Scholar] [CrossRef]
  178. Okinda, C.; Lu, M.; Liu, L.; Nyalala, I.; Muneri, C.; Wang, J.; Zhang, H.; Shen, M. A Machine Vision System for Early Detection and Prediction of Sick Birds: A Broiler Chicken Model. Biosyst. Eng. 2019, 188, 229–242. [Google Scholar] [CrossRef]
  179. Hossain, M.S.; Pramanik, S.K.; Rahman, A.; Ali, S.; Islam, S.M.M. Non-Contact Vital Signs Monitoring in Broiler Chickens. J. Eng. 2023, 2023, e12320. [Google Scholar] [CrossRef]
  180. Lashari, M.H.; Karim, S.; Alhussein, M.; Hoshu, A.A.; Aurangzeb, K.; Anwar, M.S. Internet of Things-Based Sustainable Environment Management for Large Indoor Facilities. PeerJ Comput. Sci. 2023, 9, e1623. [Google Scholar] [CrossRef]
  181. Pramono, R.X.A.; Bowyer, S.; Rodriguez-Villegas, E. Automatic Adventitious Respiratory Sound Analysis: A Systematic Review. PLoS ONE 2017, 12, e0177926. [Google Scholar] [CrossRef]
  182. Zhuang, X.; Bi, M.; Guo, J.; Wu, S.; Zhang, T. Development of an Early Warning Algorithm to Detect Sick Broilers. Comput. Electron. Agric. 2018, 144, 102–113. [Google Scholar] [CrossRef]
  183. Respiratory Disease Identification of Chicken Voiceprint Based on Audio Processing and Machine Learning Technology. Available online: https://pub.confit.atlas.jp/en/event/idw2023/presentation/INPp1-01 (accessed on 29 December 2025).
  184. Jakovljević, N.; Maljković, N.; Mišković, D.; Knežević, P.; Delić, V. A Broiler Stress Detection System Based on Audio Signal Processing. In Proceedings of the 2019 27th Telecommunications Forum (TELFOR), Belgrade, Serbia, 26–27 November 2019; pp. 1–4. [Google Scholar]
  185. Neethirajan, S. Adapting a Large-Scale Transformer Model to Decode Chicken Vocalizations: A Non-Invasive AI Approach to Poultry Welfare. AI 2025, 6, 65. [Google Scholar] [CrossRef]
  186. Anderson, M.G.; Campbell, A.M.; Crump, A.; Arnott, G.; Newberry, R.C.; Jacobs, L. Effect of Environmental Complexity and Stocking Density on Fear and Anxiety in Broiler Chickens. Animals 2021, 11, 2383. [Google Scholar] [CrossRef] [PubMed]
  187. Jacobs, L.; Blatchford, R.A.; de Jong, I.C.; Erasmus, M.A.; Levengood, M.; Newberry, R.C.; Regmi, P.; Riber, A.B.; Weimer, S.L. Enhancing Their Quality of Life: Environmental Enrichment for Poultry. Poult. Sci. 2023, 102, 102233. [Google Scholar] [CrossRef]
  188. Ghani, A.; Mehmood, S.; Hussnain, F.; Saima. Effects of Different Environmental Enrichment Tools to Improve Behavior, Welfare, and Growth Performance of Broiler Chickens. Trop. Anim. Health Prod. 2025, 57, 33. [Google Scholar] [CrossRef]
  189. Riber, A.B.; van de Weerd, H.A.; de Jong, I.C.; Steenfeldt, S. Review of Environmental Enrichment for Broiler Chickens. Poult. Sci. 2018, 97, 378–396. [Google Scholar] [CrossRef] [PubMed]
  190. Wilhelmsson, S.; Yngvesson, J.; Jönsson, L.; Gunnarsson, S.; Wallenbeck, A. Welfare Quality® Assessment of a Fast-Growing and a Slower-Growing Broiler Hybrid, Reared until 10 Weeks and Fed a Low-Protein, High-Protein or Mussel-Meal Diet. Livest. Sci. 2019, 219, 71–79. [Google Scholar] [CrossRef]
  191. Dixon, L.M. Slow and Steady Wins the Race: The Behaviour and Welfare of Commercial Faster Growing Broiler Breeds Compared to a Commercial Slower Growing Breed. PLoS ONE 2020, 15, e0231006. [Google Scholar] [CrossRef]
  192. Thomas, P.; Grzywalski, T.; Hou, Y.; Soster De Carvalho, P.; De Gussem, M.; Antonissen, G.; Tuyttens, F.; De Poorter, E.; Devos, P.; Botteldooren, D. Using a Neural Network Based Vocalization Detector for Broiler Welfare Monitoring. In Proceedings of the 10th Convention of the European Acoustics Association Forum Acusticum 2023; European Acoustics Association: Turin, Italy, 2024; pp. 6269–6276. [Google Scholar]
  193. Selle, M.; Spieß, F.; Visscher, C.; Rautenschlein, S.; Jung, A.; Auerbach, M.; Hartung, J.; Sürie, C.; Distl, O. Real-Time Monitoring of Animals and Environment in Broiler Precision Farming—How Robust Is the Data Quality? Sustainability 2023, 15, 15527. [Google Scholar] [CrossRef]
  194. Brassó, L.D.; Komlósi, I.; Várszegi, Z. Modern Technologies for Improving Broiler Production and Welfare: A Review. Animals 2025, 15, 493. [Google Scholar] [CrossRef]
  195. Cruz, E.; Hidalgo-Rodriguez, M.; Acosta-Reyes, A.M.; Rangel, J.C.; Boniche, K.; Gonzalez-Olivardia, F. ACMSPT: Automated Counting and Monitoring System for Poultry Tracking. AgriEngineering 2025, 7, 86. [Google Scholar] [CrossRef]
  196. Khan, I.; Peralta, D.; Fontaine, J.; Soster de Carvalho, P.; Martos Martinez-Caja, A.; Antonissen, G.; Tuyttens, F.; De Poorter, E. Monitoring Welfare of Individual Broiler Chickens Using Ultra-Wideband and Inertial Measurement Unit Wearables. Sensors 2025, 25, 811. [Google Scholar] [CrossRef]
  197. Mesaros, A.; Diment, A.; Elizalde, B.; Heittola, T.; Vincent, E.; Raj, B.; Virtanen, T. Sound Event Detection in the DCASE 2017 Challenge. IEEE/ACM Trans. Audio Speech Lang. Process. 2019, 27, 992–1006. [Google Scholar] [CrossRef]
  198. Gao, B.; Zhang, W.; Hao, D.; Yang, K.; Chen, C. CF-DETR: A Lightweight Real-Time Model for Chicken Face Detection in High-Density Poultry Farming. Animals 2025, 15, 2919. [Google Scholar] [CrossRef] [PubMed]
  199. Gao, B.; Guo, Y.; Zheng, P.; Yang, K.; Chen, C. A Novel Lightweight Framework for Non-Contact Broiler Face Identification in Intensive Farming. Sensors 2025, 25, 4051. [Google Scholar] [CrossRef]
  200. Doko, L.A.; Jiya, E.A.; Olanrewaju, O.M. Real-Time Infection Detection System in Broiler Farm Using MobileNetSSD Model. J. Sci. Res. Rev. 2024, 1, 78–86. [Google Scholar] [CrossRef]
  201. Kashiha, M.; Pluk, A.; Bahr, C.; Vranken, E.; Berckmans, D. Development of an Early Warning System for a Broiler House Using Computer Vision. Biosyst. Eng. 2013, 116, 36–45. [Google Scholar] [CrossRef]
  202. Srinivasagan, R.; Sayed, M.S.E.; Al-Rasheed, M.I.; Alzahrani, A.S. Edge Intelligence for Poultry Welfare: Utilizing Tiny Machine Learning Neural Network Processors for Vocalization Analysis. PLoS ONE 2025, 20, e0316920. [Google Scholar] [CrossRef]
  203. Klontzas, M.E.; Groot Lipman, K.B.W.; Akinci D'Antonoli, T.; Andreychenko, A.; Cuocolo, R.; Dietzel, M.; Gitto, S.; Huisman, H.; Santinha, J.; Vernuccio, F.; et al. ESR Essentials: Common Performance Metrics in AI—Practice Recommendations by the European Society of Medical Imaging Informatics. Eur. Radiol. 2025, 1–13. [Google Scholar] [CrossRef]
  204. Alzuhair, A.; Alghaihab, A. The Design and Optimization of an Acoustic and Ambient Sensing AIoT Platform for Agricultural Applications. Sensors 2023, 23, 6262. [Google Scholar] [CrossRef]
  205. Panagi, P.; Karatsiolis, S.; Mosphilis, K.; Hadjisavvas, N.; Kamilaris, A.; Nicolaou, N.; Stavrakis, E.; Vassiliades, V. Poultry Farm Intelligence: An Integrated Multi-Sensor AI Platform for Enhanced Welfare and Productivity. arXiv 2025, arXiv:2510.15757. [Google Scholar] [CrossRef]
  206. Kate, M.; Neethirajan, S. Giving Cows a Digital Voice—AI-Enabled Bioacoustics and Smart Sensing in Precision Livestock Management. Ann. Anim. Sci. 2025. [Google Scholar] [CrossRef]
  207. Cartus, A.R.; Samuels, E.A.; Cerdá, M.; Marshall, B.D.L. Outcome Class Imbalance and Rare Events: An Underappreciated Complication for Overdose Risk Prediction Modeling. Addiction 2023, 118, 1167–1176. [Google Scholar] [CrossRef]
  208. Garrido, L.F.C.; Rodrigues, G.S.T.; Costa, L.B.; Kurtz, D.J.; Daros, R.R. Validation of a Swine Cough Monitoring System Under Field Conditions. AgriEngineering 2025, 7, 140. [Google Scholar] [CrossRef]
  209. Pann, V.; Kwon, K.; Kim, B.; Jang, D.-H.; Kim, J.-B. DCNN for Pig Vocalization and Non-Vocalization Classification: Evaluate Model Robustness with New Data. Animals 2024, 14, 2029. [Google Scholar] [CrossRef]
  210. Pathonsuwan, W.; Phapatanaburi, K.; Orkweha, K.; Jumphoo, T.; Anchuen, P.; Kokkhunthod, K.; Rattanasak, A.; Uthansakul, M.; Uthansakul, P. Weighted Feature Fusion of CNNs Utilizing Magnitude and Phase Information for Chicken Health Detection. IEEE Trans. Autom. Sci. Eng. 2025, 22, 19279–19293. [Google Scholar] [CrossRef]
  211. Li, G. A Survey of Open-Access Datasets for Computer Vision in Precision Poultry Farming. Poult. Sci. 2025, 104, 104784. [Google Scholar] [CrossRef] [PubMed]
  212. Wu, Z.; Willems, S.; Liu, D.; Norton, T. How AI Improves Sustainable Chicken Farming: A Literature Review of Welfare, Economic, and Environmental Dimensions. Agriculture 2025, 15, 2028. [Google Scholar] [CrossRef]
  213. Bao, J.; Xie, Q. Artificial Intelligence in Animal Farming: A Systematic Literature Review. J. Clean. Prod. 2022, 331, 129956. [Google Scholar] [CrossRef]
  214. Jana, R.; Dixit, S.; Sharma, M.; Kumar, R. An Explainable AI Based Approach for Monitoring Animal Health. arXiv 2025, arXiv:2508.10210. [Google Scholar] [CrossRef]
  215. Rudin, C. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nat. Mach. Intell. 2019, 1, 206–215. [Google Scholar] [CrossRef]
  216. Clarke, J. 67 Leveraging AI/ML to Address Critical Challenges in Livestock Research. J. Anim. Sci. 2023, 101, 142. [Google Scholar] [CrossRef]
  217. Santana, T.C.; Guiselini, C.; Pandorfi, H.; Vigoderis, R.B.; Barbosa Filho, J.A.D.; Soares, R.G.F.; Araújo, M.d.F.; Gomes, N.F.; de Lima, L.D.; Santos, P.C.d.S. Ethics, Animal Welfare, and Artificial Intelligence in Livestock: A Bibliometric Review. AgriEngineering 2025, 7, 202. [Google Scholar] [CrossRef]
  218. Hampel, G.; Fabulya, Z. The Risks of AI in Agriculture. Analecta Tech. Szeged. 2024, 18, 32–44. [Google Scholar] [CrossRef]
  219. Mou, A.; Milanova, M. Performance Analysis of Deep Learning Model-Compression Techniques for Audio Classification on Edge Devices. Sci 2024, 6, 21. [Google Scholar] [CrossRef]
  220. Shah, H. Low Power Edge AI for Real Time Sound Classification. In Proceedings of the 2025 12th International Conference on Emerging Trends in Engineering & Technology—Signal and Information Processing (ICETET—SIP), Nagpur, India, 1–2 August 2025; pp. 1–6. [Google Scholar]
  221. Mohaimenuzzaman, M.; Bergmeir, C.; Meyer, B. Pruning vs XNOR-Net: A Comprehensive Study of Deep Learning for Audio Classification on Edge-Devices. IEEE Access 2022, 10, 6696–6707. [Google Scholar] [CrossRef]
  222. Muhamad, W.L.; Dzulhijjah, D.A.; Robbani, F.D.; Tyas, D.A. On-Edge Device Optimization Using Multiple Classification Method for a Cat and Dog Audio Classifier. In Proceedings of the 2025 International Conference on Advancement in Data Science, E-learning and Information System (ICADEIS), Bandung, Indonesia, 3–4 February 2025; pp. 1–6. [Google Scholar]
  223. Christoph, G.; Adrian, F.; Tobias, H.; Bernardo, P.P.; Lübeck, K.; Oliver, B. Hardware Accelerator and Neural Network Co-Optimization for Ultra-Low-Power Audio Processing Devices. In Proceedings of the 2022 25th Euromicro Conference on Digital System Design (DSD), Maspalomas, Spain, 31 August–2 September 2022; pp. 365–369. [Google Scholar]
  224. Faizan, M.; Intzes, I.; Cretu, I.; Meng, H. Implementation of Deep Learning Models on an SoC-FPGA Device for Real-Time Music Genre Classification. Technologies 2023, 11, 91. [Google Scholar] [CrossRef]
  225. Mohaimenuzzaman, M.; Bergmeir, C.; West, I.; Meyer, B. Environmental Sound Classification on the Edge: A Pipeline for Deep Acoustic Networks on Extremely Resource-Constrained Devices. Pattern Recognit. 2023, 133, 109025. [Google Scholar] [CrossRef]
  226. Gairí, P.; Pallejà, T.; Tresanchez, M. Environmental Sound Recognition on Embedded Devices Using Deep Learning: A Review. Artif. Intell. Rev. 2025, 58, 163. [Google Scholar] [CrossRef]
  227. Tripathi, A.M.; Mishra, A. Self-Supervised Learning for Environmental Sound Classification. Appl. Acoust. 2021, 182, 108183. [Google Scholar] [CrossRef]
  228. Zhang, C.; Li, Q.; Zhan, H.; Li, Y.; Gao, X. One-Step Progressive Representation Transfer Learning for Bird Sound Classification. Appl. Acoust. 2023, 212, 109614. [Google Scholar] [CrossRef]
  229. Zong, Y.; Aodha, O.M.; Hospedales, T.M. Self-Supervised Multimodal Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 5299–5318. [Google Scholar] [CrossRef]
  230. Essien, D.; Neethirajan, S. Multimodal AI Systems for Enhanced Laying Hen Welfare Assessment and Productivity Optimization. Smart Agric. Technol. 2025, 12, 101564. [Google Scholar] [CrossRef]
  231. Tong, L.; Fang, J.; Wang, X.; Zhao, Y. Cattle Welfare Assessment Based on Adaptive Fuzzy Logic and Multimodal Data Fusion. Front. Vet. Sci. 2025, 12, 1568715. [Google Scholar] [CrossRef]
  232. Janga, K.R.; Ramesh, R.; K, K.V. IoT-Based Multi-Sensor Fusion Framework for Livestock Health Monitoring, Prediction, and Decision-Making Operations. Int. J. Environ. Sci. 2025, 11, 1487–1495. [Google Scholar] [CrossRef]
  233. Jebari, H.; Mechkouri, M.H.; Rekiek, S.; Reklaoui, K. Poultry-Edge-AI-IoT System for Real-Time Monitoring and Predicting by Using Artificial Intelligence. Int. J. Interact. Mob. Technol. IJIM 2023, 17, 149–170. [Google Scholar] [CrossRef]
  234. Cakic, S.; Popovic, T.; Krco, S.; Nedic, D.; Babic, D.; Jovovic, I. Developing Edge AI Computer Vision for Smart Poultry Farms Using Deep Learning and HPC. Sensors 2023, 23, 3002. [Google Scholar] [CrossRef] [PubMed]
  235. Tong, Q.; Wang, J.; Yang, W.; Wu, S.; Zhang, W.; Sun, C.; Xu, K. Edge AI-Enabled Chicken Health Detection Based on Enhanced FCOS-Lite and Knowledge Distillation. Comput. Electron. Agric. 2024, 226, 109432. [Google Scholar] [CrossRef]
Figure 1. Age-dependent respiratory vulnerability in broiler chickens. Sources: day 0–3: [59,60], 4–7: [59], 8–14: [59,60], 15–20: [59,61], 21–28: [62,63], 29–35: [62,63,64], 36–42: [61,63,64].
Figure 2. Mapping of poultry sound types to health and welfare applications. The figure summarizes evidence linking specific broiler sound categories (x-axis) to major health, environmental, growth, and welfare applications (y-axis). Hatch density indicates the relative strength of evidence reported in the literature (vertical line = weak, horizontal line = moderate, dots = strong). Evidence for each mapping is supported by studies reported in references: [24,25,31,36,48,49,52,53,56,58,65,66,67,95,96,97,98,99,100].
Figure 3. Time–frequency representations of broiler sounds. STFT spectrograms (left) and Mel-spectrograms (right) for healthy, noise, and unhealthy broiler audio samples (3 s duration each) from the poultry vocalization signal dataset [43].
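The STFT and Mel-spectrogram representations shown in Figure 3 can be sketched in code. The following pure-Python example is illustrative only (production pipelines use FFT-based libraries such as numpy or librosa): it computes a Hann-windowed magnitude spectrogram via a naive DFT and the standard HTK Hz-to-Mel conversion. The 16 kHz sample rate and the 3 kHz test tone are assumptions for the demo, not parameters of the reviewed dataset [43].

```python
import cmath
import math

def hann(n):
    """Hann window of length n."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: one list of |DFT| bins (k = 0..frame_len//2) per frame.
    Naive O(N^2) DFT for clarity; real pipelines use an FFT."""
    win = hann(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + i] * win[i] for i in range(frame_len)]
        bins = [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                        for n in range(frame_len)))
                for k in range(frame_len // 2 + 1)]
        frames.append(bins)
    return frames

def hz_to_mel(f_hz):
    """HTK Mel scale used when warping linear-frequency bins onto Mel bands."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

# Toy signal: a 3 kHz tone at a 16 kHz sample rate, mimicking a chick-call fundamental.
SR, F0, FRAME = 16000, 3000, 256
tone = [math.sin(2 * math.pi * F0 * n / SR) for n in range(1024)]
spec = stft_magnitude(tone, frame_len=FRAME)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
peak_hz = peak_bin * SR / FRAME  # recovers ~3000 Hz
```

The Mel warping compresses the axis above ~1 kHz, which is one reason Mel-spectrograms in Figure 3 emphasize the 1–4 kHz band where most broiler vocal energy lies.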
Figure 4. Combination of acoustic features and AI models for poultry health, welfare, and production monitoring. The figure illustrates reported feature–model–task pipelines linking acoustic representations (e.g., MFCCs, spectral and temporal descriptors, raw audio) with machine learning and deep learning models (e.g., Random Forest, kNN, CNN, Transformer, self-supervised models) for applications including respiratory disease detection, stress and welfare assessment, and growth monitoring. Evidence for each linkage is synthesized from the reviewed literature [26,31,36,48,49,52,53,56,58,67,70,91,95,96,97,98,101,108,139,146,158] and detailed in Table 5, Table 6, Table 7 and Table 8.
Figure 5. Current limitations of audio-based AI systems for respiratory health and welfare monitoring in broiler chickens.
Figure 6. Future research directions and technological improvements for audio-based AI monitoring in broiler production.
Table 1. Major respiratory and welfare challenges in broiler chickens and their acoustic relevance.
Challenge | Stage | Key Sounds | Production Impact | Acoustic Detection Potential
Viral respiratory diseases (IBV, NDV, AI) | 2–4 weeks onward | Cough, rales/snore, abnormal breathing | Reduced growth, poor FCR, mortality | High—cough/rales classifiers (MFCC, entropy) reach 80–95% accuracy [21,25,48,52]
Mixed respiratory syndromes & poor air quality | Mid–late grow-out | Cough, snore, occasional sneeze; spectral shifts | Chronic disease, lower weight, carcass downgrading [3,65,66] | High—MFCC and spectral trends track gas levels and respiratory stress [48,65,66]
Early-life environmental stress | Days 1–4 | High-rate, high-energy distress calls | Predicts reduced weight gain and higher mortality [67] | High—spectral entropy quantifies distress reliably [67]
Prolonged fasting/delayed access to feed and water | First 24–48 h | Increased distress vocalizations | Impaired early growth and immunity [68] | High—vocal monitoring discriminates stressed chicks [68]
Thermal stress | Whole cycle | Stress/distress calls, activity change | Reduced performance, higher mortality [53,66,69] | High—ANN models classify heat/cold stress calls [69,70]
High stocking density/social stress | Mid–late grow-out | Alarm/distress calls | Growth reduction, leg problems, stress [53,71,72] | Moderate–high—energy and call patterns reflect social stress [53,69]
General welfare problems (pain, disease, thermal or environmental discomfort) | Whole cycle | Distress calls or silence | Reduced growth, welfare deterioration, higher mortality and culls [72,73,74] | High—ML frameworks classify health and stress sounds [25,36,75]
IBV, Infectious Bronchitis Virus; NDV, Newcastle Disease Virus; AI, Avian Influenza; MFCC, Mel-Frequency Cepstral Coefficients; ANN, Artificial Neural Network; ML, Machine Learning.
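Several of the "high detection potential" entries above rest on simple frame-level descriptors. For example, the spectral entropy used to quantify chick distress [67] can be stated in a few lines. The sketch below is a minimal, illustrative pure-Python implementation; the function name, bin count, and test inputs are hypothetical and not taken from the cited studies.

```python
import math

def spectral_entropy(power_bins):
    """Normalized spectral entropy of one analysis frame.

    power_bins: per-frequency-bin power values (e.g. |DFT|^2).
    Returns a value in [0, 1]: ~0 for a pure tone (energy in one bin),
    ~1 for a flat, noise-like spectrum.
    """
    total = sum(power_bins)
    if total <= 0:
        return 0.0
    probs = [p / total for p in power_bins if p > 0]
    h = -sum(p * math.log(p) for p in probs)   # Shannon entropy of the spectrum
    return h / math.log(len(power_bins))       # normalize by max possible entropy

# A tonal frame (all energy in one bin) vs. a flat, noise-like frame:
tonal = [0.0] * 63 + [1.0]   # spectral_entropy(tonal) -> 0.0
flat = [1.0] * 64            # spectral_entropy(flat) -> ~1.0
```

High-rate distress calling tends to concentrate energy near the call fundamental, so tracking this statistic over successive frames is one plausible way such descriptors separate distress from broadband background noise.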
Table 2. Principal chicken vocal structures and their roles.
Structure | Role in Sound Generation | References
Lateral & medial tympaniform membranes/labia | Vibrate to create the primary sound source during expiration (and sometimes inspiration) | [78,79,81]
Tracheal rings & tympanum | Fused distal tracheal rings form a rigid "drum" that shapes and supports the syrinx lumen | [77,78,82]
Pessulus (midline bar) | Splits airflow between bronchi, anchors vibrating tissues, shapes glottic opening | [77,78,82]
Tracheobronchial & sternotracheal muscles | Adjust membrane/labial tension and syringeal shape, controlling pitch and amplitude | [80,81,82,83]
Larynx, trachea, mouth, pharynx, beak | Act as the vocal tract filter, modifying resonance and spectral qualities of the sound | [76,80,84,85]
Table 3. Acoustic characteristics of broiler chicken sound categories.
Sound Category | Typical Duration | Dominant Frequency Range | Temporal Patterns | Major Confounding Noises
Cough | ~0.2–0.4 s [91] | Relatively low-pitched compared with normal cheeps; within overall broiler vocal band that decreases from ~2.5–4.4 kHz in week 1 to ~1.1–2.5 kHz in week 8 [91,95] | Isolated or in short bouts; abnormal, rough, low croak, not strongly repetitive like distress calls [91] | Fan noise (0.1–1.4 kHz), feeder/heater broadband noise up to >18 kHz, overlapping normal vocalizations [52,95,101]
Sneeze | <0.2 s, impulsive [49] | Broad-band transient with energy overlapping bird vocal band (~1–4 kHz) plus higher harmonics [49,95] | Discrete, impulsive clicks or short bursts; irregular in time, not organized into long series [49] | Continuous fans, feeders, heaters, wing flapping and dustbathing (0–18 kHz) can mask or be mis-detected as sneezes [49,95]
Distress call | ~0.18–0.22 s per call [96]; highly repeated | Fundamental typically ~2.8–3.0 kHz in weeks 1–3, decreasing to ~2.2 kHz by week 4 [96]; early-life distress is loud (up to ~92 dB) [67] | Repetitive, high-intensity series; descending frequency contour (negative frequency trend ≈ −20 to −65 Hz/10 ms) [56,67,96] | Overlapping contact/pleasure calls and general bird vocalization band (~1–4 kHz); low-frequency fan noise removed by filtering in practice [56,67,95]
Normal vocalizations (contact, pleasure, warbles) | PN: ~0.15–0.25 s; warbles slightly longer [56,58,96] | Peak frequencies mostly ~2.7–4.3 kHz in chicks, depending on call type [58]; PN often softer, similar band [56,58,96] | Short peeps: descending frequency, lower energy; PN: short, ascending frequency, low energy; warbles: repeated low, bow-shaped contours; generally non-repetitive at high rate like distress [53,56,58,96] | Equipment sounds (fan, feeder, heater) and behavioral sounds (wing flapping, dustbathing) share overlapping spectra up to >18 kHz, complicating isolation of specific normal call types [53,95]
Silence | – | Background dominated by equipment bands: fan (~0.1–1.4 kHz), feed system and heater broadband (0–18 kHz) [95] | Continuous or quasi-continuous noise floor with no discrete call envelopes; used as negative class in classifiers [31,49,95,101] | All machinery (fans, feeders, heaters), human activity, and other birds' behavioral sounds (wing flapping, dustbathing) [31,49,95,101]
s, second; ms, millisecond; Hz, hertz; kHz, kilohertz; dB, decibel; PN, pleasure notes.
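The duration and frequency ranges in Table 3 suggest simple screening rules that can run ahead of any ML classifier. The NumPy sketch below estimates an event's dominant frequency and applies illustrative thresholds loosely derived from the table; the cutoffs in `coarse_label` are assumptions for demonstration only, not a validated detector.

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Frequency (Hz) of the largest magnitude peak in the FFT (DC excluded)."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0  # ignore the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[int(np.argmax(spectrum))])

def coarse_label(duration_s, dom_freq_hz):
    """Illustrative duration/band gate loosely based on Table 3; thresholds are examples."""
    if duration_s < 0.2 and dom_freq_hz > 1000.0:
        return "sneeze-like transient"
    if 0.2 <= duration_s <= 0.4 and dom_freq_hz < 2500.0:
        return "cough-like"
    if dom_freq_hz >= 2200.0:
        return "call-like (contact/distress band)"
    return "unclassified"

# Synthetic 3 kHz, 0.2 s tone standing in for a chick call
sr = 16000
t = np.arange(int(0.2 * sr)) / sr
tone = np.sin(2.0 * np.pi * 3000.0 * t)
label = coarse_label(0.2, dominant_frequency(tone, sr))
```

In practice such a gate would only pre-filter candidate events, since Table 3 shows broad spectral overlap between call types and between calls and equipment noise.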
Table 4. Audio data acquisition strategies in broiler chicken studies.

| Study Aim | Environment & Scale | Microphone Setup | SR (kHz) | Recording Mode | Annotation Type | Main Limitations | Ref. |
|---|---|---|---|---|---|---|---|
| Vocalization patterns across age | Single farm, 27,940 Cobb 500 | Two directional mics (Sennheiser ME66, ME64) on tripods, 0.4–0.8 m above floor, aimed at birds | – | Intermittent (1 h every 2 days) | Clip-based manual labelling in audio software | Sparse temporal coverage; mixed sound sources | [108] |
| Growth prediction from vocal peaks | Multi-farm, 27,500 broilers | PLF sound monitoring system (fixed nodes; details limited) in broiler houses | 22.05 | Continuous across cycles | Effectively clip/aggregate-based (peak frequency statistics), automated acquisition | Proprietary system; no mic detail; house-level only | [114] |
| CO2 vs. vocal features | Multi-farm, 166,000 broilers | Superlux ECM999 measurement mic, central, 0.7 m height | 22.05 | Continuous (40 days) | Clip-based (feature extraction from 10 min segments, no event labels) | No event labels; strong background noise | [65] |
| Management noise profiling | Single farm, 25,000 broilers | Low-cost mic on Raspberry Pi, ~1 m from birds | 44.1 | Continuous (first 9 days) | Clip-based manual labelling of noise events (fan, feed, water) | Short study window; machinery-focused | [115] |
| Sound source characterization | Single farm, 13,700 Ross 708 | Zoom H2n recorder mid-house, 40 cm above birds’ backs; ceiling camera | 44.1 | Continuous (subset labeled) | Event-based manual labelling of six sound types | Small labeled subset; overlapping sounds | [95] |
| Feed pecking detection | Experimental pen, 10 Ross 308 | Mic attached to feeder; top-view camera | 44.1 | Continuous | Event-based (peck vs. non-peck) using sound + video | Pen-scale only; feeder-local | [116] |
| Feed intake estimation | Experimental pen, Ross 708 broilers | Piezoelectric sensors on feeders (contact “microphones”) | 48 | Continuous (19 days) | Event-based (feed-peck vs. non-peck) | Only feeder events captured | [117] |
| Multi-call recognizer | Experimental pen, 10 Ross 308 | Low-cost node with Knowles FG-23329 mic, 90 cm above floor, central | 48 | Continuous | Event-based manual labelling of individual calls within clips | Preselected clips; not commercial | [96] |
| Daily vocal rhythm study | Experimental pen, 1680 Ross 308 | Knowles FG-23329 mics, 1.5 m height, center of each pen | 48 | Continuous 24/7 | Event-based automatic recognizer (five vocal classes) | Dependent on recognizer accuracy | [56] |
| Health status dataset | Experimental pen, 100 broilers | Mics a “reasonable distance away” from birds | 96 | Repeated daily sessions | Clip-based (healthy vs. unhealthy) | Vague placement; no event labels | [25] |
| Hatchery stress/fasting | Single hatchery, 40,000 chicks | Continuous recording system (details not fully specified) | 48 | Continuous | Clip-based (acoustic features over time windows) | No event labels; machine noise | [68] |
| Social isolation study | Experimental pen, 30 Cobb broilers | Unidirectional mic + digital recorder | – | Short sessions | Event/clip-based (isolation vs. group calls) | Highly controlled; not farm-like | [53] |
| Hatchery noise mapping | Single hatchery, 28,000 eggs | Not fully specified; industrial machine context | 48 | Continuous | Clip-based (high vs. low vocal activity zones) | Sparse hardware detail | [118] |
SR, Sampling Rate; kHz, kilo Hertz; PLF, Precision Livestock Farming; CO2, carbon dioxide; mic, microphone; m, meter; h, hour; min, minute; s, second; wav, waveform audio file format; flac, Free Lossless Audio Codec; Pi, Raspberry Pi single-board computer; d, day.
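Continuous recording at the sampling rates in Table 4 accumulates data quickly, which is one reason several studies fall back on clip- or feature-level acquisition. A back-of-envelope sketch, assuming uncompressed 16-bit mono PCM:

```python
def raw_audio_bytes(sample_rate_hz, days, bit_depth=16, channels=1):
    """Uncompressed PCM size in bytes for continuous recording."""
    return sample_rate_hz * (bit_depth // 8) * channels * days * 24 * 3600

# One 48 kHz, 16-bit mono microphone over a 40-day grow-out, in gigabytes
gb = raw_audio_bytes(48000, 40) / 1e9  # ~332 GB
```

Roughly 332 GB per microphone per cycle before compression helps explain why on-device feature extraction and lossless codecs (e.g., FLAC) feature in the deployment discussions later in this review.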
Table 5. Audio features used for respiratory and welfare sound classification in broilers.

| Feature Type | Input Representation | Target Sounds | Advantages | Limitations | References |
|---|---|---|---|---|---|
| MFCCs | Frame-level cepstral vectors (with Δ, ΔΔ) | Cough, snore, respiratory and stress sounds | Widely validated; compact and discriminative; effective for health vs. disease classification | Capture mainly the spectral envelope; limited temporal detail; noise-sensitive without preprocessing | [26,31,36,48,91,139,146] |
| Wavelet-based MFCC variants | Wavelet transform + MFCCs | Abnormal respiratory sounds | Better time–frequency resolution; higher accuracy than MFCCs for cough detection | Higher dimensionality; more complex design; less standardized | [48] |
| Multi-domain handcrafted features | Combined time, frequency, MFCC, and sparse features | Cough, crow, wing flap; health status | Rich signal description; supports high multi-class accuracy | Requires careful feature selection; risk of overfitting | [31,91,92] |
| Mel-spectral features/spectrograms | Mel spectra or Mel spectrograms | Rales and disease-related sounds | Perceptually meaningful; well suited to CNN/SVM models | Sensitive to barn noise and overlapping sounds | [24,33,139] |
| General spectral descriptors | Spectral centroid, entropy, bandwidth, etc. | Stress, welfare, vaccine response | Intuitive physical meaning; spectral entropy tracks distress well | Limited specificity; confounded by machinery noise | [26,36,67,146] |
| Temporal features | Energy, duration, zero-crossing rate | Welfare and stress calls | Simple and computationally light | Weak alone; noise-sensitive | [26,36,53,97] |
| Sparse representation features | Dictionary-based coefficients | Cough and abnormal sounds | Highlight salient structures; improve robustness | Higher computation; harder to interpret | [31,101] |
| Global acoustic indices | Flock-level entropy or activity | Chick distress and welfare | Very simple; real-time flock monitoring | No call-level classification | [67] |
MFCC, Mel-Frequency Cepstral Coefficients; Δ, first-order temporal derivative; ΔΔ, second-order temporal derivative; SVM, Support Vector Machine; ELM, Extreme Learning Machine; CNN, Convolutional Neural Network.
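MFCCs dominate the feature landscape in Table 5. The minimal NumPy sketch below traces the standard pipeline for a single frame (windowed power spectrum, triangular mel filterbank, log compression, DCT-II); production systems typically rely on established audio libraries and add pre-emphasis, liftering, and Δ/ΔΔ features, all omitted here for brevity.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sample_rate):
    """Triangular mel filters mapping an FFT power spectrum to mel bands."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc_frame(frame, sample_rate, n_filters=26, n_coeffs=13):
    """MFCCs of one windowed frame: power spectrum -> mel bands -> log -> DCT-II."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    log_mel = np.log(mel_filterbank(n_filters, n_fft, sample_rate) @ power + 1e-10)
    j = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * j + 1) / (2 * n_filters))
    return dct @ log_mel

# 13 coefficients from one 32 ms noise frame at 16 kHz (512 samples)
coeffs = mfcc_frame(np.random.default_rng(0).standard_normal(512), 16000)
```

Because the log-mel step summarizes the spectral envelope, this also makes concrete the limitation noted in the table: fine temporal detail within a frame is discarded.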
Table 6. AI models applied to broiler respiratory and welfare sound classification.

| Model Family | Input Features | Target Task | Reported Performance | Main Limitations | Ref. |
|---|---|---|---|---|---|
| Random Forest (RF) | Multi-domain handcrafted (time–frequency, MFCC, sparse features) | Health monitoring (cough rate, vocal types) | 91–99% accuracy | Handcrafted features; farm-specific; class imbalance | [31] |
| kNN | Selected multi-domain features | Sound recognition (crow, cough, purr, wing) | Up to 94% frame-level accuracy | Limited classes; requires labeled frames | [91] |
| RF + transfer learning | Same multi-domain features across age groups | Cross-age health estimation | 83–91% accuracy; identification 100% | Performance drops under domain shift | [26] |
| RF + sparse filtering | Multi-domain features with sparse preprocessing | Farm-level sound classification | ~94% prediction accuracy | Sensitive to filtering design | [101] |
| CNN + Burn Layer | Raw/processed audio | Respiratory health vs. noise | 98.6% accuracy | Small dataset; limited farm validation | [33] |
| CNN | Spectrograms | Welfare vocal types | 91% balanced accuracy | Small sample; limited breeds | [96] |
| Transformer ANN | Raw waveform ± age | Environmental stress | Up to 0.97 mAP | Generalization unclear | [70] |
| Mixed ML/DL | MFCC + spectral + temporal | Welfare and behavior | High across tasks | Limited dataset; noise sensitivity | [36] |
| Review (multiple models) | – | Health, welfare, behavior | >95% accuracy in controlled settings | Poor benchmark standardization | [139] |
| RF (overlapping sounds) | Multi-domain features + confidence statistics | Overlapping sound recognition | ~100% accuracy (controlled) | Single-farm only | [93] |
| WMFCC + HMM | Wavelet MFCCs | Abnormal respiratory sounds | ~94% accuracy | No cross-farm validation | [48] |
RF, Random Forest; kNN, k-Nearest Neighbour; CNN, Convolutional Neural Network; ANN, Artificial Neural Network; MFCC, Mel-Frequency Cepstral Coefficients; WMFCC, Wavelet-based MFCC; MCC, Matthews Correlation Coefficient; HMM, Hidden Markov Model; mAP, Mean Average Precision.
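Several entries in Table 6 are distance- or tree-based classifiers over handcrafted feature vectors. As an illustration of the simplest of these, a frame-level kNN vote fits in a few lines of NumPy; the well-separated 5-dimensional clusters below are synthetic stand-ins for real multi-domain feature vectors, and the class names are placeholders, not labels from any cited dataset.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Majority vote over the k nearest training frames (Euclidean distance)."""
    dists = np.linalg.norm(np.asarray(train_X) - query, axis=1)
    nearest = np.asarray(train_y)[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return str(labels[int(np.argmax(counts))])

# Two synthetic clusters standing in for "cough" vs "call" frame features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 5)), rng.normal(3.0, 0.3, (20, 5))])
y = ["cough"] * 20 + ["call"] * 20
pred = knn_predict(X, y, np.full(5, 3.0))
```

Real farm data are far less separable than this toy example, which is why the table's limitations columns repeatedly flag class imbalance and domain shift rather than raw model capacity.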
Table 7. Audio-based respiratory disease detection studies in broiler chickens.

| Study | Disease Focus | Sound Type | Model Approach | Detection Goal | Deployment Level | Ref. |
|---|---|---|---|---|---|---|
| Abnormal sound detection in a commercial broiler house | General respiratory diseases (ND, AI, IB) | Cough, snore | WMFCC + HMM | Early flock-level respiratory monitoring | Commercial house (experimental) | [48] |
| Sound-based poultry health monitoring—sneeze detection | Respiratory diseases with sneezing | Sneeze | Spectral subtraction + handcrafted features + binary classifier | Early warning via sneeze rate | Research setting | [49] |
| Rale detection for early disease | Vaccine-induced respiratory disease | Rale | Mel-spectral features + ELM/SVM | Early disease detection | Experimental farm | [24] |
| Acoustic features of vocalization in poultry health monitoring | IB, ND | General vocalizations | MFCC, wavelet features + classical ML | Group-level health discrimination | Controlled challenge | [52] |
| Intelligent device for diagnosing avian diseases | ND, IB, AI | Individual vocalizations | FFT/DWT + SVM + evidence fusion | Pathogen diagnosis | Laboratory prototype | [134] |
| White-feather broiler health monitoring based on sound and transfer learning | Respiratory health via cough rate | Multiple sound types | Multi-domain features + RF + TrAdaBoost | Continuous health monitoring | Farm-like environment | [26] |
| Broiler health monitoring using sound features and Random Forest | Respiratory health | Cough vs. non-cough | 60-dimensional features + RF | Flock-level early warning | Farm recordings | [31] |
| Respiratory disease identification from chicken voiceprints | General respiratory diseases | Abnormal breathing sounds | ML-based diagnostic device | Disease identification | Prototype device | [183] |
| Chicken respiratory dataset for ML | Respiratory illness | Cough, snore, rale | Dataset only | Support ML research | Research farm | [25] |
ND, Newcastle disease; IB, Infectious bronchitis; AI, Avian influenza; MFCC, Mel-Frequency Cepstral Coefficients; WMFCC, Wavelet-based MFCC; FFT, Fast Fourier Transform; DWT, Discrete Wavelet Transform; HMM, Hidden Markov Model; SVM, Support Vector Machine; ELM, Extreme Learning Machine; RF, Random Forest.
Table 8. Welfare-related sound monitoring in broiler chickens.

| Welfare Indicator | Key Sound Patterns | AI Approach | Practical Relevance | Main Research Gaps | References |
|---|---|---|---|---|---|
| General distress/stress | High-rate distress calls; low spectral entropy; shifts in spectral centroid and bandwidth | Spectral entropy thresholds; ML/DL on MFCC and spectral features | Early warning of poor welfare, low growth, high mortality | Farm-level triggers poorly understood; weak mapping to specific affective states | [36,66,67,68] |
| Thermal and air-quality stress | Vocal changes linked to CO2 and temperature; altered intensity and MFCC bands | Feature selection + supervised ML (SVM, RF, DL) | Continuous monitoring of climate and gas buildup using sound | Need multi-farm validation; confounding effects of humidity and noise | [36,65,66,67,97] |
| Respiratory health | Cough, sneeze, snore rate in flock audio | Signal processing + ML (RF, kNN, classifiers) | Non-invasive early disease detection | Lack of standardized disease-specific acoustic markers | [31,36,47] |
| Growth and body condition | Peak frequency decreases with age and body weight | Statistical regression; PLF integration | Automated growth trend tracking | Sensitive to noise and housing acoustics; limited real-time systems | [58,95,98,108] |
| Positive welfare/social comfort | Pleasure notes, warbles; reduced energy in grouped birds | CNN call classification; RF energy patterns | Detection of positive states beyond absence of distress | Few labeled positive datasets; poorly characterized in older broilers | [36,53,96,192] |
| Behavioural rhythms | Daily call-type distributions vary with age and time | CNN + LSTM temporal modelling | Baseline profiles for anomaly detection | Weak linkage to actionable management decisions | [36,56,95,96] |
| Night-time disturbance | Abnormal night vocal bursts | Microphone arrays; TDOA localization | Detection of injuries or equipment failures | Mostly studied in layers; broiler-specific thresholds lacking | [98,99,100] |
AI, Artificial Intelligence; ML, Machine Learning; DL, Deep Learning; MFCCs, Mel-Frequency Cepstral Coefficients; CNN, Convolutional Neural Network; LSTM, Long Short-Term Memory Network; RF, Random Forest; SVM, Support Vector Machine; kNN, k-Nearest Neighbour; CO2, carbon dioxide; PLF, Precision Livestock Farming; TDOA, time difference of arrival.
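The behavioural-rhythm and night-time-disturbance rows of Table 8 both reduce to flagging call-rate deviations from a learned baseline. A minimal robust-z-score sketch, assuming hourly call counts from an upstream recognizer (the counts below are fabricated for illustration):

```python
import numpy as np

def anomalous_hours(call_rates, threshold=3.0):
    """Indices whose call rate deviates from the median baseline by more than
    `threshold` robust z-scores (median absolute deviation scaling)."""
    rates = np.asarray(call_rates, dtype=float)
    median = np.median(rates)
    mad = np.median(np.abs(rates - median)) or 1.0  # guard against zero MAD
    z = 0.6745 * (rates - median) / mad
    return [i for i, zi in enumerate(z) if abs(zi) > threshold]

# 24 hourly distress-call counts (fabricated) with one night-time burst at hour 3
counts = [12, 10, 11, 95, 12, 13, 11, 12, 10, 11, 12, 13,
          12, 11, 10, 12, 13, 11, 12, 10, 11, 12, 13, 11]
flags = anomalous_hours(counts)  # flags hour 3
```

The robust (median/MAD) formulation is a deliberate choice here: the burst itself would inflate a mean-and-standard-deviation baseline and hide the anomaly it is meant to expose.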
Table 9. Evaluation metrics and deployment considerations for broiler audio-AI systems.

| Metric/Aspect | What It Measures | Suitability for Farm Use | Common Shortcomings |
|---|---|---|---|
| Accuracy/Balanced accuracy | Overall proportion of correctly classified events or calls [31,33,36,96,202] | Simple headline indicator for disease, cough, stress or feeding detection | Masks class imbalance; can remain high while rare critical events are missed [36,203] |
| Sensitivity/Recall | Fraction of true positive events detected (e.g., coughs, stress calls) [31,33,202] | Critical for early disease and welfare alarms | Often unreported per class; high recall may increase false alarms [33,203] |
| Specificity | Fraction of non-events correctly classified as normal [31,33,202] | Prevents alarm fatigue in noisy houses | High specificity may hide poor sensitivity for rare problems [33,203] |
| Precision/Positive predictive value | Proportion of alerts that are true positives [33,36,202] | Directly linked to farmer trust and workload | Drops in low-prevalence settings; farm-level rates rarely reported [36,203] |
| F1-score/Matthews correlation coefficient | Combined indices of precision–recall (F1) or full confusion balance (MCC) [33,36,202,203] | Useful for comparing models under imbalanced data | Abstract for practitioners; rarely linked to real business outcomes [36,139,203] |
| Signal-quality metrics (SNR, RMSE) | Effectiveness of denoising and filtering [47] | Guides microphone placement and preprocessing design | Optimized mainly in lab settings; weak linkage to end-to-end performance [47,139] |
| Task-specific indices (cough, feed-intake) | Derived indicators (e.g., cough rate, feed intake error) [31,117] | Highly interpretable for veterinarians and managers | Definitions and thresholds vary widely across studies [31,117,139] |
| Cross-validation/external validation | Generalization across farms, ages, environments [36,70,96] | Essential for real-world robustness | Many studies use single-site datasets only [36,96] |
| Interpretability/explainability | Ability to explain alerts [36,139] | Supports decision-making | Deep models remain largely black boxes [36] |
| Edge feasibility (model size, compute, power) | Model size, latency, power consumption [117,139,202,204] | Determines practical on-farm deployment | Many high-accuracy models exceed TinyML limits [139,202,204] |
| Network and integration constraints | Communication reliability and bandwidth [117,139,205] | Affects scalability and sensor fusion | End-to-end system behavior rarely evaluated [139,205] |
AI, Artificial Intelligence; ML, Machine Learning; CNN, Convolutional Neural Network; SNR, Signal-To-Noise Ratio; RMSE, Root Mean Square Error; F1 score, harmonic mean of precision and recall; MCC, Matthews Correlation Coefficient; TinyML, deployment of machine-learning models on ultra-low-power microcontrollers.
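The headline metrics in Table 9 all derive from the binary confusion matrix, and the low-prevalence caveats are easy to demonstrate numerically. The sketch below uses hypothetical counts (15 of 20 true coughs detected among 1000 clips, with 30 false alarms), chosen purely to illustrate the table's point.

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics from Table 9 for a binary event detector."""
    sens = tp / (tp + fn)   # sensitivity / recall
    spec = tn / (tn + fp)   # specificity
    prec = tp / (tp + fp)   # precision / positive predictive value
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        "sensitivity": sens,
        "specificity": spec,
        "precision": prec,
        "balanced_accuracy": (sens + spec) / 2.0,
        "f1": 2.0 * prec * sens / (prec + sens),
        "mcc": mcc,
    }

# Hypothetical low-prevalence run: 20 true coughs in 1000 clips
m = binary_metrics(tp=15, fp=30, tn=950, fn=5)
```

Here sensitivity is 0.75 and balanced accuracy about 0.86, yet precision is only 0.33: two of every three alerts are false, which is exactly the farmer-trust shortcoming flagged in the precision row of the table.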
Share and Cite

Sharifuzzaman, M.; Mun, H.-S.; Lagua, E.B.; Hasan, M.K.; Kang, J.-G.; Kim, Y.-H.; Mehtab, A.; Park, H.-R.; Yang, C.-J. Advances in Audio-Based Artificial Intelligence for Respiratory Health and Welfare Monitoring in Broiler Chickens. AI 2026, 7, 58. https://doi.org/10.3390/ai7020058