A Task-Adaptive Parameter Transformation Scheme for Model-Agnostic-Meta-Learning-Based Few-Shot Animal Sound Classification

Abstract: Deep learning models that require vast amounts of training data struggle to achieve good animal sound classification (ASC) performance. Among recent few-shot ASC methods that address the data shortage problem for animals that are difficult to observe, model-agnostic meta-learning (MAML) has shown new possibilities by encoding common prior knowledge derived from different tasks into the model parameter initialization of target tasks. However, when the knowledge on animal sounds is difficult to generalize due to its diversity, MAML exhibits poor ASC performance due to its static initialization setting. In this paper, we propose a novel task-adaptive parameter transformation scheme called TAPT for few-shot ASC. TAPT generates transformation variables while learning common knowledge and uses the variables to make parameters specific to the target task. Owing to this transformation, TAPT can reduce overfitting and enhance adaptability, training speed, and performance on heterogeneous tasks compared to MAML. In experiments on two public datasets with the same backbone network, we show that TAPT outperforms the existing few-shot ASC schemes in terms of classification accuracy, with, in particular, a performance improvement of 20.32% compared to the state-of-the-art scheme. In addition, we show that TAPT is robust to hyperparameters and efficient to train.


Introduction
Animal sound classification (ASC) has emerged as a crucial tool in wildlife monitoring systems, as it can identify different animal species based on their unique sounds [1]. ASC is particularly useful when visual identification is challenging, such as for small, nocturnal, and camouflaged animals [2]. Recently, deep learning-based models, such as convolutional neural networks (CNNs), have demonstrated superior performance in ASC as well as in other signal processing applications [3,4].
However, supervised deep learning ASC models (DeepASC) require huge amounts of accurately labeled data, and acquiring such data is an expensive and time-intensive process [5]. Insufficient animal sound data for DeepASC may result in poor classification performance due to overfitting or generalization failure [6][7][8]. This lack of labeled data is particularly severe in ASC due to (1) the diversity of species and sounds, (2) limited access to certain habitats, especially remote or protected areas where specific species reside, (3) the time-consuming labeling process of animal sounds, and (4) the limited availability of experts, especially for animals that are difficult to observe (e.g., rare or endangered species). In these cases, DeepASC may not work effectively.
To mitigate this problem, few-shot learning, which involves learning new tasks from only a small number of samples, has garnered considerable attention. Recently, meta-learning, or learning to learn, has been widely used as one of the most noteworthy methodologies for few-shot learning. Meta-learning allows deep learning-based classification models to learn common prior knowledge shared across different tasks. Starting from this common knowledge, the classification models can easily learn specific knowledge about unseen tasks, even with limited data samples.
Model-agnostic meta-learning (MAML), one of the most successful meta-learning methods, embeds the common knowledge derived from various tasks into the parameter initialization of the model [9]. The pretrained parameter initialization serves as a good starting point to overcome the data shortage problem and achieve good performance. Due to these properties, many recent works, such as few-shot image classification [9], anomaly detection [10], and influenza forecasting [11], have aimed to achieve better generalization from pretrained initializations with MAML.
MAML is particularly useful in ASC because it enables DeepASC to learn common knowledge across various ASC source tasks and adapt effectively to the specific knowledge of the target ASC task. Here, the target task is a classification task for rare or obscure animal species for which obtaining large amounts of labeled data is difficult. However, if the knowledge of the source tasks is too diverse, MAML often fails to generalize it into common knowledge. As a result, the prior knowledge encoded by this failed generalization may be useful in some tasks but not in others. This can be overcome by accommodating both task-wide general knowledge and task-specific knowledge in the MAML-initialized parameters.
To this end, in this study, we propose a novel task-adaptive parameter transformation (TAPT) scheme that directly transforms the initial parameters of DeepASC according to their suitability to each task during meta-learning. For the parameter transformation, we regard the gradients of the initial parameters of DeepASC as a measure of their suitability to the task and use them as input to a regression model that learns task-specific knowledge. The regression model then outputs transformation variables while generalizing common knowledge across tasks. Finally, these variables are used to adapt the initial parameters of DeepASC to the target task. Unlike traditional MAML, where the initialization remains static across tasks, TAPT dynamically transforms the initial parameters based on task-specific knowledge. This property makes TAPT more flexible than MAML and more effective in classifying the sounds of diverse species. To prove the effectiveness of the proposed scheme, we compared it with other few-shot ASC schemes in terms of ASC accuracy. We also analyzed the robustness of TAPT to hyperparameters and compared the training efficiency of TAPT and the original MAML through the convergence speed of training accuracy.
The contributions of this paper are summarized as follows:

Appl. Sci. 2024, 14, 1025

Related Works
In this section, we first present a brief literature review on deep learning-based ASC, and then introduce several few-shot learning schemes to overcome the data shortage situation in ASC.
First of all, Şaşmaz et al. proposed a deep learning-based framework for classifying the sounds of various animal species, such as birds, cats, and dogs [12]. They collected 875 sound samples from 10 different animal species from an online sound source site and preprocessed the animal audio data into mel-spectrograms. They then constructed three convolution layers with a max pooling layer and trained this model to classify the target animal species.
Xie et al. proposed a bird sound classification structure that can incorporate acoustic features, visual features, and generalized features from a deep learning model; the first two features were obtained using the traditional classifiers K-nearest neighbor and random forest, respectively, and the generalized features were extracted from a three-layer CNN model [3]. Finally, the bird species is identified by incorporating these three features using a late fusion technique.
On the other hand, Zhang et al. proposed a method that can achieve outstanding bird sound classification accuracy based on deep CNNs (DCNNs) [4,13]. Specifically, they calculated spectrograms of the short-time Fourier transform, Mel-frequency transform, and Chirplet transform for animal sounds, constructed individual DCNN models for each spectrogram, and predicted bird species by combining the features from the DCNN models. Furthermore, they used a transfer-learning (TL) scheme to reduce the number of trainable parameters of the fusion model.
Liao et al. proposed a domestic pig sound classification model called TransformerCNN [14]. The model consists of two network modules, a CNN and a Transformer, in a parallel structure [15]. They determined that the spatial features extracted using a 4conv CNN (a CNN with four convolutional layers) were not sufficient for ASC and added the sequential coding of the Transformer module to capture global features from the input spectrogram. The parallel structure of the two modules extracted richer information from different signals than a single-structure model and showed excellent performance in pig sound recognition.
The aforementioned deep learning-based ASC models usually require a large amount of labeled training data to achieve good performance. In addition, these models can only classify the animal species they are trained on. For animals that are difficult to observe, however, the few-shot learning scheme, which learns how to classify animal species from a few data samples, has attracted great attention in situations of data shortage. For instance, Shi et al. proposed a few-shot acoustic event detection scheme based on three supervised learning schemes and three meta-learning schemes [16]: MetaOptNet [9], MAML [17], and Prototypical Networks (ProtoNet). For the Audioset [18] dataset containing music and animal sounds, they compared those learning schemes in 5-way 1-shot and 5-way 5-shot settings. As a result, all meta-learning schemes outperformed the supervised learning schemes, and ProtoNet was the best among the meta-learning schemes.
Meanwhile, many high-ranked methods for few-shot bioacoustic event detection presented in the DCASE 2022 challenge [19] used ProtoNet as a learning scheme for training CNN-based ASC models. Here, bioacoustic event detection refers to locating animal sounds in audio recordings and classifying the animal species. Although bioacoustic event detection is slightly different from ASC, this indicates that ProtoNet is considered an effective model for ASC.

Proposed Scheme
In this section, we first describe the data collection and preprocessing process. Next, we present the MAML-based few-shot ASC process. Finally, the learning process of the proposed scheme is presented in detail. Figure 1 and Algorithm 1 show the overall architecture of TAPT and the meta-training process of TAPT, respectively.

Algorithm 1. Meta-training process of TAPT, where j is the layer index and l is the number of layers of the network:
1: Preprocess the waveform data D_w into spectrograms D_w′
2: Sample the meta-training task set S_task from M_train
5: while meta-learning epochs do
6: Sample a batch of tasks
7: for each task T_i do
8: Sample data samples from T_i
10-13: Generate the layer-wise transformation variables (γ, β) and compute the transformed initial parameters
15-17: for the number of inner-loop updates do: perform gradient descent to compute the task-adapted parameters
21: Perform gradient descent to update the weights


Data Collection and Preprocessing
We first collect original waveform data D_w from animal sound databases or bioacoustic sensors. To use the animal sounds as input to DeepASC, preprocessing into spectrograms is essential. Compared to the original waveform, a spectrogram contains a frequency-time representation (i.e., a 2D visual representation) that shows how the frequency content of a sound signal changes over time. The preprocessing part (Line 1 in Algorithm 1) is organized as follows: (i) all sound segments in the datasets are sampled at a sampling rate of 16 kHz and (ii) padded to a length of one second; (iii) the short-time Fourier transform (STFT) is conducted on the raw waveform with an FFT size of 256, a window size of 128, and a hop size of 128 to obtain 128 × 87 spectrograms that constitute D_w′.
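As an illustration, the preprocessing steps above can be sketched in NumPy as follows. This is a minimal sketch, not the authors' implementation: the window function, magnitude scaling, and hence the exact output shape are assumptions, so the result will not necessarily match the 128 × 87 shape reported above.

```python
import numpy as np

def preprocess(waveform, sr=16000, n_fft=256, win=128, hop=128):
    """Pad/trim a waveform to one second and compute a magnitude STFT."""
    target = sr  # one second of audio at the given sampling rate
    if len(waveform) < target:
        waveform = np.pad(waveform, (0, target - len(waveform)))
    else:
        waveform = waveform[:target]
    window = np.hanning(win)  # assumed window function
    frames = []
    for start in range(0, len(waveform) - win + 1, hop):
        frame = waveform[start:start + win] * window
        # Zero-padded FFT of size n_fft; keep the magnitude spectrum
        frames.append(np.abs(np.fft.rfft(frame, n=n_fft)))
    # Log compression (assumed); shape: (frequency bins, time frames)
    return np.log1p(np.array(frames).T)
```

With these settings, the sketch yields n_fft/2 + 1 = 129 frequency bins and 125 frames per one-second clip; dropping the highest bin and using different framing conventions would change these numbers toward the reported 128 × 87 shape.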

MAML-Based ASC
In this section, we formulate the MAML algorithm for ASC. First, we divide the whole set of animal classes into a meta-training set M_train and a meta-test set M_test. The meta-training and meta-test processes use M_train and M_test, respectively. Then, we sample ASC tasks from both M_train and M_test. Here, a "task" represents ASC for m animal sound classes with k samples each (i.e., m-way k-shot). From M_train, we sample S_task, a set of meta-training tasks consisting of N different tasks, T_1, . . ., T_N (Line 2). From M_test, we sample T_target, the target ASC task.
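The m-way k-shot task construction above can be illustrated as follows. The function and variable names (sample_episode, class_to_samples) are hypothetical, and a query split is included alongside the k-shot support split.

```python
import random

def sample_episode(class_to_samples, m_way=5, k_shot=1, k_query=5, seed=None):
    """Sample one m-way k-shot task: pick m classes, then k support and
    k_query query samples per class, relabeling classes 0..m-1 per episode."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(class_to_samples), m_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = rng.sample(class_to_samples[cls], k_shot + k_query)
        support += [(s, label) for s in picks[:k_shot]]
        query += [(s, label) for s in picks[k_shot:]]
    return support, query
```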
In the meta-learning stage, MAML encodes the common prior knowledge derived from the meta-training task set S_task into the initial parameters θ of DeepASC f_θ. This initialization serves as a good starting point and allows f_θ to quickly adapt to an unseen target task, T_target, in the adaptation stage. In the original MAML, this stage consists of two loops, an inner loop and an outer loop. In the inner loop (Lines 15-17), the weights of f_θ are adapted to T_i using a small number (k) of animal sound samples, D_{T_i} (the support set from T_i), and a loss function L_{T_i}, as follows:

θ′_i = θ − α ∇_θ L_{T_i}(f_θ), (1)

where α is the inner-loop learning rate. In the outer loop (Line 21), the adapted model f_{θ′_i} is evaluated on unseen animal sound samples, D′_{T_i} (the query set from T_i), to provide feedback on the generalization performance. This feedback is used to update the initial θ over all the tasks in S_task to achieve a better generalization of the common knowledge, as follows:

θ ← θ − η ∇_θ Σ_{T_i ∈ S_task} L_{T_i}(f_{θ′_i}), (2)

where η is the outer-loop (meta) learning rate. After the meta-learning stage, DeepASC f_θ learns the specific knowledge of the target classification task T_target during the adaptation stage. Here, the classes used in this stage are unseen in the meta-learning stage, and MAML allows the model to quickly adapt to the specific knowledge, starting from the initial θ. In this stage, fine-tuning is conducted according to Equation (1) using a small number of samples, D_{T_target} (the support set), obtained from the target task to turn f_θ into f_{θ_target}, a model trained with the specific knowledge of the target task. After the adaptation stage, f_{θ_target} is used to classify new samples (the query set) obtained from the target task.
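For illustration, the two-loop structure described above can be sketched on a toy least-squares "task" standing in for an ASC task. This sketch uses a first-order approximation of the meta-gradient (full MAML differentiates through the inner loop), and all task and variable names are illustrative; the values α = η = 0.001, 5 inner-loop updates, and meta batch size 4 follow the experimental settings reported later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A toy task: a small least-squares problem with its own optimum,
    standing in for one ASC task; returns (support, query) data."""
    true_theta = rng.normal(size=4)
    A_s, A_q = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
    return (A_s, A_s @ true_theta), (A_q, A_q @ true_theta)

def grad(theta, data):
    A, b = data
    return 2.0 * A.T @ (A @ theta - b)  # gradient of ||A·theta − b||²

def loss(theta, data):
    A, b = data
    r = A @ theta - b
    return float(r @ r)

alpha, eta, inner_steps, meta_batch = 0.001, 0.001, 5, 4
theta = np.zeros(4)  # shared initialization θ

for epoch in range(200):
    meta_grad = np.zeros_like(theta)
    for _ in range(meta_batch):
        support, query = sample_task()
        # Inner loop, Equation (1): adapt θ to the task on its support set
        theta_i = theta.copy()
        for _ in range(inner_steps):
            theta_i = theta_i - alpha * grad(theta_i, support)
        # Outer loop, Equation (2): evaluate the adapted parameters on the
        # query set (first-order approximation of the meta-gradient)
        meta_grad += grad(theta_i, query)
    theta = theta - eta * meta_grad
```

At adaptation time, a few inner-loop steps from the learned θ on a new task's support set play the role of fine-tuning with Equation (1).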

Task-Adaptive Parameter Transformation
In the meta-learning stage, we consider N tasks, each consisting of n animal sound samples with different characteristics (knowledge). When these tasks have very diverse knowledge, MAML may have difficulty generalizing them into common knowledge. Initial parameters set through such a generalization may result in poor ASC performance. To address this, we convert the initial parameters θ directly into task-specific parameters θ_i according to task suitability. The task-specific parameters can provide better ASC performance than the original MAML by considering task-specific knowledge as well as common knowledge.
Two essential factors to be determined in the process are (i) the suitability of the initial parameters θ to each task and (ii) the amount of transformation according to the suitability.
In order to assess the suitability of θ to the i-th task T_i, we use the gradients ∇_θ L_{T_i}(f_θ). Although gradients are typically utilized to update parameters via gradient descent, they also contain information about the optimization and the quality of the parameters. Thus, gradients can also be used to represent the meta-information (i.e., suitability) of the parameters with respect to a task [20,21].
In order to obtain an optimal set of transformations of the initial parameters, we construct regression models. The new models g_{φ_γ} and g_{φ_β} (parameterized by φ_γ and φ_β, respectively) take the gradients of T_i as input and generate the transformation variables γ_i and β_i. These variables are used to transform the task-wide initial parameters θ into the task-specific parameters θ_i of T_i (Lines 10-13). Here, the two models g_{φ_γ} and g_{φ_β} are two-layer multilayer perceptrons, with ReLU and tanh as the activation functions at their ends, respectively. Further, γ_i = {γ_i^j} and β_i = {β_i^j} are the sets of layer-wise transformation variables of T_i for the j-th layer of the DeepASC model parameters θ_i.
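A minimal sketch of this transformation step follows, assuming a layer-wise scale-and-shift form θ_i^j = γ_i^j · θ^j + β_i^j; the exact transformation form, the gradient featurization, and all names here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_init(in_dim, hid, out_dim):
    """A tiny two-layer MLP, mirroring the description of g_phi_gamma / g_phi_beta."""
    return [rng.normal(scale=0.1, size=(in_dim, hid)),
            rng.normal(scale=0.1, size=(hid, out_dim))]

def mlp_forward(w, x, final_act):
    h = np.maximum(x @ w[0], 0.0)  # hidden ReLU
    out = h @ w[1]
    return np.maximum(out, 0.0) if final_act == "relu" else np.tanh(out)

n_layers, feat_dim = 3, 16
# Hypothetical per-layer summaries of the gradients of θ for task T_i
grad_features = rng.normal(size=(n_layers, feat_dim))

g_gamma = mlp_init(feat_dim, 32, 1)  # γ generator (ReLU output, scale ≥ 0)
g_beta = mlp_init(feat_dim, 32, 1)   # β generator (tanh output, shift in [−1, 1])

theta = [rng.normal(size=8) for _ in range(n_layers)]  # toy per-layer parameters

theta_task = []
for j in range(n_layers):
    gamma_j = mlp_forward(g_gamma, grad_features[j], "relu")
    beta_j = mlp_forward(g_beta, grad_features[j], "tanh")
    # Assumed scale-and-shift transformation of the j-th layer's parameters
    theta_task.append(gamma_j * theta[j] + beta_j)
```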
In the adaptation stage, we transform the task-wide initial parameters θ to be adaptive to T_target using the gradients ∇_θ L_{T_target}(f_θ), and then conduct fine-tuning using Equation (1).

Datasets
To evaluate the effectiveness of TAPT, we performed various experiments using two public animal sound datasets, BirdVox-14SD and ANAFCC [22], which have significantly different class distributions. BirdVox-14SD contains 6600 h of audio derived from 37 animal classes (e.g., birds and insects) collected from ten autonomous recording units located in Ithaca, New York, USA. Among these classes, we excluded 16 classes that contained audio of unknown animal species or audio of several species and used the remaining 21 classes for the experiments. ANAFCC contains short audio waveforms of bird flight calls derived from 27 classes. In this case, we excluded 12 classes for the same reason as in BirdVox-14SD and used the remaining 16 classes.
Each dataset is divided into two subsets: the meta-training set M_train (from which S_task is extracted) and the meta-test set M_test (from which T_target is extracted). We first sorted the classes of each dataset according to the number of data samples. Then, to represent the data shortage situation, we used the classes with a small number of samples as the meta-test set M_test and the remaining classes as the meta-training set M_train. As a result, 15 classes and 6 classes for BirdVox-14SD and 10 classes and 5 classes for ANAFCC were used as M_train and M_test, respectively. Table 1 shows the number of classes and samples in M_train and M_test of each dataset; Table 2 presents the classes in the datasets and their corresponding animal species; and Figure 2 illustrates the number of data samples from each class in the datasets.
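The class split described above (classes with the fewest samples go to the meta-test set) can be sketched as follows; the function name is illustrative.

```python
from collections import Counter

def split_meta_sets(labels, n_test_classes):
    """Assign the n_test_classes classes with the fewest samples to the
    meta-test set (mimicking data shortage) and the rest to meta-training."""
    counts = Counter(labels)
    ordered = sorted(counts, key=lambda c: (counts[c], c))  # ascending by count
    m_test = set(ordered[:n_test_classes])
    m_train = set(ordered[n_test_classes:])
    return m_train, m_test
```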


Experimental Settings
For the ASC comparison, we considered five learning schemes, CNN, TL, MAML, ProtoNet, and TAPT, and two backbone networks (DeepASC), 4conv and VGG11. The 4conv network consists of four layers, each of which contains thirty-two 3 × 3 convolutional filters, a batch normalization function, a ReLU activation function, and a 2 × 2 max pooling function. On the other hand, VGG11 consists of eleven layers: eight 3 × 3 convolutional layers with max pooling functions and three fully connected layers [23].
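As an illustration of the 4conv backbone's spatial footprint, the feature-map size can be traced layer by layer, assuming "same"-padded stride-1 convolutions (an assumption; under it, only the 2 × 2 max pooling changes the spatial size):

```python
def conv4_output_shape(h, w, n_layers=4, n_filters=32):
    """Feature-map shape after the 4conv backbone described above,
    assuming 'same'-padded stride-1 convolutions and 2x2 max pooling."""
    for _ in range(n_layers):
        h, w = h // 2, w // 2  # each 2x2 max pooling halves both dimensions
    return n_filters, h, w
```

For a 128 × 87 input spectrogram, this gives 32 feature maps of size 8 × 5 under these assumptions.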
All learning schemes except CNN pretrained the backbone network using data obtained from the meta-training task set S_task according to the training process of each scheme and conducted fine-tuning using data obtained from T_target. In the case of CNN, the backbone network was trained only on the animal sound data derived from the target task T_target.
For a fair performance comparison, all learning schemes utilized an identical preprocessing process and identical hyperparameter values. For example, the meta batch size, the learning rates (α and η), and the number of inner-loop updates were 4, 0.001, and 5, respectively. Here, we took the meta batch size and the number of inner-loop updates from [5] and set the learning rate empirically. Furthermore, we set the number of epochs for meta-learning (pretraining in the case of TL) and adaptation to 50 and 10, respectively, while each learning scheme could be stopped early during meta-learning.

Few-Shot ASC Performance
Few-shot ASC was performed in typical settings, namely 5-way 1-shot and 5-way 5-shot classification. As the evaluation metric, we used accuracy, the ratio of the number of correctly classified samples to the total number of samples.
Tables 3 and 4 present the accuracy comparison results of the learning schemes for the BirdVox-14SD dataset and the ANAFCC dataset, respectively. Here, the accuracy values represent the mean and 95% confidence interval over ten repeated experiments. The tables show that TAPT achieves the best accuracy among the learning schemes. In addition, Figures 3 and 4 compare the accuracy of the learning schemes on the two backbone networks for the BirdVox-14SD dataset and the ANAFCC dataset, respectively. From the figures, it can be seen that all learning schemes show better accuracy on VGG11 in most cases due to its depth and number of parameters.


Sensitivity Analysis and Ablation Study
In this experiment, we performed a sensitivity analysis of TAPT to evaluate the effect of the number of inner-loop updates on ASC performance; the results are shown in Table 5 and Figure 5. According to the table, the ASC accuracy of TAPT is less affected by the number of inner-loop updates than that of MAML. Also, the lowest accuracy of TAPT is better than that of all existing few-shot ASC schemes using five inner-loop updates, as shown in Table 3. This indicates that the proposed scheme is more robust to the hyperparameters than the other learning schemes. Furthermore, in the tables, few-shot learning schemes such as MAML and TL showed better ASC performance than CNN because they were trained with S_task in addition to T_target.
Among the few-shot learning schemes, MAML outperformed TL. One advantage of MAML over TL in few-shot ASC is generalization. As TL lacks a knowledge generalization process during pretraining on the source tasks, its adaptability to new tasks in a few-shot setting is limited. In contrast, MAML updates the parameters of DeepASC in a way that generalizes the knowledge of the source tasks, resulting in better ASC accuracy in all cases. Despite MAML's strong performance, ProtoNet showed better classification performance than MAML in most cases. We found several advantages of ProtoNet over MAML in the context of few-shot ASC. First, ProtoNet computes a prototype of each class in a feature space, allowing for a more efficient representation of each class and making it easier to classify new classes based on their class prototypes. MAML, while powerful, learns an initialization that can quickly adapt to new tasks; in some cases, the absence of explicit prototypes might lead to less efficient representations, especially with limited labeled data. This simplicity and efficiency make ProtoNet the state of the art (SOTA) in few-shot ASC, and it is used in various challenges such as few-shot animal sound event detection (DCASE). However, we found that ProtoNet might not capture fine-grained task-specific knowledge because its nearest-neighbor classification is based only on class prototypes.
To address this, we propose a novel task-adaptive MAML that transforms the parameter initialization according to task-specific knowledge for more accurate adaptation to new tasks. This fine-grained task-specific learning improves classification accuracy, especially for classes with subtle differences. To sum up, by customizing the parameter initialization according to task-specific knowledge, TAPT addresses key limitations of MAML and ProtoNet, offering enhanced adaptability, fine-grained task-specific learning, efficient knowledge transfer, improved generalization, and a balance between complexity and flexibility. Due to these properties, the quantitative results demonstrate that TAPT significantly outperforms the comparison schemes in all settings.
Overall, TAPT performed the best in all cases on both datasets, achieving relative gains of up to 20.32% and 8.69% on the BirdVox-14SD and ANAFCC datasets, respectively, compared to the second-best scheme. In summary, the original MAML showed serious limitations in ASC, and TAPT was able to achieve outstanding ASC performance by transforming the initial parameters of DeepASC more adaptively than MAML.

Next, as an ablation study, we compared the ASC performance of MAML and TAPT according to the number of inner-loop updates. The comparison between MAML and TAPT in Tables 3 and 4 might be unfair, because TAPT adjusts its parameters once more for initialization before the inner-loop updates. However, even TAPT with one inner-loop update provides 46.08% better ASC performance than the original MAML with five inner-loop updates, as shown in Table 5. This is attributed to the task-adaptive transformation of TAPT, which allows DeepASC to quickly adapt to unseen tasks.


Comparison of Training Accuracy
In this last experiment, we compare the training accuracy of MAML and TAPT. Figure 6a,b present their 5-way 5-shot training accuracy and loss, respectively, according to the number of inner-loop update steps contained in the training epochs for the ANAFCC dataset when using 4conv as the backbone. Figure 6 shows that, compared to MAML, TAPT's training accuracy and training loss quickly converge to 1 and 0, respectively. This means that the parameter transformation of TAPT enables DeepASC to quickly adapt to unseen animal sound data. As a result, the number of epochs and the training time required to train TAPT are much smaller than for MAML.

Conclusions
In this paper, we proposed TAPT, a novel MAML-based task-adaptive parameter transformation scheme that can alleviate the data shortage problem in ASC. TAPT generates transformation variables while generalizing common knowledge and uses them to adjust the parameters for each specific classification task. To evaluate the effectiveness of the proposed scheme, we conducted extensive experiments on five different learning schemes using two public datasets and two backbone networks. In the experimental results, TAPT showed better performance than the other few-shot ASC schemes, achieving up to a 20.32% improvement in ASC accuracy compared to the SOTA scheme. In addition, a sensitivity analysis confirmed that TAPT is robust to the number of inner-loop updates, and an ablation study showed that the ASC accuracy improvement of TAPT results from the proposed parameter transformation. Finally, the training accuracy comparison demonstrated that TAPT learns to classify unseen tasks more efficiently than MAML.
In the future, based on our few-shot learning scheme, we plan to develop software that can be mounted on audio sensor equipment for animal species classification. In addition, we will consider a hybrid MAML scheme that can incorporate features from the waveform as well as the spectrogram.


Figure 3. Accuracy comparison of the ASC schemes on the BirdVox-14SD dataset in (a) 1-shot and (b) 5-shot settings.

Figure 4. Accuracy comparison of the ASC schemes on the ANAFCC dataset in (a) 1-shot and (b) 5-shot settings.

Figure 6. Comparison of the training processes of TAPT and MAML according to training steps. (a) Training accuracy; (b) training loss.


Table 1. Number of classes and samples in the meta-training and meta-test sets for each dataset.

Table 2. Classes in the datasets and their corresponding animal species.

Table 3. Accuracy comparison of 5-way ASC on the BirdVox-14SD dataset (bold and underlined values indicate the best and second-best accuracies, respectively).

Table 5. Accuracy of 5-way 5-shot ASC with the number of inner-loop update steps on the BirdVox-14SD dataset.
