Search Results (1)

Search Parameters:
Keywords = iDAFF

22 pages, 5546 KiB  
Article
DBS-NET: A Dual-Branch Network Integrating Supervised and Contrastive Self-Supervised Learning for Birdsong Classification
by Ziyi Wang, Hao Shi, Yan Zhang, Yong Cao and Danjv Lv
Appl. Sci. 2025, 15(10), 5418; https://doi.org/10.3390/app15105418 - 12 May 2025
Cited by 1 | Viewed by 443
Abstract
Birdsong classification plays a crucial role in monitoring species distribution, population structure, and environmental changes. Existing methods typically use supervised learning to extract specific features for classification, but this may limit the generalization ability of the model and lead to generalization errors. Unsupervised feature extraction methods are an emerging approach that offers enhanced adaptability, particularly for handling unlabeled and diverse birdsong data; their drawback, however, is that they can add time costs to downstream tasks and reduce overall efficiency. To address these challenges, we propose DBS-NET, a dual-branch network model for birdsong classification. DBS-NET consists of two branches: a supervised branch (Res-iDAFF) and an unsupervised branch based on a contrastive learning approach. We introduce an iterative dual-attention feature fusion (iDAFF) module in the backbone to enhance contextual feature extraction, and a linear residual classifier is used to further improve classification accuracy. Additionally, to address class imbalance in the dataset, a weighted loss function is introduced that adjusts the cross-entropy loss with optimized class weights. To improve training efficiency, the backbone networks of the two branches share a portion of their weights, reducing the computational overhead. In experiments on a self-built 30-class dataset and the Birdsdata dataset, the proposed method achieved accuracies of 97.54% and 97.09%, respectively, outperforming other supervised and unsupervised birdsong classification methods.
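The abstract names three concrete design elements: a backbone shared between the two branches, a supervised classification head alongside a contrastive projection head, and a class-weighted cross-entropy loss for imbalance. The sketch below is a minimal PyTorch-style illustration of that structure only; the Res-iDAFF backbone, iDAFF module, contrastive objective, linear residual classifier, input shape, layer sizes, and class counts are all placeholder assumptions, not the authors' implementation.

# Minimal dual-branch sketch with a shared backbone (illustrative assumptions,
# not the DBS-NET code): a plain ResNet-18 stands in for Res-iDAFF, and a
# simple linear head stands in for the linear residual classifier.
import torch
import torch.nn as nn
import torchvision.models as models

class DualBranchNet(nn.Module):
    def __init__(self, num_classes: int = 30, proj_dim: int = 128):
        super().__init__()
        # Shared backbone: both branches reuse these weights, which is how
        # the abstract describes the reduction in computational overhead.
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        feat_dim = backbone.fc.in_features
        # Supervised branch: classification head over the shared features.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Unsupervised branch: projection head producing embeddings for a
        # contrastive loss (the specific contrastive formulation is not
        # given in the abstract, so only the head is sketched here).
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x: torch.Tensor):
        h = self.encoder(x).flatten(1)                          # shared features
        logits = self.classifier(h)                             # supervised output
        z = nn.functional.normalize(self.projector(h), dim=1)   # contrastive embedding
        return logits, z

# Usage with a hypothetical batch of 3-channel spectrogram-like inputs.
model = DualBranchNet(num_classes=30)
x = torch.randn(4, 3, 224, 224)
logits, z = model(x)

# Class-imbalance handling: weight the cross-entropy loss by inverse class
# frequency, one plausible reading of the abstract's "optimized class weights".
class_counts = torch.randint(20, 200, (30,)).float()            # hypothetical counts
class_weights = class_counts.sum() / (len(class_counts) * class_counts)
ce_loss = nn.CrossEntropyLoss(weight=class_weights)

In a setup like this, the supervised branch would be trained with the weighted cross-entropy on labeled data while the projection head is trained with a contrastive objective, with gradients from both losses flowing into the shared encoder.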