Search Results (7)

Search Parameters:
Keywords = birdsong recognition

19 pages, 2225 KiB  
Article
A Bird Vocalization Classification Method Based on Bidirectional FBank with Enhanced Robustness
by Chizhou Peng, Yan Zhang, Jing Lu, Danjv Lv and Yanjiao Xiong
Appl. Sci. 2025, 15(9), 4913; https://doi.org/10.3390/app15094913 - 28 Apr 2025
Viewed by 412
Abstract
Recent advances in audio signal processing and pattern recognition have made the classification of bird vocalizations a focus of bioacoustic research. However, the accurate classification of birdsongs is challenged by environmental noise and the limitations of traditional feature extraction methods. This study proposes the iWAVE-BiFBank method, an approach combining improved wavelet adaptive denoising (iWAVE) and a bidirectional Mel-filter bank (BiFBank) for effective birdsong classification with enhanced robustness. The iWAVE method achieves adaptive optimization using the autocorrelation coefficient and peak-sum-ratio (PSR), overcoming the manual adjustment requirements and incompleteness of traditional methods. BiFBank combines FBank and inverse FBank (iFBank) to enhance feature representation. This fusion addresses the shortcomings of FBank and introduces novel transformation methods and filter designs to iFBank, with a focus on high-frequency components. The iWAVE-BiFBank method creates a robust feature set that effectively reduces the noise of audio signals and captures both low- and high-frequency information. Experiments were conducted on a dataset of 16 bird species, and the proposed method was verified with a random forest (RF) classifier. The results show that iWAVE-BiFBank achieves an accuracy of 94.00%, with other indicators, including the F1 score, exceeding 93.00%, outperforming all other tested methods. Overall, the proposed method effectively reduces audio noise, comprehensively captures the characteristics of bird vocalizations, and provides improved classification performance.
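As a rough illustration of the pipeline this abstract outlines (filter-bank features fed to a random forest), the sketch below extracts log Mel filter-bank statistics with librosa and trains a scikit-learn random forest. It is not the authors' iWAVE denoising or BiFBank feature design, and the file names and labels are placeholders.

```python
# Illustrative sketch only: plain log Mel filter-bank (FBank-style) features fed to a
# random forest, not the paper's iWAVE-BiFBank method.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def fbank_clip_features(path, sr=22050, n_mels=40):
    """Load a recording and summarise its log Mel filter-bank energies per band."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    # Mean and standard deviation over time give one fixed-length vector per clip.
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

# Hypothetical training data: paths to birdsong clips and integer species labels.
train_files = ["species00_clip01.wav", "species01_clip01.wav"]
train_labels = [0, 1]

X_train = np.stack([fbank_clip_features(f) for f in train_files])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, train_labels)
print(clf.predict(X_train))  # sanity check on the (placeholder) training data
```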

17 pages, 4453 KiB  
Article
A Multi-Scale Feature Fusion Hybrid Convolution Attention Model for Birdsong Recognition
by Lianglian Gu, Guangzhi Di, Danju Lv, Yan Zhang, Yueyun Yu, Wei Li and Ziqian Wang
Appl. Sci. 2025, 15(8), 4595; https://doi.org/10.3390/app15084595 - 21 Apr 2025
Cited by 1 | Viewed by 644
Abstract
Birdsong is a valuable indicator of rich biodiversity and ecological significance. Although feature extraction has demonstrated satisfactory performance in classification, single-scale feature extraction methods may not fully capture the complexity of birdsong, potentially leading to suboptimal classification outcomes. The integration of multi-scale feature extraction and fusion enables a model to better handle scale variations, thereby enhancing its adaptability across different scales. To address this issue, we propose a multi-scale hybrid convolutional attention mechanism model (MUSCA). This method combines depthwise separable convolution and traditional convolution for feature extraction and incorporates self-attention and spatial attention mechanisms to refine spatial and channel features, thereby improving the effectiveness of multi-scale feature extraction. To further enhance multi-scale feature fusion, a layer-by-layer alignment feature fusion method is developed to establish deeper correlations, thereby improving classification accuracy and robustness. Using this method, we identified 20 bird species on three spectrogram representations, the wavelet spectrogram, log-Mel spectrogram, and log-spectrogram, with recognition rates of 93.79%, 96.97%, and 95.44%, respectively, which are 3.26%, 1.88%, and 3.09% higher than those of the ResNet18 model. The results indicate that the proposed MUSCA method is competitive with recent state-of-the-art methods.
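To make the multi-scale idea concrete, here is a minimal PyTorch sketch of a block that runs standard, larger-kernel, and depthwise separable convolutions in parallel and re-weights the fused channels with a simple attention gate. Kernel sizes, channel counts, and the attention design are illustrative assumptions, not the MUSCA architecture.

```python
# Minimal multi-scale convolutional block with channel attention, loosely inspired by
# the idea described above; not the paper's MUSCA model.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=16):
        super().__init__()
        # Parallel branches with different receptive fields (multi-scale extraction).
        self.branch3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        # Depthwise separable branch: depthwise conv followed by pointwise conv.
        self.branch_dw = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
        )
        # Simple squeeze-and-excitation style channel attention over the fused maps.
        fused = out_ch * 3
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(fused, fused // 4, 1), nn.ReLU(),
            nn.Conv2d(fused // 4, fused, 1), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, 1, freq, time) spectrogram
        feats = torch.cat([self.branch3(x), self.branch5(x), self.branch_dw(x)], dim=1)
        return feats * self.attn(feats)        # re-weight channels by attention

spec = torch.randn(2, 1, 128, 256)             # dummy log-Mel spectrogram batch
out = MultiScaleBlock()(spec)                  # -> (2, 48, 128, 256)
```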

21 pages, 12814 KiB  
Article
Multi-Scale Deep Feature Fusion with Machine Learning Classifier for Birdsong Classification
by Wei Li, Danju Lv, Yueyun Yu, Yan Zhang, Lianglian Gu, Ziqian Wang and Zhicheng Zhu
Appl. Sci. 2025, 15(4), 1885; https://doi.org/10.3390/app15041885 - 12 Feb 2025
Cited by 1 | Viewed by 1305
Abstract
Birds are significant bioindicators in the assessment of habitat biodiversity, ecological impacts, and ecosystem health. With bird vocalization data becoming easier to acquire, and with deep learning and machine learning as technical support, exploring recognition and classification networks suitable for bird calls has become a focus of bioacoustics research. Because the spectral differences among bird calls are much greater than the differences between human languages, constructing birdsong classification networks directly on human speech recognition networks does not yield satisfactory results. Effectively capturing the differences in birdsong across species is a crucial factor in improving recognition accuracy. To address these feature differences, this study proposes multi-scale deep features and separates the classification stage from the deep network, using machine learning classifiers to handle features with distinct differences across birdsongs. We validate the effectiveness of multi-scale deep features on a publicly available dataset of 20 bird species. The experimental results show that the accuracy of the multi-scale deep features on the log-wavelet spectrum, log-Mel spectrum, and log-power spectrum reaches 94.04%, 97.81%, and 95.89%, respectively, an improvement over single-scale deep features on all three spectrograms. Comparative experiments show that the proposed multi-scale deep feature method is superior to five state-of-the-art birdsong identification methods, providing new perspectives and tools for birdsong identification research, with value for ecological monitoring, biodiversity conservation, and forest research.
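The following sketch illustrates the general idea of multi-scale deep features with a separate machine learning classifier: pool feature maps from several depths of one CNN, concatenate them, and train an SVM on the resulting vectors. The tiny backbone, layer choices, and SVM are illustrative assumptions, not the paper's network or training procedure.

```python
# Illustrative sketch: "multi-scale deep features" from several CNN depths, classified
# by a separate machine-learning model (SVM). Not the paper's exact method.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SmallBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling per stage

    def forward(self, x):
        f1 = self.stage1(x)                  # shallow, fine-grained scale
        f2 = self.stage2(f1)                 # intermediate scale
        f3 = self.stage3(f2)                 # deep, coarse scale
        pooled = [self.pool(f).flatten(1) for f in (f1, f2, f3)]
        return torch.cat(pooled, dim=1)      # multi-scale deep feature vector

# Placeholder batch of log-Mel spectrograms and labels for two "species".
specs = torch.randn(8, 1, 128, 128)
labels = [0, 1, 0, 1, 0, 1, 0, 1]

backbone = SmallBackbone().eval()
with torch.no_grad():
    feats = backbone(specs).numpy()          # (8, 16 + 32 + 64) = (8, 112)

clf = SVC(kernel="rbf").fit(feats, labels)   # separate ML classifier on deep features
```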

16 pages, 3821 KiB  
Article
Improved Broad Learning System for Birdsong Recognition
by Jing Lu, Yan Zhang, Danjv Lv, Shanshan Xie, Yixing Fu, Dan Lv, Youjie Zhao and Zhun Li
Appl. Sci. 2023, 13(19), 11009; https://doi.org/10.3390/app131911009 - 6 Oct 2023
Cited by 5 | Viewed by 1617
Abstract
Birds play a vital and indispensable role in biodiversity and environmental conservation. Protecting bird diversity is crucial for maintaining the balance of nature, promoting ecosystem health, and ensuring sustainable development. The Broad Learning System (BLS) exhibits an excellent ability to extract highly discriminative features from raw inputs and construct complex feature representations by combining feature nodes and enhancement nodes, thereby enabling effective recognition and classification of various birdsongs. However, within the BLS, the selection of feature nodes and enhancement nodes is critical, yet the model lacks the capability to identify high-quality network nodes. To address this issue, this paper proposes a novel method that introduces residual blocks and Mutual Similarity Criterion (MSC) layers into BLS to form an improved BLS (RMSC-BLS), making it easier for BLS to automatically select optimal features related to the output. Experimental results demonstrate that the accuracy of the RMSC-BLS model for the three constructed features, MFCC, dMFCC, and dsquence, is 78.85%, 79.29%, and 92.37%, respectively, which is 4.08%, 4.50%, and 2.38% higher than that of the original BLS model. In addition, compared with other models, the RMSC-BLS model shows superior recognition performance, higher stability, and better generalization ability, providing an effective solution for birdsong recognition.
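For readers unfamiliar with the Broad Learning System, here is a minimal sketch of the vanilla structure the abstract builds on: random feature nodes, nonlinear enhancement nodes, and output weights solved in closed form by ridge regression. It omits the residual blocks and MSC layers of RMSC-BLS, and all sizes and data are illustrative placeholders.

```python
# Minimal sketch of a vanilla Broad Learning System (BLS): random feature nodes,
# tanh enhancement nodes, and ridge-regression output weights. The paper's RMSC-BLS
# adds residual blocks and Mutual Similarity Criterion (MSC) layers on top of this.
import numpy as np

rng = np.random.default_rng(0)

def bls_fit(X, Y, n_feature=50, n_enhance=100, reg=1e-3):
    """Fit output weights W so that [Z | H] @ W ~= Y (one-hot labels)."""
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = X @ Wf                                   # linear feature nodes
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)                          # nonlinear enhancement nodes
    A = np.hstack([Z, H])
    # Ridge-regularised least squares for the output layer.
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def bls_predict(X, Wf, We, W):
    Z = X @ Wf
    A = np.hstack([Z, np.tanh(Z @ We)])
    return np.argmax(A @ W, axis=1)

# Placeholder data: 60 clips described by 20-dimensional features, 3 classes.
X = rng.standard_normal((60, 20))
Y = np.eye(3)[rng.integers(0, 3, size=60)]       # one-hot targets
params = bls_fit(X, Y)
print(bls_predict(X, *params)[:10])
```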

23 pages, 1583 KiB  
Article
Religion, Animals, and Contemplation
by Louis Komjathy
Religions 2022, 13(5), 457; https://doi.org/10.3390/rel13050457 - 18 May 2022
Cited by 3 | Viewed by 4095
Abstract
Animals teach each other. For humans open to trans-species and inter-species dialogue and interaction, animal-others offer important insights into, invocations of and models for diverse and alternative modes of perceiving, experiencing, relating, and being. They in turn challenge anthropocentric conceptions of consciousness and offer glimpses of and perhaps inspiration for increased awareness and presence. Might the current academic vogue of “equity, diversity, and inclusion” (EDI; or whichever order you prefer) even extend to “non-human” animals? Might this also represent one essential key to the human aspiration for freedom, wellness, and justice? The present article explores the topic of “religion and animals” through the complementary dimension of “contemplation”. Developing a fusion of Animal Studies, Contemplative Studies, Daoist Studies, and Religious Studies, I explore the topic with particular consideration of the indigenous Chinese religion of Daoism with a comparative and cross-cultural sensibility. I draw specific attention to the varieties of Daoist animal engagement, including animal companionship and becoming/being animal. Theologically speaking, this involves recognition of the reality of the Dao (sacred) manifesting through each and every being, and the possibility of inter/trans-species communication, relationality, and even identification. In the process, I suggest that “animal contemplation”, a form of contemplative practice and contemplative experience that places “the animal question” at the center and explores the possibility (actuality) of “shared animality”, not only offers important opportunities for becoming fully human (animal), but also represents one viable contribution to resolving impending (ongoing) ecological collapse, or at least the all-too-real possibility of a world without butterflies, bees, and birdsong.
(This article belongs to the Special Issue Religion, Animals, and X)

25 pages, 4384 KiB  
Article
An FPGA-Based WASN for Remote Real-Time Monitoring of Endangered Species: A Case Study on the Birdsong Recognition of Botaurus stellaris
by Marcos Hervás, Rosa Ma Alsina-Pagès, Francesc Alías and Martí Salvador
Sensors 2017, 17(6), 1331; https://doi.org/10.3390/s17061331 - 8 Jun 2017
Cited by 14 | Viewed by 6771
Abstract
Fast environmental variations due to climate change can cause mass decline or even extinctions of species, having a dramatic impact on the future of biodiversity. During the last decade, different approaches have been proposed to track and monitor endangered species, generally based on costly semi-automatic systems that require human supervision, which limits coverage and time. However, the recent emergence of Wireless Acoustic Sensor Networks (WASN) has allowed non-intrusive remote monitoring of endangered species in real time through the automatic identification of the sounds they emit. In this work, an FPGA-based WASN centralized architecture is proposed and validated on a simulated operation environment. The feasibility of the architecture is evaluated in a case study designed to detect the threatened Botaurus stellaris among 19 other cohabiting bird species in the Parc Natural dels Aiguamolls de l'Empordà.
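As a loose illustration of the kind of detector a sensor node in such a network could run, the sketch below computes MFCC statistics for a clip and applies a binary classifier to flag the target species. It is a host-side Python mock-up with placeholder data, not the paper's FPGA design or its actual feature set.

```python
# Illustrative host-side mock-up of a per-node detector: MFCC features plus a binary
# classifier flagging the target species. The paper implements its processing on an
# FPGA within a Wireless Acoustic Sensor Network; this is only a conceptual sketch.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def mfcc_clip_features(y, sr, n_mfcc=13):
    """Summarise a clip by the mean and std of its MFCCs over time."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training data: clips labelled 1 for the target species, 0 otherwise.
sr = 22050
train_clips = [np.random.randn(sr * 2) for _ in range(6)]   # placeholder 2 s clips
train_labels = [1, 0, 1, 0, 0, 1]
X = np.stack([mfcc_clip_features(y, sr) for y in train_clips])
detector = LogisticRegression(max_iter=1000).fit(X, train_labels)

def node_detect(y, sr, threshold=0.8):
    """Return True when the detector is confident the target species is present."""
    prob = detector.predict_proba([mfcc_clip_features(y, sr)])[0, 1]
    return prob >= threshold

print(node_detect(train_clips[0], sr))
```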

10 pages, 9858 KiB  
Data Descriptor
Towards Automatic Bird Detection: An Annotated and Segmented Acoustic Dataset of Seven Picidae Species
by Ester Vidaña-Vila, Joan Navarro and Rosa Ma Alsina-Pagès
Data 2017, 2(2), 18; https://doi.org/10.3390/data2020018 - 16 May 2017
Cited by 9 | Viewed by 8643
Abstract
Analysing behavioural patterns of bird species in a certain region enables researchers to recognize forthcoming changes in environment, ecology, and population. Ornithologists spend many hours observing and recording birds in their natural habitat to compare different audio samples and extract valuable insights. This manual process is typically undertaken by highly experienced birders who identify every species and its associated type of sound. In recent years, some public repositories hosting labelled acoustic samples from different bird species have emerged, resulting in appealing datasets that computer scientists can use to test the accuracy of their machine learning algorithms and assist ornithologists in the time-consuming process of analyzing audio data. Current limitations in the performance of these algorithms come from the fact that the acoustic samples in these datasets combine fragments containing only environmental noise with fragments containing bird sound (i.e., the computer confuses environmental sound with bird sound). Therefore, the purpose of this paper is to release a dataset lasting more than 4984 s that contains differentiated samples of (1) bird sounds and (2) environmental sounds. This data descriptor releases the processed audio samples—originally obtained from the Xeno-Canto repository—of the seven known species of the Picidae family inhabiting the Iberian Peninsula, which are good indicators of habitat quality and have significant value from an environmental conservation point of view.
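One simple way to approximate the kind of bird-versus-background split the dataset provides is an energy threshold over short frames; the sketch below is a naive take on that idea, not the procedure used to build the published annotations.

```python
# Naive energy-based segmentation: label short frames as "bird" when their RMS energy
# exceeds a noise-relative threshold, otherwise as "environment". This only illustrates
# the bird-vs-background split the dataset provides; the released annotations were not
# produced this way.
import numpy as np
import librosa

def segment_bird_frames(path, frame_length=2048, hop_length=512, factor=3.0):
    y, sr = librosa.load(path, sr=None)
    rms = librosa.feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)[0]
    threshold = factor * np.median(rms)          # background level estimated by the median
    is_bird = rms > threshold                    # boolean mask per frame
    times = librosa.frames_to_time(np.arange(len(rms)), sr=sr, hop_length=hop_length)
    return times, is_bird

# Hypothetical usage on a Xeno-Canto recording downloaded locally.
# times, is_bird = segment_bird_frames("XC123456.wav")
# print(f"{is_bird.mean():.0%} of frames flagged as containing bird sound")
```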
