Search Results (16)

Search Parameters:
Keywords = musical instrument classification

18 pages, 4696 KiB  
Article
A Deep-Learning Framework with Multi-Feature Fusion and Attention Mechanism for Classification of Chinese Traditional Instruments
by Jinrong Yang, Fang Gao, Teng Yun, Tong Zhu, Huaixi Zhu, Ran Zhou and Yikun Wang
Electronics 2025, 14(14), 2805; https://doi.org/10.3390/electronics14142805 - 12 Jul 2025
Viewed by 339
Abstract
Chinese traditional instruments are diverse and encompass a rich variety of timbres and rhythms, presenting considerable research potential. This work proposes a deep-learning framework for the automated classification of Chinese traditional instruments, addressing the challenges of acoustic diversity and cultural preservation. By integrating two datasets, CTIS and ChMusic, we constructed a combined dataset comprising four instrument families: wind, percussion, plucked string, and bowed string. Three time-frequency features, namely MFCC, CQT, and Chroma, were extracted to capture diverse sound information. A convolutional neural network architecture was designed, incorporating 3-channel spectrogram feature stacking and a hybrid channel–spatial attention mechanism to enhance the extraction of critical frequency bands and feature weights. Experimental results demonstrated that the feature-fusion method improved classification performance compared to a single feature as input. Meanwhile, the attention mechanism further boosted test accuracy to 98.79%, outperforming baseline models by 2.8% and achieving superior F1 scores and recall compared to classical architectures. An ablation study confirmed the contribution of the attention mechanisms. This work validates the efficacy of deep learning in preserving intangible cultural heritage through precise analysis, offering a feasible methodology for the classification of Chinese traditional instruments.
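
As an illustrative companion to this abstract, here is a minimal sketch of the 3-channel stacking of MFCC, CQT, and Chroma it describes, using librosa; the file path, feature sizes, and 128×128 target shape are assumptions, not the paper's settings.

```python
import librosa
import numpy as np
from scipy.ndimage import zoom

def three_channel_features(path, sr=22050, shape=(128, 128)):
    """Stack MFCC, CQT, and Chroma into a 3-channel input for a CNN."""
    y, sr = librosa.load(path, sr=sr)
    feats = [
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40),   # cepstral envelope
        np.abs(librosa.cqt(y=y, sr=sr)),               # constant-Q magnitude
        librosa.feature.chroma_stft(y=y, sr=sr),       # pitch-class energy
    ]
    channels = []
    for f in feats:
        f = (f - f.min()) / (np.ptp(f) + 1e-9)         # min-max scale to [0, 1]
        # Resize each feature map to a common height/width so they stack
        channels.append(zoom(f, (shape[0] / f.shape[0], shape[1] / f.shape[1])))
    return np.stack(channels, axis=0)                  # shape (3, 128, 128)
```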

14 pages, 878 KiB  
Article
Multi-Instance Multi-Scale Graph Attention Neural Net with Label Semantic Embeddings for Instrument Recognition
by Na Bai, Zhaoli Wu and Jian Zhang
Signals 2025, 6(3), 30; https://doi.org/10.3390/signals6030030 - 24 Jun 2025
Viewed by 299
Abstract
Instrument recognition is a crucial aspect of music information retrieval, and in recent years machine learning-based methods have become the primary approach to addressing this challenge. However, existing models often struggle to accurately identify multiple instruments within music tracks that vary in length and quality. One key issue is that the instruments of interest may not appear in every clip of the audio sample, and when they do, they are often unevenly distributed across different sections of the track. Additionally, in polyphonic music, multiple instruments are often played simultaneously, leading to signal overlap. Using the same overlapping audio signals as partial classification features for different instruments reduces the distinguishability of features between instruments and thereby degrades recognition performance. These complexities present significant challenges for current instrument recognition models. Therefore, this paper proposes a multi-instance multi-scale graph attention neural network (MMGAT) with label semantic embeddings for instrument recognition. MMGAT constructs an instance correlation graph that models the presence and quantitative timbre similarity of instruments at different positions from the perspective of multi-instance learning. Then, to enhance the distinguishability of overlapping signals from different instruments and improve classification accuracy, MMGAT learns semantic information from the labels of different instruments as embeddings and incorporates them into the overlapping audio signal features. Finally, an instance-based multi-instance multi-scale graph attention network recognizes the instruments from the instance correlation graphs and label semantic embeddings. The effectiveness of MMGAT is validated through experiments against commonly used instrument recognition models; the results demonstrate that MMGAT outperforms existing approaches in instrument recognition tasks.
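
The paper's exact graph construction is not reproduced here; the sketch below shows one plausible reading of an instance correlation graph, with the clip embeddings and similarity threshold as assumptions.

```python
import numpy as np

def instance_correlation_graph(clip_embeddings, threshold=0.7):
    """Connect clips (instances) of a track whose timbre embeddings are similar.

    clip_embeddings: (n_clips, dim) array, one embedding per audio clip.
    Returns a binary adjacency matrix usable by a graph attention network.
    """
    x = clip_embeddings / np.linalg.norm(clip_embeddings, axis=1, keepdims=True)
    sim = x @ x.T                      # pairwise cosine similarity
    adj = (sim >= threshold).astype(float)
    np.fill_diagonal(adj, 1.0)         # self-loops for message passing
    return adj

adj = instance_correlation_graph(np.random.randn(5, 64))  # 5 clips, 64-d embeddings
```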

16 pages, 10466 KiB  
Article
Hierarchical Residual Attention Network for Musical Instrument Recognition Using Scaled Multi-Spectrogram
by Rujia Chen, Akbar Ghobakhlou and Ajit Narayanan
Appl. Sci. 2024, 14(23), 10837; https://doi.org/10.3390/app142310837 - 22 Nov 2024
Cited by 2 | Viewed by 1375
Abstract
Musical instrument recognition is a relatively unexplored area of machine learning due to the need to analyze complex spatial–temporal audio features. Traditional methods using individual spectrograms, like STFT, Log-Mel, and MFCC, often miss the full range of features. Here, we propose a hierarchical residual attention network using a scaled combination of multiple spectrograms, including STFT, Log-Mel, MFCC, and CST features (Chroma, Spectral contrast, and Tonnetz), to create a comprehensive sound representation. This model enhances the focus on relevant spectrogram parts through attention mechanisms. Experimental results with the OpenMIC-2018 dataset show significant improvement in classification accuracy, especially with the "Magnified 1/4 Size" configuration. Future work will optimize CST feature scaling, explore advanced attention mechanisms, and apply the model to other audio tasks to assess its generalizability.
(This article belongs to the Special Issue AI in Audio Analysis: Spectrogram-Based Recognition)
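
For readers who want to try the CST features named above, here is a minimal librosa sketch with default parameters (not the paper's configuration); the bundled demo clip stands in for a real instrument recording.

```python
import librosa
import numpy as np

y, sr = librosa.load(librosa.example("trumpet"))     # bundled demo recording

cst = np.vstack([
    librosa.feature.chroma_stft(y=y, sr=sr),         # 12 pitch classes
    librosa.feature.spectral_contrast(y=y, sr=sr),   # 7 contrast bands
    librosa.feature.tonnetz(y=y, sr=sr),             # 6 tonal centroid dimensions
])
print(cst.shape)   # (25, n_frames)
```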

15 pages, 386 KiB  
Article
Detecting Selected Instruments in the Sound Signal
by Daniel Kostrzewa, Paweł Szwajnoch, Robert Brzeski and Dariusz Mrozek
Appl. Sci. 2024, 14(14), 6330; https://doi.org/10.3390/app14146330 - 20 Jul 2024
Viewed by 1663
Abstract
Detecting instruments in a music signal is often needed for database indexing, song annotation, and applications for musicians and music producers; effective methods that solve this task automatically are therefore required. In this paper, the task is solved using mel-frequency cepstral coefficients (MFCC) and various architectures of artificial neural networks. The authors' contribution to automatic instrument detection covers the methods used, particularly the neural network architectures and the voting committees created. All these methods were evaluated, and the results are presented and discussed in the paper. The proposed methods show that the best classification quality was obtained by an extensive model, a so-called committee of voting classifiers.
(This article belongs to the Special Issue Algorithmic Music and Sound Computing)
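
A hedged sketch of a committee of voting classifiers with scikit-learn; the paper builds committees of neural networks, so the member classifiers, MFCC statistics, and dummy data below are illustrative stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# X: per-excerpt MFCC statistics (e.g., means and stds); y: instrument labels
X, y = np.random.randn(200, 26), np.random.randint(0, 4, 200)  # dummy data

committee = VotingClassifier(
    estimators=[
        ("mlp_small", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
        ("mlp_large", MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)),
        ("forest", RandomForestClassifier(n_estimators=100)),
        ("svm", SVC(probability=True)),
    ],
    voting="soft",   # average predicted probabilities across committee members
)
committee.fit(X, y)
```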

15 pages, 4997 KiB  
Article
Comparative Study of Musical Timbral Variations: Crescendo and Vibrato Using FFT-Acoustic Descriptor
by Yubiry Gonzalez and Ronaldo C. Prati
Eng 2023, 4(3), 2468-2482; https://doi.org/10.3390/eng4030140 - 21 Sep 2023
Cited by 1 | Viewed by 1630
Abstract
A quantitative evaluation of the musical timbre and its variations is important for the analysis of audio recordings and computer-aided music composition. Using FFT acoustic descriptors and their representation in an abstract timbral space, variations in a sample of monophonic sounds of chordophones (violin, cello) and aerophones (trumpet, transverse flute, and clarinet) are analyzed. It is concluded that the FFT acoustic descriptors allow us to distinguish the timbral variations in the musical dynamics, including crescendo and vibrato. Furthermore, using the Random Forest algorithm, it is shown that the FFT-Acoustic descriptors provide a statistically significant classification to distinguish musical instruments, families of instruments, and dynamics. We observed an improvement in the FFT-Acoustic descriptors when classifying pitch compared to some timbral features of Librosa.
(This article belongs to the Special Issue Feature Papers in Eng 2023)
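
To illustrate the descriptor-plus-Random-Forest pipeline, here is a sketch with generic spectral moments; the authors' actual dimensionless descriptor set is defined in the paper, not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fft_descriptors(y, sr):
    """A few FFT-based spectral descriptors for one monophonic note."""
    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    p = spec / spec.sum()                       # normalized magnitude spectrum
    centroid = (freqs * p).sum()                # spectral centroid, Hz
    spread = np.sqrt(((freqs - centroid) ** 2 * p).sum())
    return np.array([centroid, spread, spread / centroid])  # last is dimensionless

# Classify instruments/dynamics from descriptor vectors (dummy data here)
X = np.random.rand(300, 3)
labels = np.random.randint(0, 5, 300)
RandomForestClassifier(n_estimators=200).fit(X, labels)
```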

97 pages, 144318 KiB  
Article
A Review on Acoustics of Wood as a Tool for Quality Assessment
by Voichita Bucur
Forests 2023, 14(8), 1545; https://doi.org/10.3390/f14081545 - 28 Jul 2023
Cited by 20 | Viewed by 4944
Abstract
Acoustics is a field with significant application in wood science and technology for the classification and grading, through non-destructive tests, of a large variety of products from standing trees to building structural elements and musical instruments. In this review article the following aspects are treated: (1) The theoretical background related to the acoustical characterization of wood as an orthotropic material. We refer to wave propagation in anisotropic media, to the wood anatomic structure and propagation phenomena, to the velocity of ultrasonic waves, and to the elastic constants of an orthotropic solid. The acoustic methods for the determination of the elastic constants of wood range from the low-frequency domain to the ultrasonic domain, using direct contact techniques or ultrasonic spectroscopy. (2) The acoustic and ultrasonic methods for quality assessment of trees, logs, lumber, and structural timber products. Scattering-based techniques and ultrasonic tomography are used for quality assessment of standing trees and green logs. Methods based on scanning stress waves, dry-point-contact ultrasound, or air-coupled ultrasound are discussed for quality assessment of structural composite timber products and for delamination detection in wood-based composite boards. (3) High-power ultrasound as a field with important potential for industrial applications such as wood drying. (4) The methods for the characterization of acoustical properties of the wood species used for musical instrument manufacturing: wood anisotropy, the quality of wood for musical instruments, and the factors of influence related to environmental conditions, the natural aging of wood, and the effects of long-term loading by static or dynamic regimes on wood properties. Today, the acoustics of wood is a branch of wood science with broad industrial applications.
(This article belongs to the Special Issue Reviews on Structure and Physical and Mechanical Properties of Wood)
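
The velocity-to-elastic-constant link in item (1) can be illustrated with the standard thin-rod relation E = ρv²; the density and velocity below are round, illustrative values, not measurements from the review.

```python
# Longitudinal elastic modulus from ultrasonic velocity: E = rho * v**2
rho = 450.0    # spruce density, kg/m^3 (illustrative)
v_L = 5500.0   # longitudinal wave velocity along the grain, m/s (illustrative)

E_L = rho * v_L ** 2                   # Pa
print(f"E_L = {E_L / 1e9:.1f} GPa")    # 13.6 GPa, a typical order for resonant spruce
```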

20 pages, 6486 KiB  
Article
Recognizing Similar Musical Instruments with YOLO Models
by Christine Dewi, Abbott Po Shun Chen and Henoch Juli Christanto
Big Data Cogn. Comput. 2023, 7(2), 94; https://doi.org/10.3390/bdcc7020094 - 10 May 2023
Cited by 15 | Viewed by 4715
Abstract
Researchers in the fields of machine learning and artificial intelligence have recently begun to focus their attention on object recognition. One of the biggest obstacles in image recognition through computer vision is the detection and identification of similar items. Identifying similar musical instruments can be approached as a classification problem, where the goal is to train a machine learning model to classify instruments based on their features and shape. Cellos, clarinets, erhus, guitars, saxophones, trumpets, French horns, harps, recorders, bassoons, and violins were all classified in this investigation. Many different musical instruments share the same size, shape, and sound, and while humans identify such similar items with remarkable ease, this remains a challenging task for computers. For this study, we used YOLOv7 to identify the pairs of musical instruments most similar to one another, and then compared and evaluated its results against those of YOLOv5. The tests allowed us to improve detection performance on similar musical instruments: with an average accuracy of 86.7%, YOLOv7 outperformed previous approaches and other research results.
(This article belongs to the Special Issue Computational Collective Intelligence with Big Data–AI Society)
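
For context, this is what YOLOv5 inference looks like via torch.hub; the custom weights file and test image are hypothetical, and YOLOv7 lives in a separate repository with its own scripts.

```python
import torch

# Load YOLOv5 with custom instrument-detection weights (hypothetical file)
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

results = model("orchestra.jpg")   # hypothetical test image
results.print()                    # detected classes, confidences, boxes
results.save()                     # writes an annotated copy to runs/detect/
```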

6 pages, 805 KiB  
Proceeding Paper
Kurdish Music Genre Recognition Using a CNN and DNN
by Aza Kamala and Hossein Hassani
Eng. Proc. 2023, 31(1), 64; https://doi.org/10.3390/ASEC2022-13803 - 2 Dec 2022
Cited by 4 | Viewed by 2291
Abstract
Music has different styles, which musicologists categorize into genres. Non-musicologists, however, categorize music differently, such as by finding similarities and patterns in instruments, harmony, and the style of the music. For instance, in addition to popular genre categories such as classic, pop, and modern folkloric, Kurdish music lovers categorize Kurdish music according to the type of dance that could go with a particular piece. Artificial intelligence (AI) can help in music genre recognition, and using AI to recognize music genres has been a growing field; computational musicology applies AI in various sectors of music study. However, the literature shows no computational musicology research focusing on Kurdish music; in particular, we have not been able to find any work applying AI to the classification of Kurdish music genres. In this research, we compiled a dataset comprising 880 samples of 8 Kurdish music genres. We used two machine learning models in our experiments: a convolutional neural network (CNN) and a deep neural network (DNN). According to the evaluations, the CNN model achieved 92% accuracy, while the DNN achieved 90%. We therefore developed an application that uses the CNN model to identify Kurdish music genres by uploading or listening to Kurdish music.
(This article belongs to the Proceedings of The 3rd International Electronic Conference on Applied Sciences)
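
A compact sketch of a spectrogram CNN of the kind the paper trains; the input shape and layer sizes are assumptions, not the authors' architecture.

```python
import tensorflow as tf

# Classify mel-spectrogram "images" into 8 Kurdish genres (assumed 128x128x1 input)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(8, activation="softmax"),   # one unit per genre
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```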

12 pages, 10060 KiB  
Article
Temari Balls, Spheres, SphereHarmonic: From Japanese Folkcraft to Music
by Maria Mannone and Takashi Yoshino
Algorithms 2022, 15(8), 286; https://doi.org/10.3390/a15080286 - 14 Aug 2022
Viewed by 4197
Abstract
Temari balls are traditional Japanese toys and artworks. The variety of their geometries and tessellations can be investigated formally and computationally by means of combinatorics. As a further step, we also propose a musical application of the core idea of Temari balls. Inspired by the classical idea of the music of the spheres and by the CubeHarmonic, a musical application of the Rubik's cube, we present the concept of a new musical instrument, the SphereHarmonic. The mathematical (and musical) description of Temari balls lies in the wide background of interactions between art and combinatorics. Concerning methods, we present the tools of permutations and tessellations adopted here, and the core idea of the SphereHarmonic. As results, we first describe a classification of structures according to the theory of groups. We then summarize the main passages implemented in our code, which make the SphereHarmonic playable on a laptop. Our study explores an aspect of the deep connections between mutually inspiring scientific and artistic thinking.
(This article belongs to the Special Issue Combinatorial Designs: Theory and Applications)
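
As a toy illustration of classifying structures by their symmetry groups (generic sympy usage, not the authors' code), a single rotation generator already determines a small cyclic group:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Four points on a band of a ball, cycled by a quarter turn
r = Permutation([1, 2, 3, 0])     # 0->1, 1->2, 2->3, 3->0
G = PermutationGroup([r])

print(G.order())      # 4  -> the cyclic group C4
print(G.is_abelian)   # True
```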

24 pages, 7968 KiB  
Article
Acoustic Descriptors for Characterization of Musical Timbre Using the Fast Fourier Transform
by Yubiry Gonzalez and Ronaldo C. Prati
Electronics 2022, 11(9), 1405; https://doi.org/10.3390/electronics11091405 - 27 Apr 2022
Cited by 11 | Viewed by 4768
Abstract
The quantitative assessment of the musical timbre in an audio recording is still an open-ended issue. Evaluating the musical timbre allows one not only to establish precise musical parameters but also to recognize and classify musical instruments and to assess the musical quality of a sound recording. In this paper, we present a minimum set of dimensionless descriptors, motivated by musical acoustics, using the spectra obtained by the Fast Fourier Transform (FFT), which allows describing the timbre of wooden aerophones (Bassoon, Clarinet, Transverse Flute, and Oboe) using individual sound recordings of the musical tempered scale. We postulate that the proposed descriptors are sufficient to describe the timbral characteristics of the aerophones studied, allowing their recognition from the acoustic spectral signature. We believe that this approach can be further extended with multidimensional unsupervised machine learning techniques, such as clustering, to obtain new insights into timbre characterization.
(This article belongs to the Special Issue Applications of Audio and Acoustic Signal)
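
The clustering extension suggested in the closing sentence might look like this sketch; KMeans and the dummy descriptor matrix are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# X: one row of dimensionless FFT descriptors per recorded note (dummy data)
X = np.random.rand(120, 6)

# Cluster notes without labels; ideally clusters align with the four aerophones
km = KMeans(n_clusters=4, n_init=10).fit(X)
print(km.labels_[:10])   # cluster assignment for the first ten notes
```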

22 pages, 31246 KiB  
Article
Automatic Evaluation of Piano Performances for STEAM Education
by Varinya Phanichraksaphong and Wei-Ho Tsai
Appl. Sci. 2021, 11(24), 11783; https://doi.org/10.3390/app112411783 - 11 Dec 2021
Cited by 22 | Viewed by 4581
Abstract
Music plays an important part in people's lives from an early age, and many parents invest in music education of various types for their children, as arts and music are of economic importance. This has led to a new trend in which the STEAM education system draws increasing attention beyond the STEM education system developed over several years: for example, parents expose their children to music from the womb onward and invest in music study early, especially in playing and learning musical instruments. As far as education is concerned, assessment of music performances should be standardized rather than based on an individual teacher's standard. Thus, in this study, automatic assessment methods for piano performances were developed. Two types of piano articulation were taken into account, namely "Legato", with notes sustained using pedals, and "Staccato", with detached notes without the use of sustain pedals. For each type, piano sounds were analyzed and classified into "Good", "Normal", and "Bad" categories. The study investigated four approaches for this task: Support Vector Machine (SVM), Naive Bayes (NB), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM). The experiments were conducted on 4680 test samples, including isolated scale notes and kids' songs, produced by 13 performers. The results show that the CNN approach is superior to the other approaches, with a classification accuracy of more than 80%.
(This article belongs to the Special Issue Advances in Computer Music)
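
One of the four approaches, the LSTM, could be set up roughly as below; the MFCC sequence length and layer sizes are assumptions, not the paper's configuration.

```python
import tensorflow as tf

# Rate a piano excerpt as Good / Normal / Bad from its MFCC sequence
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(200, 13)),          # 200 frames x 13 MFCCs (assumed)
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # Good / Normal / Bad
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```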

23 pages, 18271 KiB  
Review
Audio-Tactile Rendering: A Review on Technology and Methods to Convey Musical Information through the Sense of Touch
by Byron Remache-Vinueza, Andrés Trujillo-León, Mireya Zapata, Fabián Sarmiento-Ortiz and Fernando Vidal-Verdú
Sensors 2021, 21(19), 6575; https://doi.org/10.3390/s21196575 - 30 Sep 2021
Cited by 50 | Viewed by 9523
Abstract
Tactile rendering has been implemented in digital musical instruments (DMIs) to offer the musician haptic feedback that enhances their music-playing experience. Recently, this implementation has expanded to the development of sensory substitution systems known as haptic music players (HMPs), which give the hearing impaired the opportunity to experience music through touch. These devices may also be conceived as vibrotactile music players to enrich music-listening activities. In this review, technology and methods to render musical information by means of vibrotactile stimuli are systematically studied. The methodology used to find relevant literature is first outlined, and a preliminary classification of musical haptics is proposed. A comparison between different technologies and methods for vibrotactile rendering is performed, and the information is then organized according to the type of HMP. Limitations and advantages are highlighted to identify opportunities for future research. Likewise, methods for music audio-tactile rendering (ATR) are analyzed and, finally, strategies for composing for the sense of touch are summarized. This review is intended for researchers in the fields of haptics, assistive technologies, music, psychology, and human–computer interaction, as well as artists who may use it as a reference for upcoming research on HMPs and ATR.
(This article belongs to the Special Issue Tactile Sensing and Rendering for Healthcare Applications)
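
One simple ATR strategy of the kind the review surveys is mapping a loudness envelope to vibration intensity; this toy sketch (the mapping and the 8-bit range are assumptions) is not taken from any reviewed system.

```python
import librosa
import numpy as np

y, sr = librosa.load(librosa.example("trumpet"))   # demo clip stands in for music

# Map the RMS loudness envelope to an 8-bit vibration-motor intensity
rms = librosa.feature.rms(y=y)[0]                  # one value per frame
intensity = (rms / rms.max()) * 255                # 0..255 duty cycle
print(intensity.astype(int)[:20])                  # values to stream to an actuator
```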

18 pages, 4891 KiB  
Article
Correlation between Anatomical Grading and Acoustic–Elastic Properties of Resonant Spruce Wood Used for Musical Instruments
by Florin Dinulică, Mariana Domnica Stanciu and Adriana Savin
Forests 2021, 12(8), 1122; https://doi.org/10.3390/f12081122 - 22 Aug 2021
Cited by 32 | Viewed by 4750
Abstract
This paper deals with the acoustic and elastic properties of resonant wood, classified into the four quality grades used by manufacturers of musical instruments. Traditionally, the quality grades of resonant wood are determined on the basis of visual inspection of the macroscopic characteristics of the wood (annual ring width, regularity, proportion of early and late wood, absence of defects, etc.). In this research, we therefore studied whether there are correlations between the acoustic and elastic properties and the anatomical characteristics of wood used for the construction of violins. The results regarding the anatomical properties of resonant spruce, the wood color, and the acoustic/elastic properties determined by ultrasonic measurements were statistically analyzed to highlight the connections between them. The statistical analysis shows that the only variables with the power to separate the quality classes are (in descending order of importance) the speed of sound propagation in the radial direction, Poisson's ratio in the longitudinal–radial direction, and the speed of sound propagation in the longitudinal direction.
(This article belongs to the Special Issue Wood Production and Promotion)
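
The class-separation finding can be illustrated with a linear discriminant sketch; LDA and the dummy measurements below are assumptions, standing in for the paper's statistical analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: radial velocity, longitudinal velocity, Poisson's ratio (LR), per specimen
X = np.random.rand(80, 3)
quality = np.random.randint(0, 4, 80)   # four resonance quality classes

lda = LinearDiscriminantAnalysis().fit(X, quality)
print(lda.coef_)   # larger magnitudes hint at stronger class-separating power
```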

17 pages, 2819 KiB  
Article
DBTMPE: Deep Bidirectional Transformers-Based Masked Predictive Encoder Approach for Music Genre Classification
by Lvyang Qiu, Shuyu Li and Yunsick Sung
Mathematics 2021, 9(5), 530; https://doi.org/10.3390/math9050530 - 3 Mar 2021
Cited by 28 | Viewed by 3910
Abstract
Music is a type of time-series data, and as data volumes grow, it is a challenge to build robust music genre classification systems from massive amounts of music. Robust systems require large amounts of labeled music data, which necessitates time- and labor-intensive data-labeling efforts and expert knowledge. This paper proposes a musical instrument digital interface (MIDI) preprocessing method, Pitch to Vector (Pitch2vec), and a deep bidirectional transformers-based masked predictive encoder (MPE) method for music genre classification. MIDI files are taken as input and converted to vector sequences by Pitch2vec before being fed into the MPE. Through unsupervised learning, the MPE, based on deep bidirectional transformers, automatically extracts bidirectional representations that capture musicological insight. In contrast to other deep-learning models, such as recurrent neural network (RNN)-based models, the MPE enables parallelization over time-steps, leading to faster training. To evaluate the performance of the proposed method, experiments were conducted on the Lakh MIDI dataset. During MPE training, approximately 400,000 MIDI segments were used, for which the recovery accuracy rate reached 97%. In the music genre classification task, the accuracy rate and other indicators of the proposed method exceeded 94%. The experimental results indicate that the proposed method improves classification performance compared with state-of-the-art models.
(This article belongs to the Special Issue Data Mining for Temporal Data Analysis)
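
A rough sketch of turning a MIDI file into a maskable pitch sequence, in the spirit of Pitch2vec; pretty_midi, the mask rate, and the token scheme are assumptions, as the paper defines its own encoding.

```python
import numpy as np
import pretty_midi

MASK_TOKEN = 128                                   # MIDI pitches occupy 0..127

midi = pretty_midi.PrettyMIDI("song.mid")          # hypothetical input file
notes = sorted((n for inst in midi.instruments for n in inst.notes),
               key=lambda n: n.start)
pitches = np.array([n.pitch for n in notes])       # raw pitch sequence

# Mask 15% of positions; the masked predictive encoder learns to recover them
mask = np.random.rand(len(pitches)) < 0.15
tokens = np.where(mask, MASK_TOKEN, pitches)
```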

23 pages, 4532 KiB  
Project Report
The Design of Musical Instruments for Grey Parrots: An Artistic Contribution toward Auditory Enrichment in the Context of ACI
by Reinhard Gupfinger and Martin Kaltenbrunner
Multimodal Technol. Interact. 2020, 4(2), 16; https://doi.org/10.3390/mti4020016 - 3 May 2020
Cited by 4 | Viewed by 5227
Abstract
One particular approach in the context of Animal Computer Interaction (ACI) is auditory enrichment for captive wild animals. Here we describe our research and the methodology used to design musical instruments and interfaces aimed at providing auditory enrichment for grey parrots living in captivity. The work is divided into three main phases: a project review and classification, sonic experiments at a parrot shelter, and the design of musical instruments. The overview of recent projects that involve animals in the interaction and music-generation process highlights the costs and benefits of projects of this kind and provides insights into current technologies in this field and the musical talents of animals. Furthermore, we document a series of sonic experiments conducted at a parrot shelter to develop acoustically enriched environments through the use of musical instruments. These investigations were intended to provide a better understanding of how grey parrots communicate through sound, perceive and respond to auditory stimuli, and possibly generate sound and music through the use of technological devices. Based on the cognitive, physiological, and auditory abilities of grey parrots, and their intrinsic interest in sonic and physical interactions, we developed and tested various interactive instrument prototypes, and we present our design results for auditory enrichment in the context of ACI and artistic research.
(This article belongs to the Special Issue Animal Centered Computing: Enriching the Lives of Animals)