Animal–Computer Interaction: Advances and Opportunities

A special issue of Animals (ISSN 2076-2615). This special issue belongs to the section "Animal Welfare".

Deadline for manuscript submissions: closed (31 January 2025) | Viewed by 19510

Special Issue Editors


Dr. Fiona French
Guest Editor
Interaction Design Research Group, School of Computing and Digital Media, London Metropolitan University, London N7 8DB, UK
Interests: animal–computer interaction; game design; virtual and augmented reality; physical computing; interaction design; environmental enrichment; HCI

Dr. Christopher Flynn Martin
Guest Editor
Indianapolis Zoo, Indianapolis, IN 46222, USA
Interests: animal–computer interaction; animal cognition; animal communication; behaviour research methods; behavioural enrichment

Special Issue Information

Dear Colleagues,

Animal–computer interaction (ACI) is a multidisciplinary field concerned with the design of technology for, with and from the perspective of all species of animals. In some cases, interactions with technology are explicit and direct, requiring specially designed interfaces between system and user; in others, the technology may play a hidden role, monitoring and analysing an animal’s behaviour or changing their environment. The emphasis on animal welfare gives rise to philosophical considerations, as well as driving new research methods and technologies.

This Special Issue shares new ideas and developments in the field of ACI, including those that advance scientific knowledge about animals, enhance connections between species, improve the stewardship of animals in human care, and articulate the design of systems that offer greater autonomy to other species. We invite submissions, including original reports, reviews, methods, perspectives, or design fictions. These may address, but are not limited to, the following themes:

  • Environmental enrichment in a variety of contexts, including zoos, farms, laboratories, shelters and domestic spaces;
  • Support for working animals, including service animals and those that replace human labour or work cooperatively with humans; 
  • Studies in animal perception and cognition; 
  • Technological methods for measuring and analysing behaviour;
  • The assessment and promotion of emotional health and wellbeing;
  • Techniques for interspecies communication;
  • Inclusive designs aimed at enriching wildlife in urban or rural contexts;
  • Techniques for supporting conservation;
  • Public education and conservation awareness;  
  • Training and autonomy.

Dr. Fiona French
Dr. Christopher Flynn Martin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Animals is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • animal–computer interaction
  • animal
  • enrichment
  • interaction design
  • perception and cognition
  • behaviour analysis
  • technology for animals
  • automated methods
  • conservation
  • emotional health and wellbeing
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (11)

Research

18 pages, 2527 KiB  
Article
Performance Comparison of Genomic Best Linear Unbiased Prediction and Four Machine Learning Models for Estimating Genomic Breeding Values in Working Dogs
by Joseph A. Thorsrud, Katy M. Evans, Kyle C. Quigley, Krishnamoorthy Srikanth and Heather J. Huson
Animals 2025, 15(3), 408; https://doi.org/10.3390/ani15030408 - 2 Feb 2025
Viewed by 1022
Abstract
This study investigates the efficacy of various genomic prediction models—Genomic Best Linear Unbiased Prediction (GBLUP), Random Forest (RF), Support Vector Machine (SVM), Extreme Gradient Boosting (XGB), and Multilayer Perceptron (MLP)—in predicting genomic breeding values (gEBVs). The phenotypic data include three binary health traits (anodontia, distichiasis, oral papillomatosis) and one behavioral trait (distraction) in a population of guide dogs. These traits impact the potential for success in guide dogs and are therefore routinely characterized but were chosen based on differences in heritability and case counts specifically to assess gEBV model performance. Utilizing a dataset from The Seeing Eye organization, which includes German Shepherds (n = 482), Golden Retrievers (n = 239), Labrador Retrievers (n = 1188), and Labrador and Golden Retriever crosses (n = 111), we assessed model performance within and across different breeds, trait heritability, case counts, and SNP marker densities. Our results indicate that no significant differences were found in model performance across varying heritabilities, case counts, or SNP densities, with all models performing similarly. Given its lack of need for parameter optimization, GBLUP was the most efficient model. Distichiasis showed the highest overall predictive performance, likely due to its higher heritability, while anodontia and distraction exhibited moderate accuracy, and oral papillomatosis had the lowest accuracy, correlating with its low heritability. These findings underscore that lower density SNP datasets can effectively construct gEBVs, suggesting that high-cost, high-density genotyping may not always be necessary. Additionally, the similar performance of all models indicates that simpler models like GBLUP, which requires less fine tuning, may be sufficient for genomic prediction in canine breeding programs. The research highlights the importance of standardized phenotypic assessments and carefully constructed reference populations to optimize the utility of genomic selection in canine breeding programs.
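
As a rough illustration of this kind of model comparison, the sketch below cross-validates several classifiers on synthetic SNP genotypes; ridge regression stands in for GBLUP (marker-based ridge regression is equivalent to GBLUP under standard assumptions), and XGBoost is omitted to keep the example dependency-light. The data, trait, and hyperparameters are invented and are not the authors' pipeline.

```python
# A minimal sketch (not the authors' pipeline) of cross-validated model
# comparison for a binary trait predicted from SNP genotypes. Data are
# synthetic, so AUCs should hover around chance (~0.5).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 2000)).astype(float)  # genotypes coded 0/1/2
y = rng.integers(0, 2, size=500)                        # binary trait (case/control)

models = {
    "GBLUP-like (ridge)": RidgeClassifier(alpha=100.0),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": SVC(kernel="linear", C=1.0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```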

16 pages, 3776 KiB  
Article
MDA-DETR: Enhancing Offending Animal Detection with Multi-Channel Attention and Multi-Scale Feature Aggregation
by Haiyan Zhang, Huiqi Li, Guodong Sun and Feng Yang
Animals 2025, 15(2), 259; https://doi.org/10.3390/ani15020259 - 17 Jan 2025
Viewed by 742
Abstract
Conflicts between humans and animals in agricultural and settlement areas have recently increased, resulting in significant resource loss and risks to human and animal lives. This growing issue presents a global challenge. This paper addresses the detection and identification of offending animals, particularly in obscured or blurry nighttime images, and introduces Multi-Channel Coordinated Attention and Multi-Dimension Feature Aggregation (MDA-DETR). The model integrates multi-scale features for enhanced detection accuracy, employing a Multi-Channel Coordinated Attention (MCCA) mechanism to incorporate location, semantic, and long-range dependency information and a Multi-Dimension Feature Aggregation Module (DFAM) for cross-scale feature aggregation. Additionally, the VariFocal Loss function is utilized to assign pixel weights, enhancing detail focus and maintaining accuracy. The dataset comes from the Northeast China Tiger and Leopard National Park and includes images of six common offending animal species. In comprehensive experiments on this dataset, the mAP50 of MDA-DETR was 1.3%, 0.6%, 0.3%, 3%, 1.1%, and 0.5% higher than that of RT-DETR-r18, YOLOv8n, YOLOv9-C, DETR, Deformable-DETR, and DCA-YOLOv8, respectively, indicating that MDA-DETR is superior to other advanced methods.
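
For readers unfamiliar with coordinate-style attention, the sketch below shows a minimal PyTorch block in the spirit of the MCCA mechanism: it pools along each spatial axis so the channel weights retain positional information. This is a generic illustration, not the actual MDA-DETR module.

```python
# A generic coordinate-attention block: average-pool along H and W separately,
# mix through a shared 1x1 conv, then gate the input per axis. Illustrative
# only; the paper's MCCA mechanism differs in detail.
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (b, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (b, c, w, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                   # gate along height
        a_w = torch.sigmoid(self.conv_w(y_w)).permute(0, 1, 3, 2)  # gate along width
        return x * a_h * a_w

feats = torch.randn(2, 256, 32, 32)
print(CoordAttention(256)(feats).shape)  # torch.Size([2, 256, 32, 32])
```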

21 pages, 5659 KiB  
Communication
Behavioral Coding of Captive African Elephants (Loxodonta africana): Utilizing DeepLabCut and Create ML for Nocturnal Activity Tracking
by Silje Marquardsen Lund, Jonas Nielsen, Frej Gammelgård, Maria Gytkjær Nielsen, Trine Hammer Jensen and Cino Pertoldi
Animals 2024, 14(19), 2820; https://doi.org/10.3390/ani14192820 - 30 Sep 2024
Cited by 2 | Viewed by 1789
Abstract
This study investigates the possibility of using machine learning models created in DeepLabCut and Create ML to automate aspects of behavioral coding and aid in behavioral analysis. Two models with different capabilities and complexities were constructed and compared to a manually observed control period. The accuracy of the models was assessed by comparison with manual scoring, before being applied to seven nights of footage of the nocturnal behavior of two African elephants (Loxodonta africana). The resulting data were used to draw conclusions regarding behavioral differences between the two elephants and between individually observed nights, thus demonstrating that such models can aid researchers in behavioral analysis. The models were capable of tracking simple behaviors with high accuracy but had certain limitations regarding the detection of complex behaviors, such as the stereotyped behavior sway, and displayed confusion when deciding between visually similar behaviors. Further expansion of such models may be desired to create a more capable aid with the possibility of automating behavioral coding.
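
A minimal sketch of turning pose-tracking output into coarse behavioural labels is shown below. It assumes a DeepLabCut-style CSV (three header rows: scorer, bodypart, coordinate); the file name, the "head" body part, and the thresholds are hypothetical, and the paper's models are considerably more capable.

```python
# A minimal sketch (hypothetical file, body part, and thresholds) of turning
# DeepLabCut-style tracking output into coarse behavioural labels.
import numpy as np
import pandas as pd

# DeepLabCut CSVs carry three header rows: scorer, bodypart, coordinate.
df = pd.read_csv("elephant_night1.csv", header=[0, 1, 2], index_col=0)
scorer = df.columns.get_level_values(0)[0]
xy = df[scorer]["head"][["x", "y"]].to_numpy()
likelihood = df[scorer]["head"]["likelihood"].to_numpy()

# Per-frame displacement of the tracked point; mask low-confidence frames.
step = np.linalg.norm(np.diff(xy, axis=0), axis=1)
step[likelihood[1:] < 0.9] = np.nan

PIX_PER_FRAME = 2.0  # hypothetical movement threshold
labels = np.where(np.isnan(step), "uncertain",
                  np.where(step > PIX_PER_FRAME, "locomotion", "stationary"))
print(pd.Series(labels).value_counts())
```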

10 pages, 1444 KiB  
Article
Investigation through Animal–Computer Interaction: A Proof-of-Concept Study for the Behavioural Experimentation of Colour Vision in Zoo-Housed Primates
by Vinícius Donisete Lima Rodrigues Goulart and Robert John Young
Animals 2024, 14(13), 1979; https://doi.org/10.3390/ani14131979 - 4 Jul 2024
Viewed by 1182
Abstract
Zoos are an important repository of animals, which have a wide range of visual systems, providing excellent opportunities to investigate many comparative questions in sensory ecology. However, behavioural testing must be carried out in an animal welfare-friendly manner, which is practical for zoo staff. Here, we present a proof-of-concept study to facilitate behavioural research on the sensory ecology of captive primates. A system consisting of a tablet computer and an automated feeder connected wirelessly was developed and presented to captive primate species to evaluate interactions with and without previous training. A colour stimulus, analogous to the Ishihara test, was used to check the level of interaction with the device, supporting future studies on sensory ecology with zoo animals. Animals were able to use the system successfully and displayed signs of learning to discriminate between the visual stimuli presented. We identified no risk for small primates in their interactions with the experimental setup without the presence of keepers; for larger animals (e.g., spider monkeys), however, the use of electronic devices as a standard environmental enrichment practice should be approached with caution to prevent accidents. In the long term, the system developed here will allow us to address complex comparative questions about the functions of different visual systems in captive animals (i.e., dichromatic, trichromatic, etc.).
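
The touchscreen-to-feeder loop that such a system relies on can be sketched as below: a correct touch triggers a reward request to a networked feeder. The feeder URL and request format are hypothetical; the paper does not publish its wire protocol.

```python
# A sketch of a touchscreen-to-feeder loop: a correct touch triggers a
# reward request to a networked feeder. URL and JSON body are hypothetical.
import random
import requests

FEEDER_URL = "http://192.168.0.42/dispense"  # hypothetical feeder endpoint

def run_trial(target: str, touched: str) -> bool:
    correct = touched == target
    if correct:
        # Short timeout keeps the touch interface responsive if the feeder drops.
        requests.post(FEEDER_URL, json={"portions": 1}, timeout=2)
    return correct

stimuli = ["green_patch", "grey_patch"]  # colour target vs. luminance-matched foil
touched = random.choice(stimuli)         # stand-in for a real touch event
print("correct" if run_trial(stimuli[0], touched) else "incorrect")
```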

23 pages, 7657 KiB  
Article
An Interactive Feeder to Induce and Assess Emotions from Vocalisations of Chickens
by Antonis Golfidis, Buddhamas Pralle Kriengwatana, Mina Mounir and Tomas Norton
Animals 2024, 14(9), 1386; https://doi.org/10.3390/ani14091386 - 6 May 2024
Cited by 1 | Viewed by 2274
Abstract
Understanding the emotional states of animals is a long-standing research endeavour that has clear applications in animal welfare. Vocalisations are emerging as a promising way to assess both positive and negative emotional states. However, the vocal expression of emotions in birds is a relatively unexplored research area. The goal of this study was to develop an interactive feeding system that would elicit positive and negative emotional states, and collect recordings of the vocal expression of these emotions without human interference. In this paper, the mechatronic design and development of the feeder is described. Design choices were motivated by the desire for the hens to voluntarily interact with the feeder and experience the different stimuli that were designed to induce (1) positive low-arousal, (2) positive high-arousal, (3) negative low-arousal, and (4) negative high-arousal states. The results showed that hens were motivated to engage with the feeder despite the risk of receiving negative stimuli and that this motivation was sustained for at least 1 week. The potential of using the interactive feeder to analyse chicken vocalisations related to emotional valence and arousal is being explored, offering a novel proof of concept in animal welfare research. Preliminary findings suggest that hens vocalised in response to all four stimulus types, with the number of vocalisations, but not the probability of vocalising, distinguishing between low- and high-arousal states. Thus, the proposed animal–computer interaction design has potential to be used as an enrichment device and for future experiments on vocal emotions in birds.
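
A back-of-the-envelope way to count calls in such recordings is to threshold the smoothed amplitude envelope, as sketched below; the window length, threshold, and file name are hypothetical, and the study's acoustic analysis is more sophisticated.

```python
# Count putative vocalisations by thresholding the smoothed amplitude
# envelope of a recording. Thresholds and file name are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.ndimage import uniform_filter1d

rate, audio = wavfile.read("hen_feeder_session.wav")
audio = audio.astype(float)
if audio.ndim > 1:  # mix down to mono if stereo
    audio = audio.mean(axis=1)

envelope = uniform_filter1d(np.abs(audio), size=int(0.05 * rate))  # ~50 ms window
active = envelope > 4 * np.median(envelope)                        # hypothetical threshold

# Rising edges of the active mask mark onsets of vocal activity bouts.
onsets = np.flatnonzero(~active[:-1] & active[1:])
print(f"{len(onsets)} putative vocalisations in {len(audio) / rate:.1f} s")
```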

14 pages, 5172 KiB  
Article
A Serial Multi-Scale Feature Fusion and Enhancement Network for Amur Tiger Re-Identification
by Nuo Xu, Zhibin Ma, Yi Xia, Yanqi Dong, Jiali Zi, Delong Xu, Fu Xu, Xiaohui Su, Haiyan Zhang and Feixiang Chen
Animals 2024, 14(7), 1106; https://doi.org/10.3390/ani14071106 - 4 Apr 2024
Cited by 2 | Viewed by 2041
Abstract
The Amur tiger is an endangered species, and its re-identification (re-ID) plays an important role in regional biodiversity assessment and wildlife resource statistics. This paper focuses on the task of Amur tiger re-ID based on visible light images from screenshots of surveillance videos or camera traps, aiming to solve the problem of low accuracy caused by camera perspective, cluttered backgrounds, changes in posture, and deformation of the tiger's body patterns during the re-ID process. To overcome this challenge, we propose a serial multi-scale feature fusion and enhancement re-ID network, in which global and local branches are constructed. Specifically, we design a global inverted pyramid multi-scale feature fusion method in the global branch to effectively fuse multi-scale global features and achieve high-level, fine-grained, and deep semantic feature preservation. We also design a local dual-domain attention feature enhancement method in the local branch, further enhancing local feature extraction and fusion by dividing local feature blocks. Based on the above model structure, we evaluated the effectiveness and feasibility of the model on the public Amur Tiger Re-identification in the Wild (ATRW) dataset, achieving good results on mAP, Rank-1, and Rank-5 and demonstrating competitive performance. In addition, since our proposed model does not require additional expensive annotation information and does not incorporate other pre-training modules, it has important advantages such as strong transferability and simple training.
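
The global branch's idea of fusing multi-scale features can be illustrated with a generic PyTorch module that projects each backbone level to a common width, upsamples everything to the finest resolution, and sums; the actual network's inverted-pyramid fusion is more elaborate.

```python
# Generic multi-scale feature fusion: project each level to a shared channel
# width, upsample to the finest map, and sum. Illustrative, not the paper's
# exact inverted-pyramid design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    def __init__(self, in_channels: list[int], out_channels: int = 256):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        target = feats[0].shape[-2:]  # fuse at the finest spatial resolution
        return sum(
            F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, feats)
        )

feats = [torch.randn(1, c, s, s) for c, s in [(256, 64), (512, 32), (1024, 16)]]
print(MultiScaleFusion([256, 512, 1024])(feats).shape)  # torch.Size([1, 256, 64, 64])
```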

22 pages, 4950 KiB  
Article
DiffusionFR: Species Recognition of Fish in Blurry Scenarios via Diffusion and Attention
by Guoying Wang, Bing Shi, Xiaomei Yi, Peng Wu, Linjun Kong and Lufeng Mo
Animals 2024, 14(3), 499; https://doi.org/10.3390/ani14030499 - 2 Feb 2024
Cited by 4 | Viewed by 1478
Abstract
Blurry scenarios, such as light reflections and water ripples, often affect the clarity and signal-to-noise ratio of fish images, posing significant challenges for traditional deep learning models in accurately recognizing fish species. Firstly, deep learning models rely on a large amount of labeled data; however, it is often difficult to label data in blurry scenarios. Secondly, existing deep learning models are not effective at processing blurry or otherwise degraded images, which is an essential reason for their low recognition rate. A method based on the diffusion model and attention mechanism for fish image recognition in blurry scenarios, DiffusionFR, is proposed to solve these problems and improve the performance of species recognition of fish images in blurry scenarios. In this method, a two-stage diffusion network model, TSD, is designed to deblur degraded fish scene pictures and restore their clarity, and a learnable attention module, LAM, is designed to improve the accuracy of fish recognition. In addition, a new dataset of fish images in blurry scenarios, BlurryFish, was constructed from degraded images in the publicly available Fish4Knowledge dataset and used to validate the effectiveness of DiffusionFR. The experimental results demonstrate that DiffusionFR achieves outstanding performance on various datasets. On the original dataset, DiffusionFR achieved the highest training accuracy of 97.55%, as well as a Top-1 accuracy test score of 92.02% and a Top-5 accuracy test score of 95.17%. Furthermore, on nine datasets with light reflection noise, the mean training accuracy peaked at 96.50%, while the mean Top-1 and Top-5 test accuracies peaked at 90.96% and 94.12%, respectively. Similarly, on three datasets with water ripple noise, the mean training accuracy peaked at 95.00%, while the mean Top-1 and Top-5 test accuracies peaked at 89.54% and 92.73%, respectively. These results demonstrate that the method offers superior accuracy and enhanced robustness in handling original datasets and datasets with light reflection and water ripple noise.
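
At a high level, the proposed pipeline is a restore-then-recognise cascade. The skeleton below captures only that structure; the stand-in modules are placeholders for the paper's TSD and LAM components, whose architectures are not reproduced here.

```python
# Skeleton of a two-stage "restore, then recognise" cascade like the one
# described above. The stand-ins are placeholders, not the paper's TSD
# diffusion network or LAM attention module.
import torch
import torch.nn as nn

class DeblurThenClassify(nn.Module):
    def __init__(self, deblur: nn.Module, classifier: nn.Module):
        super().__init__()
        self.deblur = deblur          # stage 1: restore image clarity
        self.classifier = classifier  # stage 2: recognise the species

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.deblur(x))

n_species = 10  # hypothetical number of classes
model = DeblurThenClassify(
    deblur=nn.Identity(),  # placeholder restoration stage
    classifier=nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(3, n_species)),
)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 10])
```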

18 pages, 2595 KiB  
Article
The Development of Object Recognition Requires Experience with the Surface Features of Objects
by Justin Newell Wood and Samantha Marie Waters Wood
Animals 2024, 14(2), 284; https://doi.org/10.3390/ani14020284 - 17 Jan 2024
Viewed by 1615
Abstract
What role does visual experience play in the development of object recognition? Prior controlled-rearing studies suggest that newborn animals require slow and smooth visual experiences to develop object recognition. Here, we examined whether the development of object recognition also requires experience with the surface features of objects. We raised newborn chicks in automated controlled-rearing chambers that contained a single virtual object, then tested their ability to recognize that object from familiar and novel viewpoints. When chicks were reared with an object that had surface features, the chicks developed view-invariant object recognition. In contrast, when chicks were reared with a line drawing of an object, the chicks failed to develop object recognition. The chicks reared with line drawings performed at chance level, despite acquiring over 100 h of visual experience with the object. These results indicate that the development of object recognition requires experience with the surface features of objects.

18 pages, 2857 KiB  
Article
Developing a Preference Scale for a Bear: From “Bearly Like” to “Like Beary Much”
by Jennifer Vonk
Animals 2023, 13(9), 1554; https://doi.org/10.3390/ani13091554 - 6 May 2023
Cited by 2 | Viewed by 2166
Abstract
A preference scale for use by nonhuman animals would allow them to communicate their degree of liking for individual items rather than just relative preferences between pairs of items. It would also allow animals to report liking for images of objects that would be difficult to directly interact with (e.g., potential mates and habitat modifications). Such scales can easily be presented using touchscreen technology. Few zoos have used touchscreen technology for species other than nonhuman primates. I present a description of efforts taken to create such a scale for use with a single zoo-housed American black bear (Ursus americanus). Although the bear did not reach a high level of proficiency with assigning preferred and non-preferred food items to the categorical responses of “like” and “dislike”, she was able to learn to use the like and dislike buttons differentially for a single preferred and less preferred food item, and she selected the correct response button for the preferred item at above-chance levels. These data contribute to our limited understanding of black bear cognition and suggest that conditional discriminations may be difficult for black bears. This finding can inform continued efforts to create a simpler tool for nonhumans to communicate their preferences to human caregivers in a more nuanced way than is currently possible. More generally, the current study contributes to the growing body of work supporting the use of touchscreen technology for providing enrichment to less studied species like bears.
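
The conditional discrimination underlying the scale can be sketched in a few lines: the correct button depends on which food item is shown, and performance is checked against chance with a binomial test. Item names, session length, and the simulated accuracy are hypothetical.

```python
# Sketch of a conditional discrimination like the one described above: the
# correct button ("like"/"dislike") depends on the item shown. Item names
# and the simulated hit rate are hypothetical.
import random
from scipy.stats import binomtest

CORRECT = {"blueberries": "like", "celery": "dislike"}  # preferred vs. less preferred

def simulate_session(n_trials: int = 40, p_correct: float = 0.7) -> int:
    hits = 0
    for _ in range(n_trials):
        item = random.choice(list(CORRECT))
        if random.random() < p_correct:  # biased stand-in for the bear's touch
            choice = CORRECT[item]
        else:
            choice = "dislike" if CORRECT[item] == "like" else "like"
        hits += choice == CORRECT[item]
    return hits

hits = simulate_session()
print(binomtest(hits, n=40, p=0.5, alternative="greater"))  # above-chance check
```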

Review

15 pages, 226 KiB  
Review
Monitoring Pig Structural Soundness and Body Weight in Pork Production Systems Using Computer Vision Approaches
by Ryan Jeon, Caleb Rykaczewski, Thomas Williams, William Harrington, James E. Kinder and Mark Trotter
Animals 2025, 15(5), 635; https://doi.org/10.3390/ani15050635 - 21 Feb 2025
Viewed by 594
Abstract
As the global demand for products from food-producing animals increases with greater household economic capacity, there is an increased emphasis on the development of precision technologies for monitoring the health, product production, and wellbeing of these animals. The present review focuses on pork production. Using these systems is advantageous for enhancing pork production efficiency when trained personnel utilize these technologies to full capacity and have objective, automated, and uninterrupted streams of data collection. While these systems have great potential for revolutionizing food animal production, the nascent stage of computer vision in precision technology has precluded its integration into traditional agricultural practices and systems. In this review paper, we focus on the need to (1) evaluate the performance and effective use of computer vision technologies to collect and evaluate reliable data from pork production enterprises; and (2) assess the current state of sensor-based animal management using a data fusion approach to monitor pig health/performance. Many of these technologies are in various stages of development; therefore, they have not yet been integrated into pork production or other food-animal production systems. Even though the focus of this review article is on the utilization of these technologies in pork production systems, these technologies are also relevant in other food-animal production systems, particularly dairy and poultry production. Therefore, we describe an approach that emphasizes the important need for computational capacity and speed, edge computing, data storage and transmission, and maintaining connectivity in rural settings.
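
As a toy example of the vision-based monitoring the review surveys, the sketch below estimates body weight from a top-down silhouette via a calibrated area-to-weight regression; the image, the segmentation assumption, and the regression coefficients are all hypothetical, and production systems fuse far richer sensor data.

```python
# Toy sketch: estimate body weight from a top-down silhouette using a fitted
# area-to-weight regression. Image, segmentation assumption (pig brighter
# than background), and calibration coefficients are hypothetical.
import cv2

img = cv2.imread("pen_topdown.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
pig = max(contours, key=cv2.contourArea)  # assume the largest blob is the pig
area_px = cv2.contourArea(pig)

a, b = 0.0052, 4.1  # hypothetical calibration fitted on weighed reference pigs
print(f"estimated weight ~ {a * area_px + b:.1f} kg")
```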

Other

27 pages, 4899 KiB  
Conference Report
Welfare and Enrichment of Managed Nocturnal Species, Supported by Technology
by Fiona French, Paige Bwye, Laura Carrigan, Jon Charles Coe, Robert Kelly, Tiff Leek, Emily C. Lynch, Eric Mahan and Cathy Mingee
Animals 2024, 14(16), 2378; https://doi.org/10.3390/ani14162378 - 16 Aug 2024
Cited by 1 | Viewed by 2351
Abstract
This paper addresses the potential for technology to support husbandry and enrichment opportunities that enhance the welfare of zoo- and sanctuary-housed nocturnal and crepuscular species. This topic was investigated through the medium of a multidisciplinary workshop (Moon Jam) that brought together species experts, zoo designers, Animal–Computer Interaction researchers and post-graduate students in collaborative discussions and design sessions. We explain the context through an examination of existing research and current practices, and report on specific challenges raised and addressed during the Moon Jam, highlighting and discussing key themes that emerged. Finally, we offer a set of guidelines to support the integration of technology into the design of animal husbandry and enrichment that support wellbeing, to advance the best practices in keeping and managing nocturnal and crepuscular animals.
