Special Issue "Animal-Centered Computing"

A special issue of Animals (ISSN 2076-2615). This special issue belongs to the section "Human-Animal Interactions, Animal Behaviour and Emotion".

Deadline for manuscript submissions: closed (31 July 2021).

Special Issue Editor

Dr. Anna Zamansky
Guest Editor
Information Systems Department, University of Haifa, Haifa 3498838, Israel
Interests: animal-centered computing; computational animal behavior analysis; animal–computer interaction; software engineering; applied logic

Special Issue Information

Dear Colleagues,

So far, our understanding of animals has mainly been advanced through the human lens, and is thus limited by human cognitive abilities to perceive, process, and analyze information. Recent advances in artificial intelligence, data science, and the Internet of Things are a game-changer not only for the world of humans, but also for non-human species. There are now exciting new possibilities for collecting and storing animal data and for using artificial intelligence and data science in its processing and analysis, empowering humans both in the ability to process enormous volumes of data and in new ways of collecting and analyzing those data.

The emerging discipline of Animal-Centered Computing (ACC) applies these tools for the design and development of solutions that advance our understanding of animals, improve their welfare and well-being, and enable better communication between human and non-human animal species. In particular, ACC focuses on technologies for:

• Advancing animal science, ecology, veterinary informatics, and other related fields;

• Objectively measuring animal behavior;

• Objectively measuring parameters related to animal welfare and well-being;

• Decision support for various professionals working with animals;

• Supporting animals in tasks they perform in society;

• Enabling more effective and efficient communication between human and non-human animal species.

This Special Issue aims to bring together the latest advances in the field of ACC that address these topics. We also welcome contributions from animal science and related fields that apply technology, as well as position/opinion papers on novel applications of technology to advance the field of animal studies, understood in a broad sense.

Dr. Anna Zamansky
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Animals is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • animal-centered computing
  • technology for animals
  • automatic animal behavior analysis
  • animal-computer interaction
  • IoT
  • machine learning
  • AI
  • data science

Published Papers (6 papers)


Research

Article
Objective Video-Based Assessment of ADHD-Like Canine Behavior Using Machine Learning
Animals 2021, 11(10), 2806; https://doi.org/10.3390/ani11102806 - 26 Sep 2021
Viewed by 816
Abstract
Canine ADHD-like behavior is a behavioral problem that often compromises dogs’ well-being, as well as the quality of life of their owners; early diagnosis and clinical intervention are often critical for successful treatment, which usually involves medication and/or behavioral modification. Diagnosis relies mainly on owner reports and assessment scales, which are inherently subjective. This study is the first to propose an objective method for automated assessment of ADHD-like behavior based on video taken in a consultation room. We trained a machine learning classifier to differentiate between dogs clinically treated in the context of ADHD-like behavior and a healthy control group with 81% accuracy; we then used its output to score the degree of exhibited ADHD-like behavior. In a preliminary evaluation in a clinical context, the H-score was reduced in 8 out of 11 patients receiving medical treatment for excessive ADHD-like behavior. We further discuss the potential applications of the provided artifacts in clinical settings, based on feedback on the H-score received from a focus group of four behavior experts. Full article
(This article belongs to the Special Issue Animal-Centered Computing)

Article
Deep Learning Classification of Canine Behavior Using a Single Collar-Mounted Accelerometer: Real-World Validation
Animals 2021, 11(6), 1549; https://doi.org/10.3390/ani11061549 - 25 May 2021
Cited by 1 | Viewed by 2629
Abstract
Collar-mounted canine activity monitors can use accelerometer data to estimate dog activity levels, step counts, and distance traveled. With recent advances in machine learning and embedded computing, much more nuanced and accurate behavior classification has become possible, giving these affordable consumer devices the potential to improve the efficiency and effectiveness of pet healthcare. Here, we describe a novel deep learning algorithm that classifies dog behavior at sub-second resolution using commercial pet activity monitors. We built machine learning training databases from more than 5000 videos of more than 2500 dogs and ran the algorithms in production on more than 11 million days of device data. We then surveyed project participants representing 10,550 dogs, who provided 163,110 event responses to validate real-world detection of eating and drinking behavior. The resultant algorithm achieved a sensitivity and specificity of 0.949 and 0.999, respectively, for detecting drinking behavior, and 0.988 and 0.983 for eating behavior. We also demonstrated detection of licking (0.772, 0.990), petting (0.305, 0.991), rubbing (0.729, 0.996), scratching (0.870, 0.997), and sniffing (0.610, 0.968). We show that the devices’ position on the collar had no measurable impact on performance. In production, users reported a true positive rate of 95.3% for eating (among 1514 users) and of 94.9% for drinking (among 1491 users). The study demonstrates the accurate detection of important health-related canine behaviors using a collar-mounted accelerometer. We trained and validated our algorithms on a large and realistic training dataset, and we assessed and confirmed accuracy in production via user validation. Full article
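The sensitivity and specificity pairs reported above can be read as standard confusion-matrix rates. A minimal sketch, using hypothetical confusion counts (the study's raw counts are not given here):

```python
# Sensitivity and specificity from confusion counts, as reported per behavior.
# The counts below are illustrative only, chosen to reproduce the drinking
# figures (0.949, 0.999); they are not from the study.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: fraction of actual events that were detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of non-events correctly rejected."""
    return tn / (tn + fp)

# Hypothetical confusion counts for a "drinking" detector:
tp, fn, tn, fp = 949, 51, 9990, 10
print(round(sensitivity(tp, fn), 3))  # 0.949
print(round(specificity(tn, fp), 3))  # 0.999
```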
(This article belongs to the Special Issue Animal-Centered Computing)

Article
Exploring How White-Faced Sakis Control Digital Visual Enrichment Systems
Animals 2021, 11(2), 557; https://doi.org/10.3390/ani11020557 - 20 Feb 2021
Cited by 1 | Viewed by 1835
Abstract
Computer-enabled screen systems containing visual elements have long been employed with captive primates for assessing preferences and reactions and for husbandry reasons. These screen systems typically play visual enrichment to primates without them choosing to trigger the system and without their consent. Yet what videos primates, especially monkeys, would prefer to watch of their own volition, and how to design computers and methods that allow choice, remain open questions. In this study, we designed and tested, over several weeks, an enrichment system that lets white-faced saki monkeys trigger different visual stimuli in their regular zoo habitat while automatically logging and recording their interactions. Analysing these data, we show that the sakis triggered underwater and worm videos more than forest, abstract art, and animal videos, and a no-stimulus control condition. We also note that the sakis used the device significantly less when it played animal videos compared to other conditions. Yet plotting the data over time revealed an engagement bell curve, suggesting confounding factors of novelty and habituation. As such, it is unknown whether the stimuli or the device usage curve caused the changes in the sakis’ interactions over time. Observing the sakis’ behaviours and working with zoo personnel, we noted that the stimulus conditions significantly decreased the sakis’ scratching behaviour. For the research community, this study builds on methods that allow animals to control computers in a zoo environment, highlighting problems in quantifying animal interactions with computer devices. Full article
(This article belongs to the Special Issue Animal-Centered Computing)

Article
Music for Monkeys: Building Methods to Design with White-Faced Sakis for Animal-Driven Audio Enrichment Devices
Animals 2020, 10(10), 1768; https://doi.org/10.3390/ani10101768 - 30 Sep 2020
Cited by 5 | Viewed by 3056
Abstract
Computer systems for primates to listen to audio have been researched for a long time. However, there is a lack of investigation into what kinds of sounds primates would prefer to listen to, how to quantify their preferences, and how audio systems and methods can be designed in an animal-focused manner. One pressing question is whether, given the choice to control an audio system, primates would or could use such a system. In this study, we design an audio enrichment prototype and method for white-faced sakis that allows them to listen to different sounds in their regular zoo habitat while automatically logging their interactions. Focusing on animal-centred design, this prototype was built from low-fidelity testing of different forms within the sakis’ enclosure and from gathering requirements from those who care for and view the animals. This process of designing in a participatory manner with the sakis resulted in an interactive system that was shown to be viable, non-invasive, highly interactive, and easy to use in a zoo habitat. Recordings of the sakis’ interactions demonstrated that the sakis triggered traffic audio more than silence, rain sounds, zen, and electronic music. The data and method also highlight the benefit of a longitudinal study within the animals’ own environment to mitigate the novelty effect and the day-to-day varying rhythms of the animals and the zoo environment. This study builds on animal-centred methods and design paradigms to allow the monitoring of animals’ behaviours in zoo environments, demonstrating that useful data can be yielded from primate-controlled devices. For the Animal-Computer Interaction community, this is the first audio enrichment system used in a zoo context within the animals’ own environment over a long period of time that gives the primates control over their interactions and records these automatically. Full article
(This article belongs to the Special Issue Animal-Centered Computing)

Article
Machine-Learning Techniques Can Enhance Dairy Cow Estrus Detection Using Location and Acceleration Data
Animals 2020, 10(7), 1160; https://doi.org/10.3390/ani10071160 - 8 Jul 2020
Cited by 4 | Viewed by 1427
Abstract
The aim of this study was to assess combining location, acceleration and machine learning technologies to detect estrus in dairy cows. Data were obtained from 12 cows, which were monitored continuously for 12 days. A neck-mounted device collected 25,684 records for location and acceleration. Four machine-learning approaches were tested (K-nearest neighbor (KNN), back-propagation neural network (BPNN), linear discriminant analysis (LDA), and classification and regression tree (CART)) to automatically identify cows in estrus from estrus indicators determined by principal component analysis (PCA) of twelve behavioral metrics, which were: duration of standing, duration of lying, duration of walking, duration of feeding, duration of drinking, switching times between activity and lying, steps, displacement, average velocity, walking times, feeding times, and drinking times. The study showed that the neck tag had a static and dynamic positioning accuracy of 0.25 ± 0.06 m and 0.45 ± 0.15 m, respectively. In the 0.5-h, 1-h, and 1.5-h time windows, the machine learning approaches ranged from 73.3 to 99.4% for sensitivity, from 50 to 85.7% for specificity, from 77.8 to 95.8% for precision, from 55.6 to 93.7% for negative predictive value (NPV), from 72.7 to 95.4% for accuracy, and from 78.6 to 97.5% for F1 score. We found that the BPNN algorithm with 0.5-h time window was the best predictor of estrus in dairy cows. Based on these results, the integration of location, acceleration, and machine learning methods can improve dairy cow estrus detection. Full article
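The pipeline described above, behavioral metrics reduced by PCA and then fed to a classifier, can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' implementation; KNN stands in for any of the four tested classifiers, and all data shapes and parameters are assumptions:

```python
# Sketch of the estrus-detection pipeline: PCA over 12 behavioral metrics,
# then a classifier on the principal components. Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # 200 time windows x 12 behavioral metrics
y = rng.integers(0, 2, size=200)      # 1 = estrus, 0 = non-estrus (synthetic labels)

# Reduce the 12 metrics to a few principal components, then classify.
model = make_pipeline(PCA(n_components=3), KNeighborsClassifier(n_neighbors=5))
model.fit(X[:150], y[:150])           # train on the first 150 windows
preds = model.predict(X[150:])        # predict the held-out 50 windows
print(preds.shape)                    # (50,)
```

On real data, the same held-out predictions would feed the sensitivity, specificity, precision, NPV, accuracy, and F1 metrics the study reports per time window.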
(This article belongs to the Special Issue Animal-Centered Computing)

Article
Possibility of Autonomous Estimation of Shiba Goat’s Estrus and Non-Estrus Behavior by Machine Learning Methods
Animals 2020, 10(5), 771; https://doi.org/10.3390/ani10050771 - 29 Apr 2020
Viewed by 896
Abstract
Mammalian behavior is typically monitored by observation. However, direct observation requires a substantial amount of effort and time if the number of mammals to be observed is large or if the observation is conducted over a prolonged period. In this study, machine learning methods such as hidden Markov models (HMMs), random forests, support vector machines (SVMs), and neural networks were applied to detect and estimate whether a goat is in estrus based on its behavior, and the adequacy of each method was verified. Goat tracking data were obtained using a video tracking system and used to estimate whether goats in “estrus” or “non-estrus” were in either of two states: “approaching the male” or “standing near the male”. Overall, the percentage concordance (PC) of the random forest was the highest. However, its PC for goats whose data were not included in the training sets was relatively low, suggesting that random forests tend to overfit the training data. The PCs of HMMs and SVMs were also high; considering the computation time and the HMM’s advantage as a time-series model, the HMM is the better method. The PC of the neural network was low overall; however, if more goat data were acquired, a neural network could become an adequate estimation method. Full article
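Percentage concordance, the agreement measure used to compare the methods above, is simply the share of observations where the estimated and observed states match. A minimal sketch with hypothetical state sequences (the state names follow the abstract; the data are invented):

```python
# Percentage concordance (PC) between estimated and observed behavioral states.

def percentage_concordance(predicted, observed) -> float:
    """Percentage of time steps where the estimated state matches the observed one."""
    matches = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * matches / len(observed)

# Hypothetical sequences of observed vs. model-estimated states:
observed  = ["approaching", "standing", "standing", "approaching", "standing"]
predicted = ["approaching", "standing", "approaching", "approaching", "standing"]
print(percentage_concordance(predicted, observed))  # 80.0
```

Computing PC separately for goats inside and outside the training set, as the study does, is what exposes the random forest's overfitting.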
(This article belongs to the Special Issue Animal-Centered Computing)
