Sensors and Artificial Intelligence for Wildlife Conservation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: closed (22 February 2022) | Viewed by 43026

Special Issue Editors


Guest Editor
School of Computer Science and Mathematics, Liverpool John Moores University, Byrom Street, Liverpool L3 3AF, UK
Interests: artificial intelligence; machine learning; deep learning; object detection; conservation; e-health

Guest Editor
School of Computer Science and Mathematics, Liverpool John Moores University, Byrom Street, Liverpool L3 3AF, UK
Interests: artificial intelligence; machine learning; deep learning; computer vision; technology in conservation and e-health

Special Issue Information

Dear Colleagues,

We are pleased to announce a new Special Issue on the use of sensors and artificial intelligence in wildlife conservation. We are soliciting both review and original research articles on the novel use of data obtained from sensors (camera traps, cameras, microphones, unoccupied aerial, terrestrial, and aquatic vehicles, and any other sensor platforms that could support wildlife conservation), combined with AI, to manage and protect wildlife globally. The Special Issue is open to contributions ranging from systems that combat poaching and protect wildlife, support wildlife management and conservation, enable animal counting and tracking, monitor and protect the environment, support biodiversity assessments, and monitor forest health and quality, to novel approaches to sensor fusion for remote sensing. Original contributions that look at integrated sensor-based technologies and wide-area communications across remote sensing platforms (land, sea, air- and spaceborne) are also encouraged.

Dr. Paul Fergus
Dr. Carl Chalmers
Prof. Dr. Serge Wich
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Sensors is an international, peer-reviewed, open access, semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multispectral remote sensing
  • Hyperspectral remote sensing
  • LiDAR
  • Sensor fusion
  • Land-, sea-, air- and space-based monitoring
  • Machine learning
  • Convolutional neural networks (1D, 2D, 3D, etc.)
  • High-performance inferencing
  • Time series analysis
  • Poaching
  • Wildlife conservation
  • Animal counting
  • Forest monitoring
  • Land use/cover change
  • Environment monitoring
  • Robotics

Published Papers (10 papers)


Research


11 pages, 12723 KiB  
Article
SealID: Saimaa Ringed Seal Re-Identification Dataset
by Ekaterina Nepovinnykh, Tuomas Eerola, Vincent Biard, Piia Mutka, Marja Niemi, Mervi Kunnasranta and Heikki Kälviäinen
Sensors 2022, 22(19), 7602; https://doi.org/10.3390/s22197602 - 7 Oct 2022
Cited by 9 | Viewed by 2240
Abstract
Wildlife camera traps and crowd-sourced image material provide novel possibilities to monitor endangered animal species. The massive data volumes call for automatic methods to solve various tasks related to population monitoring, such as the re-identification of individual animals. The Saimaa ringed seal (Pusa hispida saimensis) is an endangered subspecies only found in Lake Saimaa, Finland, and is one of the few existing freshwater seal species. Ringed seals have permanent pelage patterns that are unique to each individual and can be used to identify individuals. A large variation in poses, further exacerbated by the deformable nature of seals, together with varying appearance and low contrast between the ring pattern and the rest of the pelage, makes the Saimaa ringed seal re-identification task very challenging, providing a good benchmark by which to evaluate state-of-the-art re-identification methods. We therefore make our Saimaa ringed seal image (SealID) dataset (N = 57) publicly available for research purposes. In this paper, the dataset is described, an evaluation protocol for re-identification methods is proposed, and results for two baseline methods (HotSpotter and NORPPA) are provided.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
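As a rough illustration of the kind of evaluation protocol used for re-identification benchmarks such as SealID, the sketch below computes top-k re-identification accuracy from image embeddings compared by cosine similarity. This is a minimal sketch under assumed inputs; it does not reproduce the paper's actual protocol or the HotSpotter/NORPPA pipelines, and all data are hypothetical.

```python
import numpy as np

def top_k_accuracy(query_emb, query_ids, db_emb, db_ids, k=5):
    """Fraction of queries whose true individual appears among the k
    most similar database images (cosine similarity)."""
    # Normalise embeddings so a dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = db_emb / np.linalg.norm(db_emb, axis=1, keepdims=True)
    sims = q @ d.T                               # (n_queries, n_db)
    top_k = np.argsort(-sims, axis=1)[:, :k]     # indices of the k best matches
    hits = [query_ids[i] in db_ids[top_k[i]] for i in range(len(query_ids))]
    return float(np.mean(hits))

# Toy example: 128-D embeddings, 10 queries against a 100-image database
# of 57 individuals (matching the SealID individual count).
rng = np.random.default_rng(0)
print(top_k_accuracy(rng.normal(size=(10, 128)), rng.integers(0, 57, 10),
                     rng.normal(size=(100, 128)), rng.integers(0, 57, 100)))
```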

24 pages, 3114 KiB  
Article
Understanding External Influences on Target Detection and Classification Using Camera Trap Images and Machine Learning
by Sally O. A. Westworth, Carl Chalmers, Paul Fergus, Steven N. Longmore, Alex K. Piel and Serge A. Wich
Sensors 2022, 22(14), 5386; https://doi.org/10.3390/s22145386 - 19 Jul 2022
Cited by 4 | Viewed by 2684
Abstract
Using machine learning (ML) to automate camera trap (CT) image processing is advantageous for time-sensitive applications. However, little is currently known about the factors influencing such processing. Here, we evaluate the influence of occlusion, distance, vegetation type, size class, height, subject orientation towards the CT, species, time of day, colour, and analyst performance on wildlife/human detection and classification in CT images from western Tanzania. Additionally, we compared the detection and classification performance of analyst and ML approaches. We obtained wildlife data through pre-existing CT images and human data using voluntary participants in CT experiments, and evaluated the analyst and ML approaches at both the detection and classification levels. Distance and occlusion, coupled with increased vegetation density, had the most significant effect on detection probability (DP) and correct classification (CC). Overall, the results indicate a significantly higher DP (81.1%) and CC (76.6%) for the analyst approach compared to ML, which detected 41.1% and correctly classified 47.5% of wildlife within CT images. However, both methods presented similar probabilities when detecting humans: 69.4% (ML) versus 71.8% (analysts) for daylight CT images, and 17.6% (ML) versus 16.2% (analysts) for dusk CT images. Provided that users carefully follow the recommendations given, we expect DP and CC to increase. In turn, the ML approach to CT image processing would be an excellent provision to support time-sensitive threat monitoring for biodiversity conservation.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
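At its core, the analyst-versus-ML comparison above contrasts proportions of detected subjects between two approaches. Purely as a hedged sketch (the counts below are illustrative stand-ins loosely echoing the reported 81.1% vs. 41.1% DP, not the study's data or analysis code), such a comparison could look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency

total = 1000                                   # hypothetical number of images
detected = {"analyst": 811, "ml": 411}         # hypothetical detection counts

dp = {k: v / total for k, v in detected.items()}
print(dp)                                      # detection probability (DP) per approach

# 2x2 contingency table (detected vs. missed, analyst vs. ML) and a
# chi-squared test of whether the two detection probabilities differ.
table = np.array([[detected["analyst"], total - detected["analyst"]],
                  [detected["ml"],      total - detected["ml"]]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.3g}")
```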

16 pages, 41564 KiB  
Article
Machine-Learning Approach for Automatic Detection of Wild Beluga Whales from Hand-Held Camera Pictures
by Voncarlos M. Araújo, Ankita Shukla, Clément Chion, Sébastien Gambs and Robert Michaud
Sensors 2022, 22(11), 4107; https://doi.org/10.3390/s22114107 - 28 May 2022
Cited by 4 | Viewed by 3263
Abstract
A key aspect of ocean protection consists in estimating the abundance of marine mammal populations within their habitat, which is usually accomplished using visual inspection and cameras from line-transect ships, small boats, and aircraft. However, marine mammal observation through vessel surveys requires significant workforce resources, including for the post-processing of pictures, and is further challenged by animal bodies being partially hidden underwater, small object sizes, occlusion among objects, and distracter objects (e.g., waves, sun glare, etc.). To relieve the human experts' workload while improving observation accuracy, we propose a novel system for automating the detection of beluga whales (Delphinapterus leucas) in the wild from pictures. Our system relies on a dataset named Beluga-5k, containing more than 5.5 thousand pictures of belugas. First, to improve the dataset's annotation, we designed a semi-manual strategy for efficiently annotating candidates in images with single (i.e., one beluga) and multiple (i.e., two or more belugas) subjects. Second, we studied the performance of three off-the-shelf object-detection algorithms (Mask-RCNN, SSD, and YOLO v3-Tiny) on the Beluga-5k dataset. Afterward, we set YOLO v3-Tiny as the detector, integrating single- and multiple-individual images into the model training. Our fine-tuned CNN-backbone detector trained with semi-manual annotations is able to detect belugas with high accuracy despite the presence of distracter objects (i.e., 97.05 mAP@0.5). Finally, our proposed method is able to detect overlapping/occluded multiple individuals in images (beluga whales that swim in groups); for instance, it detected 688 out of 706 belugas in 200 images containing multiple individuals, achieving 98.29% precision and 99.14% recall.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
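Precision and recall figures such as those above come from matching predicted boxes to ground-truth boxes at an intersection-over-union (IoU) threshold. Below is a minimal, generic sketch of that bookkeeping for axis-aligned boxes; it is not the authors' evaluation code, and the boxes are toy values.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match(preds, gts, thr=0.5):
    """Greedily match predictions to ground truth; return TP, FP, FN."""
    unmatched, tp = list(gts), 0
    for p in preds:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= thr:
            unmatched.remove(best)
            tp += 1
    return tp, len(preds) - tp, len(unmatched)

tp, fp, fn = match([(0, 0, 10, 10), (20, 20, 30, 30)], [(1, 1, 10, 10)])
print(f"precision={tp / (tp + fp):.2f}, recall={tp / (tp + fn):.2f}")
```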

16 pages, 5582 KiB  
Article
Comparing Class-Aware and Pairwise Loss Functions for Deep Metric Learning in Wildlife Re-Identification
by Nkosikhona Dlamini and Terence L. van Zyl
Sensors 2021, 21(18), 6109; https://doi.org/10.3390/s21186109 - 12 Sep 2021
Cited by 4 | Viewed by 2348
Abstract
Similarity learning using deep convolutional neural networks has been applied extensively to computer vision problems. This attraction is supported by its success in one-shot and zero-shot classification applications. Advances in similarity learning are essential for smaller datasets, or datasets in which few labelled examples exist per class, such as wildlife re-identification. Improving the performance of similarity learning models goes hand in hand with developing new sampling techniques and designing loss functions better suited to training similarity in neural networks. However, the impact of these advances is usually tested on larger datasets, with limited attention given to smaller, imbalanced datasets such as those found in wildlife re-identification. To this end, we test advances in loss functions for similarity learning on several animal re-identification tasks. We add two new public datasets, Nyala and Lions, to the challenge of animal re-identification. Our results are state of the art on all public datasets tested except Pandas. The achieved Top-1 Recall is 94.8% on the Zebra dataset, 72.3% on the Nyala dataset, 79.7% on the Chimps dataset, and 88.9% on the Tiger dataset. For the Lion dataset, we set a new benchmark at 94.8%. We find that the best-performing loss function across all datasets is generally the triplet loss; however, it offers only a marginal improvement over Proxy-NCA models. We demonstrate that no single combination of neural network architecture and loss function is best suited to all datasets, although VGG-11 may be the most robust first choice. Our results highlight the need for broader experimentation and exploration of loss functions and neural network architectures for the challenging task of wildlife re-identification, beyond classical benchmarks.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
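For readers unfamiliar with the loss functions compared here, the sketch below shows a standard triplet margin loss on L2-normalised embeddings in PyTorch: same-individual pairs are pulled together and different-individual pairs pushed apart. The margin value and batch shapes are illustrative assumptions, not the paper's configurations.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss on L2-normalised embeddings."""
    anchor, positive, negative = (F.normalize(x, dim=1)
                                  for x in (anchor, positive, negative))
    d_ap = (anchor - positive).pow(2).sum(dim=1)   # anchor-positive distance
    d_an = (anchor - negative).pow(2).sum(dim=1)   # anchor-negative distance
    return F.relu(d_ap - d_an + margin).mean()

# Toy batch: 32 triplets of 128-D embeddings.
emb = lambda: torch.randn(32, 128)
print(triplet_loss(emb(), emb(), emb()))
```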

13 pages, 1600 KiB  
Communication
Improving Animal Monitoring Using Small Unmanned Aircraft Systems (sUAS) and Deep Learning Networks
by Meilun Zhou, Jared A. Elmore, Sathishkumar Samiappan, Kristine O. Evans, Morgan B. Pfeiffer, Bradley F. Blackwell and Raymond B. Iglay
Sensors 2021, 21(17), 5697; https://doi.org/10.3390/s21175697 - 24 Aug 2021
Cited by 17 | Viewed by 4034
Abstract
In recent years, small unmanned aircraft systems (sUAS) have been used widely to monitor animals because of their customizability, ease of operation, ability to access difficult-to-navigate places, and potential to minimize disturbance to animals. Automatic identification and classification of animals in images acquired using a sUAS may solve critical problems such as monitoring large areas with high vehicle traffic for animals in order to prevent collisions, such as animal-aircraft collisions at airports. In this research, we demonstrate automated identification of four animal species using deep learning classification models trained on sUAS-collected images. We used a sUAS mounted with visible-spectrum cameras to capture 1288 images of four animal species: cattle (Bos taurus), horses (Equus caballus), Canada Geese (Branta canadensis), and white-tailed deer (Odocoileus virginianus). We chose these animals because they were readily accessible and easily identifiable within aerial imagery, and because white-tailed deer and Canada Geese are considered aviation hazards. A four-class classification problem involving these species was developed from the acquired data using deep learning neural networks. We studied the performance of two deep neural network models: convolutional neural networks (CNN) and deep residual networks (ResNet). Results indicate that the 18-layer ResNet model, ResNet-18, may be an effective algorithm for classifying animals while using a relatively small number of training samples. The best ResNet architecture produced a 99.18% overall accuracy (OA) in animal identification and a Kappa statistic of 0.98. The highest OA and Kappa produced by the CNN were 84.55% and 0.79, respectively. These findings suggest that ResNet is effective at distinguishing among the four species tested and shows promise for classifying larger datasets of more diverse animals.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
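A typical way to set up a four-class classifier like the ResNet-18 described above is to fine-tune a pretrained torchvision model with a replaced head. The sketch below (torchvision >= 0.13) is a minimal assumed setup; the hyperparameters and dummy batch are placeholders, not the paper's training regime.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-18 with the final layer swapped for a
# 4-class head (cattle, horse, Canada Goose, white-tailed deer).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```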

24 pages, 2258 KiB  
Article
Using Machine Learning for Remote Behaviour Classification—Verifying Acceleration Data to Infer Feeding Events in Free-Ranging Cheetahs
by Lisa Giese, Jörg Melzheimer, Dirk Bockmühl, Bernd Wasiolka, Wanja Rast, Anne Berger and Bettina Wachter
Sensors 2021, 21(16), 5426; https://doi.org/10.3390/s21165426 - 11 Aug 2021
Cited by 4 | Viewed by 3057
Abstract
Behavioural studies of elusive wildlife species are challenging but important when these species are threatened and involved in human-wildlife conflicts. Accelerometers (ACCs) and supervised machine learning algorithms (MLAs) are valuable tools for remotely determining behaviours. Here, we used five captive cheetahs in Namibia to test the applicability of ACC data for identifying six behaviours, applying six MLAs to data we ground-truthed by direct observation. We included two ensemble learning approaches and a probability threshold to improve prediction accuracy. We then used the model to identify the behaviours of four free-ranging male cheetahs. Feeding behaviours identified by the model and matched with corresponding GPS clusters were verified against previously identified kill sites in the field. For the captive cheetahs, the MLAs and the two ensemble learning approaches achieved precision (recall) ranging from 80.1% to 100.0% (87.3% to 99.2%) for resting, walking, and trotting/running behaviour, from 74.4% to 81.6% (54.8% to 82.4%) for feeding behaviour, and from 0.0% to 97.1% (0.0% to 56.2%) for drinking and grooming behaviour. Applied to the ACC data of the free-ranging cheetahs, the model successfully identified all nine kill sites and 17 of the 18 feeding events of the two brother groups. We demonstrate that our behavioural model reliably detects feeding events of free-ranging cheetahs, with useful applications for determining cheetah kill sites and helping to mitigate human-cheetah conflicts.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
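The pipeline described above (windowed accelerometer data, a supervised classifier, and a probability threshold that leaves low-confidence windows unclassified) can be sketched as follows. The feature set, window length, threshold, and simulated data are illustrative assumptions; the paper's actual features and models are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window):
    """Simple summary statistics of one (n_samples, 3) ACC window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

rng = np.random.default_rng(1)
X = np.array([features(rng.normal(size=(64, 3))) for _ in range(500)])
y = rng.integers(0, 6, size=500)              # six behaviour classes

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
proba = clf.predict_proba(X[:10])
# Apply a probability threshold: predictions below 0.5 become "unclassified" (-1).
pred = np.where(proba.max(axis=1) >= 0.5, proba.argmax(axis=1), -1)
print(pred)
```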

24 pages, 3984 KiB  
Article
An Evaluation of the Factors Affecting ‘Poacher’ Detection with Drones and the Efficacy of Machine-Learning for Detection
by Katie E. Doull, Carl Chalmers, Paul Fergus, Steve Longmore, Alex K. Piel and Serge A. Wich
Sensors 2021, 21(12), 4074; https://doi.org/10.3390/s21124074 - 13 Jun 2021
Cited by 10 | Viewed by 4406
Abstract
Drones are increasingly used in conservation to tackle the illegal poaching of animals. An important aspect of using drones for this purpose is establishing the technological and environmental factors that increase the chances of success when detecting poachers. Recent studies have focused on investigating these factors, and this research builds upon them while also exploring the efficacy of machine learning for automated detection. In an experimental setting with voluntary test subjects, various factors were tested for their effect on detection probability: camera type (visible spectrum, RGB, versus thermal infrared, TIR), time of day, camera angle, canopy density, and walking/stationary test subjects. The drone footage was analysed both manually by volunteers and through automated detection software. A generalised linear model with a logit link function was used to statistically analyse the data for both types of analysis. The findings showed that using a TIR camera improved detection probability, particularly at dawn and with a 90° camera angle. An oblique angle was more effective during RGB flights, and whether test subjects were walking or stationary did not influence detection with either camera. Probability of detection decreased with increasing vegetation cover. The machine-learning software achieved a detection probability of 0.558; however, it produced nearly five times more false positives than manual analysis, while manual analysis produced 2.5 times more false negatives than automated detection. Although manual analysis produced more true positive detections than automated detection in this study, the automated software gives promising results, and the advantages of automated methods over manual analysis make it a promising tool with the potential to be successfully incorporated into anti-poaching strategies.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
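A generalised linear model with a logit link, as used in the analysis above, can be fitted with statsmodels along the following lines. The factor names mirror those in the abstract, but the data are simulated placeholders rather than the study's records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated flight records: one row per detection opportunity.
rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "detected": rng.integers(0, 2, n),          # 1 = subject detected
    "camera": rng.choice(["TIR", "RGB"], n),
    "time": rng.choice(["dawn", "midday", "dusk"], n),
    "angle": rng.choice([45, 90], n),
    "canopy": rng.uniform(0, 1, n),             # canopy density
})

# Binomial GLM with a logit link (logistic regression on detection).
fit = smf.glm("detected ~ camera + time + C(angle) + canopy",
              data=df, family=sm.families.Binomial()).fit()
print(fit.summary())
```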

17 pages, 9958 KiB  
Article
Wildlife Monitoring on the Edge: A Performance Evaluation of Embedded Neural Networks on Microcontrollers for Animal Behavior Classification
by Juan P. Dominguez-Morales, Lourdes Duran-Lopez, Daniel Gutierrez-Galan, Antonio Rios-Navarro, Alejandro Linares-Barranco and Angel Jimenez-Fernandez
Sensors 2021, 21(9), 2975; https://doi.org/10.3390/s21092975 - 23 Apr 2021
Cited by 15 | Viewed by 2951
Abstract
Monitoring the behaviour of animals living in wild or semi-wild environments is of great interest to the biologists who work with them. The difficulty and cost of implanting electronic devices in such animals mean that these devices must be robust and have low power consumption to maximize their battery life. Designing a custom smart device that can detect multiple animal behaviours while meeting these constraints presents a major challenge, which is addressed in this work. We propose an edge-computing solution that embeds an ANN in a microcontroller collecting data from an IMU sensor to detect three different horse gaits. All computation is performed on the microcontroller to reduce the amount of data transmitted via wireless radio, since sending information is one of the most power-consuming tasks in this type of device. Multiple ANNs were implemented and deployed on different microcontroller architectures in order to find the best balance between energy consumption and computing performance. The results show that the embedded networks obtain up to 97.96% ± 1.42% accuracy, achieving an energy efficiency of 450 Mops/s/watt.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
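To give a flavour of the workflow behind such embedded networks, the sketch below trains a deliberately tiny Keras MLP on IMU-style features and converts it to a dynamic-range-quantised TensorFlow Lite flatbuffer, a common route to microcontroller deployment (e.g., via TFLite Micro). The layer sizes, feature count, and training data are illustrative assumptions, not the architectures benchmarked in the paper.

```python
import numpy as np
import tensorflow as tf

# Tiny MLP sized to fit comfortably in a microcontroller's flash/RAM.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),               # e.g., 3-axis accel + gyro
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # three horse gaits
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(np.random.randn(256, 6), np.random.randint(0, 3, 256),
          epochs=1, verbose=0)

# Convert to a quantised TFLite flatbuffer for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("gait_model.tflite", "wb") as f:
    f.write(converter.convert())
```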

22 pages, 1003 KiB  
Article
Behaviour Classification on Giraffes (Giraffa camelopardalis) Using Machine Learning Algorithms on Triaxial Acceleration Data of Two Commonly Used GPS Devices and Its Possible Application for Their Management and Conservation
by Stefanie Brandes, Florian Sicks and Anne Berger
Sensors 2021, 21(6), 2229; https://doi.org/10.3390/s21062229 - 23 Mar 2021
Cited by 11 | Viewed by 8310
Abstract
Averting today’s loss of biodiversity and ecosystem services can be achieved through conservation efforts, especially for keystone species. Giraffes (Giraffa camelopardalis) play an important role in sustaining Africa’s ecosystems but have been classified as ‘vulnerable’ on the IUCN Red List since 2016. Monitoring an animal’s behaviour in the wild helps to develop and assess its conservation management. One mechanism for remote tracking of wildlife behaviour is to attach accelerometers to animals to record their body movement. We tested two commercially available high-resolution accelerometers, e-obs and Africa Wildlife Tracking (AWT), attached to the top of the heads of three captive giraffes, and analysed the accuracy of automatic behaviour classification, focusing on the Random Forest algorithm. For both accelerometers, behaviours with less variety in head and neck movement could be better predicted (i.e., feeding above eye level, mean prediction accuracy e-obs/AWT: 97.6%/99.7%; drinking: 96.7%/97.0%) than those with a greater variety of body postures (such as standing: 90.7-91.0%/75.2-76.7%; rumination: 89.6-91.6%/53.5-86.5%). Nonetheless, both devices come with limitations, and the AWT in particular needs technological adaptation before being applied to animals in the wild. Given the prediction results, both are promising accelerometers for behavioural classification of giraffes; applied to free-ranging animals in combination with GPS tracking, these devices can contribute greatly to giraffe conservation.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
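Per-behaviour prediction accuracies like those quoted above are typically read off a confusion matrix of cross-validated predictions. The sketch below shows that bookkeeping for a Random Forest on simulated feature vectors; the behaviour labels, features, and data are placeholders only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
behaviours = ["feeding", "drinking", "standing", "rumination"]
X, y = rng.normal(size=(600, 9)), rng.integers(0, 4, 600)  # simulated windows

# Cross-validated predictions, then per-class accuracy from the
# confusion matrix diagonal (row-normalised, i.e., per-class recall).
pred = cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=5)
cm = confusion_matrix(y, pred)
for i, name in enumerate(behaviours):
    print(f"{name:10s} {cm[i, i] / cm[i].sum():.1%}")
```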

Review


47 pages, 742 KiB  
Review
Intelligent Systems Using Sensors and/or Machine Learning to Mitigate Wildlife–Vehicle Collisions: A Review, Challenges, and New Perspectives
by Irene Nandutu, Marcellin Atemkeng and Patrice Okouma
Sensors 2022, 22(7), 2478; https://doi.org/10.3390/s22072478 - 23 Mar 2022
Cited by 5 | Viewed by 7153
Abstract
Worldwide, the persistent trend of human and animal life losses, as well as damage to property, due to wildlife–vehicle collisions (WVCs) remains a significant source of concern for a broad range of stakeholders. To mitigate their occurrence and impact, many approaches are being adopted, with varying success. Because of their versatility and increasing efficiency, Artificial Intelligence-based methods have seen significant adoption. The present work extensively reviews the literature on intelligent systems incorporating sensor technologies and/or machine learning methods to mitigate WVCs. Our review includes an investigation of the key factors contributing to human-wildlife conflicts, as well as a discussion of the dominant state-of-the-art datasets used in the mitigation of WVCs. Our study combines a systematic review with bibliometric analysis. We find that most animal detection systems (excluding autonomous vehicles) rely neither on state-of-the-art datasets nor on recent breakthrough machine learning approaches. We therefore argue that the use of the latest datasets and machine learning techniques will minimize false detections and improve model performance. In addition, the present work covers a comprehensive list of associated challenges, ranging from failure to detect hotspot areas to limitations in training datasets. Future research directions identified include the design and development of algorithms for real-time animal detection systems. The latter provides a rationale for the applicability of our proposed solutions, for which we designed a continuous product development lifecycle to determine their feasibility.
(This article belongs to the Special Issue Sensors and Artificial Intelligence for Wildlife Conservation)
