Review

IoT and Machine Learning for Smart Bird Monitoring and Repellence: Techniques, Challenges, and Opportunities

by Samson O. Ooko 1,2,*, Emmanuel Ndashimye 3, Evariste Twahirwa 1 and Moise Busogi 3

1 African Center of Excellence in Internet of Things, College of Science and Technology, University of Rwanda, Kigali P.O. Box 3900, Rwanda
2 School of Postgraduate Studies, Adventist University of Africa, Nairobi P.O. Box 00503, Kenya
3 Department of Information Technology, Kigali Innovation City, Carnegie Mellon University Africa, Bumbogo BP 6150, Kigali, Rwanda
* Author to whom correspondence should be addressed.
IoT 2025, 6(3), 46; https://doi.org/10.3390/iot6030046
Submission received: 10 July 2025 / Revised: 31 July 2025 / Accepted: 4 August 2025 / Published: 7 August 2025

Abstract

The activities of birds present increasing challenges in agriculture, aviation, and environmental conservation. This has led to economic losses, safety risks, and ecological imbalances. Attempts have been made to address the problem, with traditional deterrent methods proving to be labour-intensive, environmentally unfriendly, and ineffective over time. Advances in artificial intelligence (AI) and the Internet of Things (IoT) present opportunities for enabling automated real-time bird detection and repellence. This study reviews recent developments (2020–2025) in AI-driven bird detection and repellence systems, emphasising the integration of image, audio, and multi-sensor data in IoT and edge-based environments. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework was used, with 267 studies initially identified and screened from key scientific databases. A total of 154 studies met the inclusion criteria and were analysed. The findings show the increasing use of convolutional neural networks (CNNs), YOLO variants, and MobileNet in visual detection, and the growing use of lightweight audio-based models such as BirdNET, MFCC-based CNNs, and TinyML frameworks for microcontroller deployment. Multi-sensor fusion is proposed to improve detection accuracy in diverse environments. Repellence strategies include sound-based deterrents, visual deterrents, predator-mimicking visuals, and adaptive AI-integrated systems. Deployment success depends on edge compatibility, power efficiency, and dataset quality. The limitations of current studies include species-specific detection challenges, data scarcity, environmental changes, and energy constraints. Future research should focus on tiny and lightweight AI models, standardised multi-modal datasets, and intelligent, behaviour-aware deterrence mechanisms suitable for precision agriculture and ecological monitoring.

1. Introduction

Birds damage crops worldwide in the period before harvest, causing losses that run into billions of dollars each year [1]. The birds feed on seeds, fruits, leaves, and grains, reducing crop yields and threatening food security for many communities [2]. This not only affects farmers’ livelihoods but also raises concerns about the availability of food for families that rely on those crops [3]. In East Africa, small-scale farmers face a battle each season as bird infestations take a significant toll on their cereal crops, with losses exceeding 20 percent of the produce [4,5]. Farmers apply traditional bird control methods [6], including propane cannons, reflective tapes, and physical controls such as nets and scarecrows [7]. Although these measures can improve yields, their effectiveness depends on the stage of crop growth and the duration of application [8]. Traditional methods are time-consuming, labour-intensive, and often ineffective unless applied consistently throughout the day. Methods that involve chemicals also pose environmental risks. Birds gradually adapt to static deterrents such as scarecrows, which lose effectiveness over time, and manual surveillance of large farms is impractical. Given these challenges, traditional bird control methods are inadequate for repelling birds on large-scale farms.
Recent advances in Artificial Intelligence (AI) [9], specifically in computer vision and machine learning, supported by the growing adoption of Internet of Things (IoT) technologies [10,11], present opportunities for automating bird detection and repellence [12,13]. These technologies can be used to detect, classify, and respond to bird activity in real time using edge computing devices such as drones, camera traps, and smart sensors [14,15,16]. In addition, AI-powered repellence techniques, ranging from adaptive sound emitters and automated lasers to predator-mimicking drones, are gaining popularity as more effective alternatives to traditional deterrent methods [17,18]. Despite growing interest and the need for real-time solutions, the research landscape on AI- and IoT-based bird detection and repellence remains fragmented [19]. Previous studies have focused on isolated components, such as classification models, wireless sensor deployment, or acoustic deterrents, without synthesising the full pipeline from data collection and model selection to edge deployment and deterrence actuation [20,21,22]. The growing volume of literature presents a challenge for practitioners and researchers trying to navigate the field.
As an initial step to addressing the gaps in this area of study, this paper presents a systematic review of recent studies (2020–2025) in AI-enabled bird detection and deterrence. Using the PRISMA framework, 154 peer-reviewed studies that leverage machine learning, computer vision, and IoT infrastructure for avian monitoring were identified and analysed from the initial 267. Unlike prior reviews [7,23,24], this study evaluates the full pipeline—from data collection and model architecture to deployment platforms and smart deterrence technologies. This review not only classifies the models and architectures used but also evaluates their real-world deployment readiness, data requirements, edge processing strategies, and integration with repellence technologies.
The key contributions of the study are as follows:
  • Machine learning techniques applied in bird detection are categorised and mapped, highlighting trends in lightweight models and edge compatibility.
  • Dataset types, collection methods, and preprocessing techniques used in training detection models are reviewed.
  • IoT architectures and communication protocols are evaluated, identifying strengths and limitations in cloud-based systems.
  • An analysis of bird repellence methods and their integration with intelligent detection systems is provided.
  • Key challenges are identified, and future research directions for building scalable, adaptive bird management systems are proposed.
By addressing these gaps, this review aims to support researchers, developers, and policymakers in designing effective, AI-powered solutions for sustainable bird monitoring and control.

2. Materials and Methods

The study followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to review the literature on IoT and machine learning for bird detection and repellence. This ensured a replicable and unbiased selection of relevant studies. The process consisted of four main steps as presented in Figure 1: Identification, Screening, Eligibility, and Inclusion.

2.1. Identification

The first step was identifying relevant studies. A comprehensive search was conducted in IEEE Xplore, ScienceDirect, Springer, and Google Scholar, covering studies published after 2020. The search included the following terms:
  • “Machine Learning + Bird Detection”
  • “Machine Learning + Bird Repellence”
  • “Computer Vision + Bird Detection”
  • “Acoustic Bird Detection”
  • “IoT + Bird Repellence”
  • “Artificial Intelligence + Bird Detection + Repellence”
These search terms helped capture a wide range of studies, from advanced edge computing applications to real-world agricultural use cases. This initial search retrieved 267 articles for potential inclusion.

2.2. Screening

With the initial pool of studies collected, the next phase was the screening phase. First, automated tools were used to eliminate duplicate entries, reducing the dataset to 253 papers. Then, a manual screening of the titles and abstracts was carried out to ensure that only studies directly related to our research topic were considered. Studies were excluded if they lacked machine learning applications, did not involve IoT technologies, or focused on general wildlife monitoring without specific reference to birds. After screening, 248 papers were identified for potential review, out of which 10 were not retrieved.

2.3. Eligibility

The full texts of the remaining 238 papers were manually reviewed to check if they met our predefined inclusion and exclusion criteria as presented in Table 1.

2.4. Inclusion

After applying the eligibility criteria, 154 papers were selected for data extraction and analysis. These studies provided insights into various aspects of IoT-based bird detection and repellence, including dataset characteristics, machine learning techniques, hardware implementations, model performance, and real-world applications. The extracted data focused on key themes such as datasets, machine learning algorithms, IoT architectures, connectivity, edge-based deployments, bird repellence techniques, and implementation challenges. The findings were synthesised to identify trends, gaps, and opportunities for further research.
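The study flow described above can be summarised as a simple arithmetic check. The sketch below recomputes how many studies were dropped at each PRISMA stage, using only the counts reported in this section (stage names are illustrative labels, not PRISMA terminology):

```python
# PRISMA-style flow of studies through the review stages reported above.
stages = [
    ("identified", 267),
    ("after duplicate removal", 253),
    ("after title/abstract screening", 248),
    ("full texts retrieved", 248 - 10),   # 10 reports could not be retrieved
    ("included after eligibility check", 154),
]

def excluded_per_stage(stages):
    """Return how many studies were dropped at each transition."""
    return [(b_name, a_count - b_count)
            for (_, a_count), (b_name, b_count) in zip(stages, stages[1:])]

for name, dropped in excluded_per_stage(stages):
    print(f"{name}: -{dropped}")
```

Running the check confirms the counts are internally consistent: 14 duplicates, 5 excluded at screening, 10 not retrieved, and 84 excluded at full-text review, leaving 154.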

3. Computer Vision-Based Detection

3.1. Datasets

The reliability and accuracy of AI models for bird detection and monitoring largely depend on the quality and diversity of the datasets used [25]. Bird datasets are highly dynamic, as birds move across varying environments with changes in lighting, weather, and habitat conditions. Therefore, researchers need large, well-annotated datasets that represent different species, behaviours, and environmental factors to build effective detection systems [26,27]. The datasets used are often visual or sensor-based, with some models employing a combination of two or three different datasets to enhance model performance [28]. Collecting the necessary data requires careful selection of collection methods, data sources, and preprocessing techniques to ensure the datasets are not only comprehensive but also suitable for training robust detection models. Table 2 gives a summary of the analysed datasets.
Different data collection methods have been applied across bird detection studies, depending on the research goals and available technology. The most common methods in the reviewed studies include video surveillance [31], image-based methods [29], motion detection [37], and multi-sensor approaches [54]. Data are collected mainly with cameras, which are affordable and easy to deploy, together with PIR sensors for automated detection systems, drones, radar, ultrasonic sensors, and microphones. Publicly available datasets and GPS data are also used to train models and to allow researchers to cross-validate findings against existing data.
The size of a dataset can impact the model’s performance. In the reviewed studies, dataset sizes ranged from fewer than 1000 images to over 300,000 samples. Models trained on larger datasets tended to report higher accuracy and generalizability when diverse environmental conditions and multiple bird species were represented. However, diminishing returns were evident beyond a certain threshold. For example, some studies using datasets of over 150,000 samples did not significantly outperform those trained on smaller ones, suggesting that data quality and diversity may be more critical than volume. Furthermore, studies working with fewer than 5000 samples often relied heavily on data augmentation and transfer learning to maintain acceptable performance. We estimate that a minimum of 10,000–20,000 well-annotated images or equivalent audio segments is required to achieve decent model accuracy in most bird detection scenarios when combined with strong preprocessing and augmentation pipelines.
Some studies reported dataset sizes in alternative formats, such as hours of video footage rather than individual image counts. Most studies relied on custom datasets [50], demonstrating the need for specialised data collection. Only a few studies use widely available datasets such as the Kaggle datasets and the COCO dataset. Other specialised datasets were also used, for example, NIPS4Bplus, Xeno Canto, and warblrb10k. The heavy reliance on custom datasets suggests that existing datasets may not always meet the specific requirements of bird detection models, especially when dealing with regional species or unique environmental conditions. While large datasets improve model robustness, many studies still rely on relatively small collections, making data augmentation essential. The lack of publicly available datasets suggests a strong need for more open-source contributions to the field.
Raw data alone is rarely sufficient for training machine learning models. Researchers apply preprocessing techniques to improve data quality and enhance model accuracy. The most commonly reported preprocessing methods were: annotation [34], data augmentation [38], image resizing and scaling [45], frame extraction [39], and feature extraction [53]. The variety of data collection methods, dataset sizes, and preprocessing techniques indicates that bird detection research is still evolving.
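The preprocessing steps listed above can be illustrated with a minimal NumPy sketch of resizing and augmentation. Real pipelines typically use OpenCV or torchvision; the function names, resolutions, and jitter range here are illustrative assumptions only:

```python
import numpy as np

# Minimal sketch of two common preprocessing steps from the reviewed
# studies: resizing to a fixed input size and simple augmentation
# (horizontal flip plus brightness jitter). Parameters are illustrative.

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize of an (H, W, C) image array."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random horizontal flip plus brightness jitter in [0.8, 1.2]."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                 # horizontal flip
    factor = rng.uniform(0.8, 1.2)         # brightness jitter
    return np.clip(img * factor, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
sample = augment(resize_nearest(frame, 224, 224), rng)
print(sample.shape)  # (224, 224, 3)
```

Augmentation of this kind is what lets the studies with fewer than 5000 samples stretch small custom datasets into acceptable training sets.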

3.2. Machine Learning Models

Machine learning models come in different architectures, each designed for specific strengths in detecting, classifying, and tracking objects. The studies reviewed used a variety of models, with some being applied in most studies. Table 3 provides a summary of the ML models used in different studies.
CNNs were the most frequently used model type. CNNs work by detecting patterns such as edges, textures, and shapes layer by layer, making them highly effective for image recognition tasks [66,67,68]. Variants like ResNet (Residual Networks) enhance CNNs by allowing deep networks to train more effectively without losing important details [69]. YOLO (You Only Look Once) models appeared in many studies [70,71], too. Unlike CNNs, which process an image in sections, YOLO treats the entire image as a single input, enabling real-time object detection [72]. Several versions of YOLO were used in the reviewed studies [73], with improvements in detection speed and accuracy. Faster R-CNN is widely recognised for its high accuracy in object detection tasks. Unlike YOLO, which prioritises speed, Faster R-CNN processes images in multiple steps, refining its predictions to improve precision [74]. This makes it a better choice for tasks where detection accuracy is more important than speed. MobileNet is used in low-power, edge-based applications. Unlike traditional CNNs, which require significant computational power, MobileNet is optimised to run on mobile devices, IoT sensors, and embedded systems [75]. VGG, Inception, and EfficientNet have deep learning capabilities but tend to require high computational resources and are not frequently used in bird detection. Traditional models like K-nearest neighbors (KNN), hidden Markov models (HMM), and support vector machines (SVM) have also been explored; these methods are often used as benchmarks but are generally less effective for large-scale image analysis [76]. Some studies experimented with combinations of architectures, demonstrating that integrating multiple models can improve performance [77].
Accuracy was the most commonly reported metric, with most studies reporting accuracies of 80–95 percent, confirming the effectiveness of deep learning models for detection and classification. Precision was used to measure how many of the detected objects were correct, with studies reporting precision of 0.80–0.90. Recall measured how well models detected all relevant objects in an image, with values ranging from 0.65 to above 0.95; higher values indicate fewer missed detections. Mean Average Precision (mAP), which offers a balanced view of precision and recall, ranged between 70 and 90 percent, indicating highly effective detection. Frames Per Second (FPS) was reported in six studies, with speeds between 1 and 60 FPS; the lower end is usable for monitoring but may not be adequate for fast-moving birds. Figure 2 gives a comparison of the precisions from different models.
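The metrics discussed above follow directly from true-positive, false-positive, and false-negative counts. A minimal sketch, with purely illustrative numbers (not taken from any reviewed study):

```python
# Detection metrics from TP/FP/FN counts. The counts below are
# illustrative only.

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def f1(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

# e.g. a detector that finds 90 birds correctly, raises 10 false alarms,
# and misses 20 birds:
p, r = precision(90, 10), recall(90, 20)
print(round(p, 3), round(r, 3), round(f1(p, r), 3))  # → 0.9 0.818 0.857
```

mAP extends this idea by averaging precision over recall thresholds and object classes, which is why it is the preferred single-number summary for object detectors.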
As AI moves towards real-world applications, the ability to run models on edge devices (such as Raspberry Pi, smartphones, or IoT sensors) is becoming increasingly important. Some studies reported edge-compatible models, showing that lightweight architectures like MobileNet and optimised CNNs are becoming more viable for real-world use.

4. Acoustic-Based Detection

As presented in Table 4, the most common hardware used for audio data collection is the AudioMoth, alongside other autonomous recording units (ARUs) and sound sensors. The AudioMoth has a detection range of 801–900 m, while other devices have ranges below 200 m, making the AudioMoth appropriate for large-scale projects. The microcontrollers and processors used include STM32, Arduino, ESP32, and ARM, with Raspberry Pi and Jetson Nano also used in a few studies.
CNN-based models, including variants such as ResNet, were commonly used, alongside other approaches including MLP, SVM, Transformer, VAE, and DTW.
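Acoustic classifiers of this kind typically operate on a time-frequency representation of the audio rather than the raw waveform. The sketch below shows, in plain NumPy, how a clip is framed and converted into a log-power spectrogram of the kind fed to MFCC/CNN pipelines; real systems usually use librosa or torchaudio, and the frame and hop sizes here are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: raw audio clip -> log-power spectrogram feature map.
# Frame length and hop size are illustrative; real pipelines would add a
# mel filterbank and DCT to obtain MFCCs.

def log_power_spectrogram(signal, frame_len=512, hop=256):
    """Frame the signal, apply a Hann window, and take log |FFT|^2."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(power + 1e-10)          # log compression, avoid log(0)

sr = 44_100                                # 44.1 kHz sampling rate
t = np.linspace(0, 1.0, sr, endpoint=False)
clip = np.sin(2 * np.pi * 4000 * t)        # 4 kHz test tone, roughly a whistle
spec = log_power_spectrogram(clip)
print(spec.shape)                          # (frames, frequency bins)
```

The resulting 2-D array is treated exactly like an image, which is why CNN architectures transfer so readily from visual to acoustic bird detection.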

5. Connectivity

Once data are captured, they must be transmitted efficiently. The reviewed studies indicate a mix of wired and wireless communication protocols, with a strong preference for wireless due to its flexibility and scalability. The common wireless communication technologies include:
  • Wi-Fi—This enables high-speed data transfer and has been applied in several studies. However, it has a limited range and high power consumption, making it unsuitable for large-scale, battery-powered networks.
  • LoRa (Long Range, Low Power)—This has also been used and is ideal for IoT applications in agriculture and environmental monitoring due to its long range and low power needs. However, the low data rate makes it less suitable for applications requiring high-resolution image or video transmission.
  • Cellular Networks (4G/LTE, 5G)—These provide seamless connectivity, especially for mobile IoT devices. However, high cost and energy consumption make them impractical for many large-scale IoT applications.
  • Zigbee—This offers very low power consumption and cost and is well-suited for mesh networks in local IoT setups. However, its shorter range compared with LoRa and cellular makes it unsuitable for high-data applications such as image or video transmission.
No single communication technology meets all IoT requirements. Studies highlight trade-offs between long-range connectivity, power efficiency, and data transfer speed. Hybrid communication approaches, for example, combining LoRa for low-power sensing and Wi-Fi for bulk data uploads, can optimise performance. Table 5 presents a comparison of the connectivity options.
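The hybrid strategy described above can be sketched as a simple transport-selection rule: small detection alerts always fit the low-power link, while bulk payloads wait for a high-bandwidth link. The payload threshold and link names below are illustrative assumptions (the 222-byte figure reflects a typical LoRaWAN payload limit at high data rates):

```python
# Minimal sketch of a hybrid LoRa/Wi-Fi transmission policy, assuming a
# node that emits small detection alerts and occasional bulk image data.

LORA_MAX_PAYLOAD = 222          # bytes; typical LoRaWAN limit at high data rates

def choose_link(payload_size: int, wifi_available: bool) -> str:
    if payload_size <= LORA_MAX_PAYLOAD:
        return "lora"           # small alert: always fits the low-power link
    if wifi_available:
        return "wifi"           # bulk data: use the high-bandwidth link
    return "queue"              # otherwise buffer locally until Wi-Fi returns

print(choose_link(48, wifi_available=False))      # detection alert -> lora
print(choose_link(120_000, wifi_available=True))  # JPEG frame -> wifi
print(choose_link(120_000, wifi_available=False)) # no Wi-Fi -> queue
```

A policy like this lets a field node report bird activity continuously over LoRa while deferring image uploads to opportunistic Wi-Fi windows, combining the strengths listed in Table 5.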

6. IoT Implementation Architectures

The way sensor data are processed and stored significantly impacts system efficiency. The reviewed studies revealed two dominant architectures:

6.1. Cloud-Based Architectures

Sensors send raw data to cloud servers for storage, processing, and analysis for long-term decision-making [92]. This architecture is scalable and supports advanced machine learning models but requires high bandwidth and has high latencies in case of poor connection.

6.2. Edge Computing

Sensors transmit data to a nearby edge device (e.g., Raspberry Pi, NVIDIA Jetson) for local processing before sending key insights to the cloud. This architecture reduces latency and bandwidth usage and is ideal for real-time applications [93], although edge devices have limited processing power and storage capacity. Bird detection systems have increasingly leveraged edge computing to enable real-time, efficient, and autonomous monitoring [94,95,96]. By processing data closer to the source—on the edge—these systems can reduce latency, minimise bandwidth usage, and operate effectively even in remote environments. Various edge devices have been explored in bird detection studies:
  • Microcontrollers (ESP32, ATmega328, etc.)—These low-power devices are ideal for lightweight processing tasks but struggle with deep-learning models due to limited computational capacity [97].
  • Single-board computers (Raspberry Pi, Jetson boards)—More powerful than microcontrollers and commonly used in edge-based implementations, these devices can handle more complex computations but consume more power and are more costly [30].
  • FPGA-based solutions—While highly efficient for real-time processing, FPGA implementations are less common due to their complexity and cost.
Deploying machine learning models at the edge requires balancing performance, power efficiency, and resource constraints. The reviewed studies explored several optimisation strategies:
  • Lightweight models—MobileNet and optimised YOLO variants are frequently used due to their efficiency in object detection tasks.
  • Transfer learning—Adapting pre-trained models allows for reduced computational overhead while maintaining high accuracy [98].
  • Model compression—Techniques such as pruning and quantization reduce model size to suit resource-constrained devices [99].
TinyML—machine learning optimised for microcontrollers—has emerged as a promising approach for bird detection in energy-constrained environments. Studies have explored several techniques to make TinyML viable, including partial convolution and quantization to optimise TinyML models [84] and a lightweight CNN with fewer than 100,000 parameters to reduce memory consumption [90]. To maximise efficiency, TinyML-based bird detection relies on:
  • Pruning and quantization—Reducing model complexity without significantly impacting accuracy.
  • Power-saving techniques—Using sleep modes and efficient RAM allocation in microcontrollers.
  • Local data processing—Minimizing the need for network communication to save power.
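The quantization step listed above can be illustrated with a minimal sketch of symmetric post-training int8 quantization. Frameworks such as TensorFlow Lite perform this automatically during conversion; this NumPy version only shows the underlying affine mapping for a single weight tensor:

```python
import numpy as np

# Minimal sketch of post-training int8 quantization: map float weights to
# int8 with a single symmetric scale factor, shrinking storage 4x.

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 with a symmetric scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)  # toy weight tensor
q, scale = quantize_int8(w)
err = np.max(np.abs(dequantize(q, scale) - w))
print(q.nbytes, w.nbytes)   # int8 tensor is 4x smaller than float32
print(err <= scale / 2 + 1e-8)
```

The worst-case reconstruction error is half the scale factor, which is why quantization typically costs only a small amount of accuracy while cutting both flash and RAM usage on microcontrollers by a factor of four.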

7. Bird Repellence Methods

Bird deterrence is an essential aspect of bird detection, especially in agricultural, conservation, and aviation settings [100]. Various techniques exist to prevent birds from interfering with crops, equipment, or infrastructure. These methods range from simple sound-based solutions to advanced AI-driven adaptive deterrence, as presented in Table 6.
Sound-based deterrents use loud noises, ultrasonic frequencies, or bioacoustic calls (e.g., distress signals of birds) to scare birds away. These work well initially, but birds may habituate over time, reducing the long-term impact. Visual deterrent methods include flashing lights, predator-mimicking drones, bird-scaring lines, and laser systems. These can be highly effective, especially when mimicking natural predators, but practical limitations exist for large areas. Integrated approaches that combine sound, visual, and AI-driven adaptive systems enhance long-term effectiveness. These are promising, but studies suggest more long-term trials are needed.
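The habituation problem noted above motivates adaptive systems that rotate and randomise stimuli. A minimal sketch of such a scheduler is shown below; the deterrent names, weighting scheme, and update factors are illustrative assumptions, not a design from any reviewed study:

```python
import random

# Minimal sketch of an adaptive deterrence scheduler: never repeat the
# previous stimulus, and reweight stimuli by their observed effectiveness.

class DeterrenceScheduler:
    def __init__(self, deterrents, seed=None):
        self.weights = {d: 1.0 for d in deterrents}
        self.rng = random.Random(seed)
        self.last = None

    def next_action(self):
        # Exclude the previous stimulus; sample the rest by current weight.
        options = [d for d in self.weights if d != self.last]
        self.last = self.rng.choices(
            options, weights=[self.weights[d] for d in options])[0]
        return self.last

    def report_outcome(self, deterrent, birds_left: bool):
        # Reinforce stimuli that worked; decay ones the birds ignored.
        self.weights[deterrent] *= 1.2 if birds_left else 0.8

sched = DeterrenceScheduler(
    ["distress_call", "ultrasonic_burst", "laser_sweep", "strobe"], seed=42)
for _ in range(3):
    action = sched.next_action()
    sched.report_outcome(action, birds_left=True)
```

Coupled with a detection model, `report_outcome` could be driven automatically by whether birds are still visible after a stimulus, giving the behaviour-aware adaptation the reviewed studies call for.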

8. Discussion

8.1. Cost-Benefit Trade-Offs

Beyond technical performance, cost-benefit trade-offs are essential when evaluating the practical feasibility of IoT-based bird detection and deterrence, particularly for smallholder farmers. While traditional methods such as scarecrows, noise cannons, and reflective tapes have low upfront costs, their effectiveness diminishes over time due to habituation, and they require constant human input. IoT-based systems, on the other hand, involve a higher initial investment—ranging from 50–200 dollars for sensor-based setups and up to 700 dollars for advanced edge AI models using visual or acoustic detection. However, these solutions provide long-term automation, reduce labour costs, and adapt dynamically to bird presence, with studies reporting crop damage reductions exceeding 70 percent in some cases [30,55,60]. When amortised over multiple seasons, they can yield a positive return on investment for medium- and large-scale farms. For budget-constrained users, however, these systems may still seem prohibitive. This highlights the need for developing ultra-low-cost open-source versions and potentially leveraging community or cooperative financing models.
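The amortisation argument above can be made concrete with a simple payback calculation. All figures in the example (system cost, seasonal crop value, damage rates, labour savings) are illustrative assumptions, not values from the reviewed studies:

```python
# Minimal sketch of the amortised cost comparison: how many seasons of
# avoided crop losses are needed to cover the upfront system cost.

def payback_seasons(system_cost, crop_value_per_season,
                    damage_without=0.20, damage_with=0.05,
                    labour_saved_per_season=0.0):
    """Seasons needed for avoided losses to cover the upfront cost."""
    saved = crop_value_per_season * (damage_without - damage_with)
    return system_cost / (saved + labour_saved_per_season)

# e.g. a $700 edge-AI system on a farm producing $2000 of cereal per
# season, cutting bird damage from 20% to 5%:
print(round(payback_seasons(700, 2000), 2))  # ≈ 2.33 seasons
```

Under these assumed numbers, the system pays for itself within about two to three seasons, which is consistent with the claim that the investment is viable for medium- and large-scale farms but harder to justify for smallholders.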

8.2. Comparative Analysis of Model Architectures and Trade-Offs

While CNNs, YOLO variants, and MobileNet architectures are widely used in bird detection systems, they exhibit different trade-offs depending on use-case requirements. YOLO models such as YOLOv4-tiny and YOLOv5-medium have demonstrated high-speed detection capabilities and favourable mAP scores of up to 94 percent in real-time applications [16,30,33]. However, their performance tends to degrade in scenarios involving small birds, cluttered backgrounds, or occlusions [28,50].
Faster R-CNN models offer higher accuracy and better bounding box localization, making them suitable for tasks that prioritise detection precision over speed [22,31,48]. These models, however, have longer inference times and require more memory, which limits their deployment on edge devices. In contrast, MobileNetV2 and other lightweight CNNs are optimised for low-power environments. MobileNetV2 achieved test accuracies of up to 95 percent, with real-time performance near 80 percent in some studies [58]. This makes them well-suited for embedded systems such as Raspberry Pi or ESP32 boards in edge computing scenarios [39,84]. However, these lightweight models often experience a reduction in accuracy, particularly under variable lighting or fast motion conditions [56].
In addition, deployment settings influence model selection. For example, drone-mounted bird detection systems may require models that can operate at 6–24 FPS with balanced energy efficiency, while fixed-camera traps may afford higher latency for better accuracy. Some studies show that combining models can improve performance when handling multiple bird classes or environmental noise [52,77].
These observations suggest the need for a deployment-aware model selection framework that considers energy consumption, detection accuracy, latency, and ease of training.
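One way to operationalise such a framework is a weighted scoring function over candidate architectures. The candidate profiles and weights below are illustrative assumptions loosely drawn from the trade-offs discussed in this section, not benchmark results:

```python
# Minimal sketch of deployment-aware model selection: score candidate
# architectures against weighted deployment criteria (all values are
# illustrative, normalised to [0, 1]).

CANDIDATES = {
    #                 accuracy, speed (normalised fps), energy frugality
    "yolov5_medium":  {"accuracy": 0.90, "speed": 0.9, "energy": 0.4},
    "faster_rcnn":    {"accuracy": 0.95, "speed": 0.3, "energy": 0.2},
    "mobilenet_v2":   {"accuracy": 0.85, "speed": 0.8, "energy": 0.9},
}

def select_model(weights: dict) -> str:
    """Pick the candidate maximising the weighted sum of criteria."""
    def score(profile):
        return sum(weights[k] * profile[k] for k in weights)
    return max(CANDIDATES, key=lambda name: score(CANDIDATES[name]))

# A battery-powered camera trap values energy over raw accuracy:
print(select_model({"accuracy": 0.3, "speed": 0.2, "energy": 0.5}))
# A mains-powered installation values accuracy above all:
print(select_model({"accuracy": 0.9, "speed": 0.05, "energy": 0.05}))
```

The same numeric profiles yield different winners as the deployment weights shift, which is precisely the behaviour a deployment-aware selection framework should make explicit.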

8.3. Ethical and Ecological Considerations of Sound-Based Repellents

While sound-based repellents are frequently highlighted as eco-friendly alternatives to chemical deterrents [55,60,101], their long-term ecological impacts remain underexplored. Repeated or intense acoustic stimuli—including predator calls or distress signals—can lead to stress-induced behaviours, territory abandonment, or altered migratory patterns in non-target bird species and small mammals [17,18]. Studies in conservation biology have warned that such disturbances may inadvertently reduce biodiversity in areas where acoustic deterrents are overused or misapplied [7,8]. Moreover, some birds may habituate to repeated sounds, rendering the system ineffective over time [30]. Others may displace their feeding or nesting to neighbouring areas, creating ecological imbalances and shifting the problem rather than resolving it.
From a regulatory standpoint, many countries enforce restrictions on noise levels in agricultural and conservation zones. Devices such as bioacoustic emitters or ultrasonic deterrents must comply with national environmental protection laws, especially if deployed near nature reserves or migratory corridors. It is essential for future systems to incorporate ecological assessments that track the behaviour of both target and non-target species over time. Multi-disciplinary research involving ornithologists, ecologists, and engineers is needed to establish ethical usage guidelines and quantify biodiversity impacts through long-term monitoring.

8.4. Challenges in Bird Detection and Repellence Systems

8.4.1. Detecting Small and Distant Birds with High Accuracy

Birds, especially smaller species, are difficult to detect at long distances, making early intervention a challenge. Several studies [29,31,50,56] found that YOLOv3, YOLOv4, and Faster R-CNN models showed reduced performance for small or distant birds, especially against cluttered or aerial backgrounds. YOLOv3-tiny achieved higher speed but compromised accuracy. SMB-YOLOv5 and MFNet-L [28,59] offered better performance in some cases, but high precision across variable distances remains unresolved. These findings confirm the need to select or design model architectures that can scale in both resolution and object size, indicating a gap for future exploration of hybrid or transformer-based models.

8.4.2. Environmental Variability and Real-Time Adaptation

AI models must function reliably under changing environmental conditions, such as varying light, rain, fog, or nighttime scenarios. Studies [30,50] showed performance degradation of YOLOv5 and CenterNet in adverse environments, while others [52] explored sensor fusion—combining radar with camera—to improve reliability. However, a fully robust solution for dynamic environmental conditions has not been established, and the literature suggests more work is needed to evaluate model behaviour across seasons, weather types, and sensor types in real-time conditions.

8.4.3. Energy Efficiency and Computational Constraints on Edge Devices

Low-power deployment is essential for remote bird detection. Studies using MobileNet, TinyML frameworks, and optimised CNNs [55,84,90] achieved good accuracy, above 90 percent, on microcontrollers like Arduino and STM32. However, these models required significant compression, pruning, or quantization to run efficiently. Trade-offs between model depth and inference time remain, and few studies successfully implemented multi-modal real-time detection within the memory limits of ultra-low-power devices.

8.4.4. Managing Data Collection, Storage, and Transmission

While large datasets improve model accuracy, most reviewed studies relied on custom datasets [28,29,50], and only a few used open-source options like COCO, Kaggle, or CUB-200-2011. This makes it difficult to benchmark models or validate across regions and species. Furthermore, high-resolution image and acoustic data require robust communication pipelines. Only limited efforts [52,53] adopted hybrid communication architectures, suggesting a need for better integration of low-bandwidth transmission strategies in edge-AI applications.

8.4.5. Reducing False Positives and Enhancing Species-Specific Identification

False positives remain a challenge, especially in distinguishing birds from drones, insects, or moving foliage. Studies [22,56,60] reported high accuracy in general object detection but noted issues with species-specific identification and false alarms. BirdNET-based models [78,85] showed better precision in classifying specific species from audio inputs. However, there is still a performance gap for real-time, multi-species, and multi-modal classification, particularly in open-field environments where noise and background interference are common.

8.5. Opportunities with AI in Bird Detection

8.5.1. Deploying Low-Power, AI-Driven Edge Computing Solutions

TinyML and low-power AI frameworks have enabled successful deployments on microcontrollers such as the STM32 and Arduino Nano BLE. DenseNet201 and lightweight CNNs used in [55,84,88] demonstrated accuracies above 90 percent on low-resource hardware. These advances show that real-time bird detection at the edge in remote agricultural zones is increasingly viable, though further optimisation for multi-sensor input is still needed.

8.5.2. Multi-Sensor Fusion for Enhanced Detection Accuracy

Studies [52,53] employing radar-camera fusion and acoustic-visual data integration showed improved detection rates compared to single-sensor systems. These results confirm that sensor fusion is a promising solution for addressing limitations caused by environmental variability, sensor blind spots, or detection latency. Despite this promise, few studies have evaluated long-term deployment costs or robustness across seasons, leaving room for further experimentation.
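A common form of the fusion these studies apply is late (decision-level) fusion: each sensor produces its own confidence, and a weighted combination makes the final call. The weights and threshold below are assumptions chosen for illustration, not values reported in [52,53].

```python
def fuse(conf_by_sensor, weights, threshold=0.5):
    """Weighted average of per-sensor confidences; absent sensors are skipped."""
    num = sum(weights[s] * c for s, c in conf_by_sensor.items())
    den = sum(weights[s] for s in conf_by_sensor)
    score = num / den
    return score, score >= threshold

# Reliability weights reflecting, e.g., a trusted camera and noisier radar/audio.
weights = {"camera": 0.5, "radar": 0.3, "acoustic": 0.2}

# Camera is unsure (fog), but radar and acoustic agree there is a bird.
score, detected = fuse({"camera": 0.35, "radar": 0.9, "acoustic": 0.8}, weights)
```

Normalising by the weights of the sensors actually present lets the same rule degrade gracefully when one modality drops out, which is exactly the blind-spot scenario fusion is meant to cover.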

8.5.3. Adaptive AI Models for Self-Learning and Context Awareness

The ability of models to adapt dynamically to specific environments or bird species remains underexplored. Although some studies applied transfer learning [45,58], most used static, pre-trained models. Adaptive approaches such as on-device learning or federated model updates could improve long-term accuracy in real deployments. The lack of studies on continual learning in this domain presents a notable opportunity for future research.
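One lightweight on-device adaptation scheme consistent with the idea above is to nudge a per-site class prototype toward embeddings of confident local detections using an exponential moving average, so the classifier drifts toward local conditions without cloud retraining. This is a hypothetical sketch, not a method from the reviewed studies.

```python
def ema_update(prototype, embedding, alpha=0.1):
    """Blend a new embedding into the running prototype (alpha = adaptation rate)."""
    return [(1 - alpha) * p + alpha * e for p, e in zip(prototype, embedding)]

proto = [0.0, 1.0, 0.5]                      # initial (pre-trained) prototype
for emb in ([0.2, 0.8, 0.6], [0.4, 0.9, 0.4]):  # embeddings of local detections
    proto = ema_update(proto, emb)
```

Because only a vector update is stored per step, the memory and compute cost stays within MCU budgets, unlike full gradient-based retraining.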

8.5.4. Energy-Efficient Model Optimization for Scalability

Optimisation techniques such as pruning, quantization, and lightweight network design were applied in studies [28,39,84] to enhance edge compatibility. MFNet-L and MobileNetV2 demonstrated high inference speeds with acceptable accuracy, suitable for real-time processing in field conditions. These methods not only enable sustainable, battery-operated deployments but also reduce the environmental and financial costs of wide-scale rollouts.
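Of the techniques named above, magnitude pruning is the easiest to show compactly: zero out the smallest-magnitude weights so the network can be stored sparsely and computed faster. The 50% sparsity target below is an example setting, not a recommendation from the reviewed studies.

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero the fraction `sparsity` of weights with the smallest |w|."""
    n_prune = int(len(weights) * sparsity)
    # threshold at the n_prune-th smallest magnitude
    cutoff = sorted(abs(w) for w in weights)[n_prune - 1] if n_prune else None
    return [0.0 if n_prune and abs(w) <= cutoff else w for w in weights]

w = [0.8, -0.05, 0.3, -0.9, 0.01, 0.4]
pruned = prune_by_magnitude(w, sparsity=0.5)   # three weights zeroed
```

In practice pruning is followed by fine-tuning to recover accuracy, and the surviving weights are stored in a sparse format to realise the memory savings.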

8.6. Future Research Directions

To further advance AI-driven bird detection and repellence systems, future research should focus on:
  • Developing ultra-lightweight, high-accuracy AI models—Improving TinyML capabilities to deliver high accuracy with reduced computational load.
  • Enhancing automated data collection and labelling—Creating standardised, open-source datasets for training and benchmarking bird detection models. Future datasets for bird detection and repellence systems should include at least 500–1000 samples per bird species to support balanced model training and avoid overfitting. For multi-class detection, datasets should represent a diverse mix of regional and migratory species under varying environmental conditions (e.g., lighting, weather, background clutter). Visual data should be collected at a minimum resolution of 640 × 480 pixels, with high-resolution options (e.g., 1280 × 720) preferred for detecting smaller or more distant birds. For acoustic data, we recommend a sampling rate of at least 44.1 kHz, segmented into labelled clips of 5–10 s, to balance memory efficiency and signal richness. Including location metadata, species labels, call types, and time-of-day stamps will further support multi-context training. We also encourage the inclusion of synchronised visual-acoustic recordings where possible and the development of open-source annotation tools to reduce dataset creation barriers for new researchers.
  • Designing self-learning AI models—Implementing on-device adaptation to reduce reliance on cloud retraining and improve real-time responsiveness.
  • Exploring AI-driven, species-specific repellence techniques—Using behaviour-based deterrence strategies that dynamically adapt to different bird species.
  • Integrating bird detection into broader smart agriculture and urban management systems—Ensuring AI-driven bird monitoring complements existing environmental and precision farming technologies.
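The acoustic dataset recommendation above (44.1 kHz audio segmented into labelled 5–10 s clips) can be sketched as a simple segmentation step. The list of zeros below is a placeholder standing in for real PCM samples.

```python
SAMPLE_RATE = 44_100      # Hz, per the recommendation above
CLIP_SECONDS = 5
CLIP_LEN = SAMPLE_RATE * CLIP_SECONDS   # 220,500 samples per clip

def segment(recording, clip_len=CLIP_LEN):
    """Split a recording into full-length clips; a trailing partial clip is dropped."""
    return [recording[i:i + clip_len]
            for i in range(0, len(recording) - clip_len + 1, clip_len)]

recording = [0] * (SAMPLE_RATE * 12)    # 12 s stand-in recording
clips = segment(recording)              # two 5 s clips; the last 2 s are dropped
```

Each clip would then be paired with its species label, call type, location metadata, and timestamp before being added to the dataset.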

9. Conclusions

This review highlights the growing potential of integrating machine learning and IoT technologies for smart bird detection and repellence. The studies reviewed illustrate progress in applying advanced computer vision and acoustic models for accurate bird identification across diverse environments. The adoption of edge computing and TinyML frameworks further demonstrates the feasibility of deploying real-time, energy-efficient solutions in remote and resource-constrained areas. Multi-modal sensor fusion and adaptive AI-driven repellence strategies offer promising directions for increasing system robustness and effectiveness. Despite these advancements, key challenges remain. These include limited availability of standardised datasets, species-specific detection issues, environmental variability, power constraints, and the need for scalable, low-latency deployment architectures. Addressing these issues will require interdisciplinary collaboration, innovation in low-power AI model design, and the development of open-access datasets tailored to ecological and agricultural contexts. Future research must focus on building intelligent, self-adaptive systems that can evolve with changing environmental conditions and bird behaviours. Integrating these solutions within broader smart agriculture and urban management ecosystems will be critical for sustainable environmental stewardship, improved crop protection, and minimised human-wildlife conflict.

Author Contributions

Conceptualization, S.O.O., E.N., E.T. and M.B.; methodology, S.O.O., E.N., E.T. and M.B.; formal analysis, S.O.O., E.N., E.T. and M.B.; investigation, S.O.O., E.N., E.T. and M.B.; writing—original draft preparation, S.O.O.; writing—review and editing, E.N., E.T. and M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by The African Engineering and Technology Network (Afretec), a pan-African collaboration consisting of technology-centric universities across Africa.

Data Availability Statement

No new data were created.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
IoT: Internet of Things
CNN: Convolutional Neural Network
YOLO: You Only Look Once
ML: Machine Learning
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
FPS: Frames Per Second
mAP: Mean Average Precision
ARU: Autonomous Recording Unit
SVM: Support Vector Machine
VAE: Variational Autoencoder
DTW: Dynamic Time Warping
MCU: Microcontroller Unit
FPGA: Field-Programmable Gate Array
TinyML: Tiny Machine Learning
Wi-Fi: Wireless Fidelity
LoRa: Long Range
BLE: Bluetooth Low Energy
ANN: Artificial Neural Network

References

  1. Bhavana, C.; Yashaswini, M.; Scholar, M.T. Artificial Intelligence Based Birds and Animal Detection and Alert System. 2022. Available online: https://www.ijcrt.org/papers/IJCRT2208058.pdf (accessed on 5 May 2025).
  2. Shahbazi, F.; Shahbazi, S.; Zare, D. Losses in agricultural produce: Causes and effects on food security. Food Energy Secur. 2025, 14, e70086. [Google Scholar] [CrossRef]
  3. Shim, K.; Barczak, A.; Reyes, N.; Ahmed, N. Small mammals and bird detection using IoT devices. In Proceedings of the International Conference Image and Vision Computing New Zealand, Tauranga, New Zealand, 9–10 December 2021. [Google Scholar] [CrossRef]
  4. Aumen, A.; Gagliardi, G.; Kinkead, C.; Nguyen, V.; Smith, K.; Gershenson, J. Influence of the Red-Billed Quelea Bird on Rice Farming in the Kisumu, Kenya Region. In Proceedings of the IEEE Global Humanitarian Technology Conference, Radnor, PA, USA, 23–26 October 2024; pp. 64–71. [Google Scholar] [CrossRef]
  5. Huang, C.; Zhou, K.; Huang, Y.; Fan, P.; Liu, Y.; Lee, T.M. Insights into the coexistence of birds and humans in cropland through meta-analyses of bird exclosure studies, crop loss mitigation experiments, and social surveys. PLoS Biol. 2023, 21, e3002166. [Google Scholar] [CrossRef]
  6. Mirugwe, A.; Nyirenda, J.; Dufourq, E. Automating Bird Detection Based on Webcam Captured Images Using Deep Learning. In Proceedings of the 43rd Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT 2022); Gerber, A., Ed.; EPiC Series in Computing; EasyChair: 2022; Volume 85, pp. 62–76. Available online: https://easychair.org/publications/paper/8gh7 (accessed on 6 May 2025).
  7. Pruteanu, A.; Vanghele, N.; Cujbescu, D.; Nitu, M.; Gageanu, I. Review of Effectiveness of Visual and Auditory Bird Scaring Techniques in Agriculture. In Proceedings of the 22nd International Scientific Conference “Engineering for Rural Development”, Jelgava, Latvia, 24–26 May 2023; Volume 22, pp. 275–281. [Google Scholar] [CrossRef]
  8. Agossou, H.; Assede, E.P.S.; Dossou, P.J.; Biaou, S.H. Effect of bird scaring methods on crop productivity and avian diversity conservation in agroecosystems of Benin. Int. J. Biol. Chem. Sci. 2022, 16, 527–542. [Google Scholar] [CrossRef]
  9. González, H.; Vera, A.; Valle, D. Design of an Artificial Vision System to Detect and Control the Presence of Black Vultures at Airfields. In Proceedings of the International Conference on Augmented Intelligence and Sustainable Systems (ICAISS), Trichy, India, 24–26 November 2022; pp. 589–597. [Google Scholar] [CrossRef]
  10. Albanese, A.; d’Acunto, D.; Brunelli, D. Pest detection for Precision Agriculture based on IoT Machine Learning application. In International Conference on Applications in Electronics Pervading Industry, Environment and Society; Springer: Berlin/Heidelberg, Germany, 2019; pp. 65–72. [Google Scholar]
  11. Pillai, A.S.; Sathvik, S.U.A.; Shibu, N.B.S.; Devidas, A.R. Monitoring Urban Wetland Bird Migrations using IoT and ML Techniques. In Proceedings of the IEEE 9th International Conference for Convergence in Technology (I2CT), Pune, India, 5–7 April 2024. [Google Scholar] [CrossRef]
  12. Cardoso, B.; Silva, C.; Costa, J.; Ribeiro, B. Internet of Things Meets Computer Vision to Make an Intelligent Pest Monitoring Network. Appl. Sci. 2022, 12, 9397. [Google Scholar] [CrossRef]
  13. Bruggemann, L.; Schutz, B.; Aschenbruck, N. Ornithology meets the IoT: Automatic Bird Identification, Census, and Localization. In Proceedings of the 7th IEEE World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA, 14 June–31 July 2021; pp. 765–770. [Google Scholar] [CrossRef]
  14. Yuliana, M.; Fitrah, I.C.; Hadi, M.Z.S. Intelligent Bird Detection and Repeller System in Rice Field Based on Internet of Things. In Proceedings of the IEEE International Conference on Communication, Networks and Satellite (COMNETSAT), Malang, Indonesia, 23–25 November 2023; pp. 615–621. [Google Scholar] [CrossRef]
  15. Mpouziotas, D.; Karvelis, P.; Stylios, C. Advanced Computer Vision Methods for Tracking Wild Birds from Drone Footage. Drones 2024, 8, 259. [Google Scholar] [CrossRef]
  16. Selvi, S.S.; Pavithraa, S.; Dharini, R.; Chaitra, E. A Deep Learning Approach to Classify Drones and Birds. In Proceedings of the MysuruCon 2022—IEEE 2nd Mysore Sub Section International Conference, Mysuru, India, 16–17 October 2022. [Google Scholar] [CrossRef]
  17. Chen, Y.C.; Chu, J.F.; Hsieh, K.W.; Lin, T.H.; Chang, P.Z.; Tsai, Y.C. Automatic wild bird repellent system that is based on deep-learning-based wild bird detection and integrated with a laser rotation mechanism. Sci. Rep. 2024, 14, 15924. [Google Scholar] [CrossRef]
  18. Phetyawa, S.; Kamyod, C.; Yooyatiwong, T.; Kim, C.G. Application and Challenges of an IoT Bird Repeller System as a result of Bird Behavior. In Proceedings of the International Symposium on Wireless Personal Multimedia Communications (WPMC), Herning, Denmark, 30 October–2 November 2022; pp. 323–327. [Google Scholar] [CrossRef]
  19. Abdulla, F.A.; Yusuf, Q.Y.; Almosawi, S.M.; Sadeq, M.; Baserrah, S. Bird Hazard Mitigation System (BHMS): Review, design, and implementation. In IET Conference Proceedings; The Institution of Engineering and Technology: Stevenage, UK, 2022; Volume 2022, pp. 301–306. [Google Scholar] [CrossRef]
  20. Srividya, K.; Nagaraj, S.; Puviyarasi, B.; Kumar, T.S.; Rufus, A.R.S.; Sreeja, G. Deeplearning Based Bird Deterrent System for Agriculture. In Proceedings of the 2021 4th International Conference on Computing and Communications Technologies (ICCCT), Chennai, India, 16–17 December 2021; pp. 555–559. [Google Scholar] [CrossRef]
  21. Zhang, X.; Mehta, V.; Bolic, M.; Mantegh, I. Hybrid AI-enabled method for UAS and bird detection and classification. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC); IEEE: Toronto, ON, Canada, 2020; pp. 2803–2807. [Google Scholar] [CrossRef]
  22. Mao, X.; Chow, J.K.; Tan, P.S.; Liu, K.-F.; Wu, J.; Su, Z.; Cheong, Y.H.; Ooi, G.L.; Pang, C.C.; Wang, Y.-H. Domain randomization-enhanced deep learning models for bird detection. Sci. Rep. 2021, 11, 639. [Google Scholar] [CrossRef]
  23. Kumar, S.; Kondaveeti, H.K.; Simhadri, C.G.; Reddy, M.Y. Automatic Bird Species Recognition using Audio and Image Data: A Short Review. In Proceedings of the IEEE InC4 2023—2023 IEEE International Conference on Contemporary Computing and Communications, Bangalore, India, 21–22 April 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  24. Kempelis, A.; Romanovs, A.; Patlins, A. Using Computer Vision and Machine Learning Based Methods for Plant Monitoring in Agriculture: A Systematic Literature Review. In Proceedings of the ITMS 2022—2022 63rd International Scientific Conference on Information Technology and Management Science of Riga Technical University, Riga, Latvia, 6–7 October 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  25. Wang, Y.; Zhou, J.; Zhang, C.; Luo, Z.; Han, X.; Ji, Y.; Guan, J. Bird Object Detection: Dataset Construction, Model Performance Evaluation, and Model Lightweighting. Animals 2023, 13, 2924. [Google Scholar] [CrossRef]
  26. Anusha, P.; Manisai, K. Bird Species Classification Using Deep Learning. In Proceedings of the 2022 International Conference on Intelligent Controller and Computing for Smart Power (ICICCSP 2022), Hyderabad, India, 21–23 July 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  27. Jyothi, N.M.; Tabassum, H.; Sharmila, M.; Chilakapati, N.R.; Pavani, A.; Souza, C.D. AI Model for Bird Species Prediction with Detection of Rare, Migratory and Extinction Birds using ELM Boosted by OBS. In Proceedings of the Winter Summit on Smart Computing and Networks (WiSSCoN 2023), Chennai, India, 15–17 March 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  28. Khan, M.U.; Dil, M.; Alam, M.Z.; Orakazi, F.A.; Almasoud, A.M.; Kaleem, Z.; Yuen, C. SafeSpace MFNet: Precise and Efficient MultiFeature Drone Detection Network. arXiv 2022, arXiv:2211.16785. [Google Scholar] [CrossRef]
  29. Ahmed, A.A.; Nyarko, B. Smart-Watcher: An AI-Powered IoT Monitoring System for Small-Medium Scale Premises. In Proceedings of the 2024 International Conference on Computing, Networking and Communications (ICNC 2024), Big Island, HI, USA, 19–22 February 2024; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2024; pp. 139–143. [Google Scholar] [CrossRef]
  30. Antariksa, M.D.S.; Husodo, A.Y.; Huwae, R.B.; Nugraha, R.A. Design and Development of Smart Farming System for Monitoring and Bird Pest Control Based on Raspberry Pi 4 with Implementation of YOLOv5 Algorithm. In Proceedings of the ICADEIS 2023—International Conference on Advancement in Data Science, E-Learning and Information Systems: Data, Intelligent Systems, and the Applications for Human Life, Bali, Indonesia, 2–3 August 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  31. Acharya, D.; Saqib, M.; Devine, C.; Untiedt, C.; Little, L.R.; Wang, D.; Tuck, G.N. Using deep learning to automate the detection of bird scaring lines on fishing vessels. Biol. Conserv. 2024, 296, 110713. [Google Scholar] [CrossRef]
  32. Alqaysi, H.; Fedorov, I.; Qureshi, F.Z.; O’nils, M. A temporal boosted yolo-based model for birds detection around wind farms. J. Imaging 2021, 7, 227. [Google Scholar] [CrossRef]
  33. Dorrer, M.G.; Alekhina, A.E. Solving the problem of biodiversity analysis of bird detection and classification in the video stream of camera traps. In Proceedings of the E3S Web of Conferences, Bangkok, Thailand, 9–11 June 2023; EDP Sciences: Les Ulis, France, 2023. [Google Scholar] [CrossRef]
  34. Hentati-Sundberg, J.; Olin, A.B.; Reddy, S.; Berglund, P.A.; Svensson, E.; Reddy, M.; Kasarareni, S.; Carlsen, A.A.; Hanes, M.; Kad, S.; et al. Seabird surveillance: Combining CCTV and artificial intelligence for monitoring and research. Remote Sens. Ecol. Conserv. 2023, 9, 568–581. [Google Scholar] [CrossRef]
  35. Chen, Y.; Liu, Y.; Wang, Z.; Lu, J. Research on Airport Bird Recognition Based on Deep Learning. In Proceedings of the 2022 IEEE 22nd International Conference on Communication Technology (ICCT), Nanjing, China, 11–14 November 2022; pp. 1458–1462. [Google Scholar] [CrossRef]
  36. Brown, R.N.; Brown, D.H. Robotic Laser Scarecrows: A Tool for Controlling Bird Damage in Sweet Corn. Crop Protection 2021, 146, 105652. Available online: https://www.sciencedirect.com/science/article/pii/S0261219421001228 (accessed on 6 May 2025). [CrossRef]
  37. Sun, Z.; Hua, Z.; Li, H.; Zhong, H. Flying Bird Object Detection Algorithm in Surveillance Video Based on Motion Information. 2023. Available online: http://arxiv.org/abs/2301.01917 (accessed on 8 May 2025).
  38. Alzadjail, N.S.H.; Balasubaramainan, S.; Savarimuthu, C.; Rances, E.O. A Deep Learning Framework for Real-Time Bird Detection and Its Implications for Reducing Bird Strike Incidents. Sensors 2024, 24, 5455. [Google Scholar] [CrossRef]
  39. Harini, D.; Sri, K.B.; Durga, M.M.; Brahmaiah, O.V. Crow Detection in Peanut Field Using Raspberry Pi. In Proceedings of the 2023 9th International Conference on Advanced Computing and Communication Systems (ICACCS 2023), Coimbatore, India, 17–18 March 2023; pp. 260–266. [Google Scholar] [CrossRef]
  40. Al-Showarah, S.A.; Al-Qbailat, S.T. Birds Identification System Using Deep Learning. Available online: https://thesai.org/Downloads/Volume12No4/Paper_34-Birds_Identification_System_using_Deep_Learning.pdf (accessed on 25 April 2025).
  41. Song, Q.; Guan, Y.; Guo, X.; Guo, X.; Chen, Y.; Wang, H.; Ge, J.; Wang, T.; Bao, L. Benchmarking wild bird detection in complex forest scenes. Ecol. Inform. 2024, 80, 2. [Google Scholar] [CrossRef]
  42. Samadzadegan, F.; Javan, F.D.; Mahini, F.A.; Gholamshahi, M. Detection and Recognition of Drones Based on a Deep Convolutional Neural Network Using Visible Imagery. Aerospace 2022, 9, 31. [Google Scholar] [CrossRef]
  43. Lin, C.; Wang, J.; Ji, L. An AI-based Wild Animal Detection System and Its Application. Biodivers. Inf. Sci. Stand. 2023, 7, 112456. [Google Scholar] [CrossRef]
  44. Zhao, F.; Wei, R.; Chao, Y.; Shao, S.; Jing, C. Infrared Bird Target Detection Based on Temporal Variation Filtering and a Gaussian Heat-Map Perception Network. Appl. Sci. 2022, 12, 5679. [Google Scholar] [CrossRef]
  45. Alswaitti, M.; Zihao, L.; Alomoush, W.; Alrosan, A.; Alissa, K. Effective classification of birds’ species based on transfer learning. Int. J. Electr. Comput. Eng. (IJECE) 2022, 12, 4172–4184. [Google Scholar] [CrossRef]
  46. Kondaveeti, H.K.; Nithiyasri, P.; Sri, B.S.L.; Jessica, K.H.; Kumar, S.V.S.; Gopi, S.C. Bird Species Recognition using Deep Learning. In Proceedings of the 2023 3rd International Conference on Artificial Intelligence and Signal Processing (AISP 2023), Vijayawada, India, 18–20 March 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  47. Somoju, P.T.; Sateesh, E. Identification of Bird Species Using Deep Learning. 2022. Available online: www.ijrti.org (accessed on 8 May 2012).
  48. Chalmers, C.; Fergus, P.; Wich, S.; Longmore, S.N. Modelling Animal Biodiversity Using Acoustic Monitoring and Deep Learning. arXiv 2021, arXiv:2103.07276. [Google Scholar] [CrossRef]
  49. Segura-Garcia, J.; Sturley, S.; Arevalillo-Herraez, M.; Alcaraz-Calero, J.M.; Felici-Castell, S.; Navarro-Camba, E.A. 5G AI-IoT System for Bird Species Monitoring and Song Classification. Sensors 2024, 24, 3687. [Google Scholar] [CrossRef]
  50. Sanae, F.; Kazutoshi, A.; Norimichi, U. Distant Bird Detection for Safe Drone Flight and Its Dataset. In Proceedings of the 2021 17th International Conference on Machine Vision and Applications (MVA), Aichi, Japan, 25–27 July 2021. [Google Scholar]
  51. Kellenberger, B.; Veen, T.; Folmer, E.; Tuia, D. Deep Learning Enhances the Detection of Breeding Birds in UAV Images. In Proceedings of the EGU General Assembly Conference Abstracts (EGU21), Online, 19–30 April 2021. [Google Scholar] [CrossRef]
  52. Mehta, V.; Dadboud, F.; Bolic, M.; Mantegh, I. A Deep Learning Approach for Drone Detection and Classification Using Radar and Camera Sensor Fusion. In Proceedings of the 2023 IEEE Sensors Applications Symposium (SAS 2023), Ottawa, ON, Canada, 18–20 July 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  53. Zhaoguo, X.; Zhenhua, Z.; Yi, W. Performance Assessment and Optimization of Bird Prevention Devices for Transmission Lines Based on Internet of Things Technology. Appl. Math. Nonlinear Sci. 2024, 9. [Google Scholar] [CrossRef]
  54. Rafa, S.A.; Al-Qfail, Z.M.; Nafea, A.A.; Abd-Hood, S.F.; Al-Ani, M.M.; Alameri, S.A. A Birds Species Detection Utilizing an Effective Hybrid Model. In Proceedings of the 2024 21st International Multi-Conference on Systems, Signals and Devices (SSD 2024), Erbil, Iraq, 22–25 April 2024; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2024; pp. 705–710. [Google Scholar] [CrossRef]
  55. Heo, S.; Baumann, N.; Margelisch, C.; Giordano, M.; Magno, M. Low-cost Smart Raven Deterrent System with Tiny Machine Learning for Smart Agriculture. In Proceedings of the Conference Record—IEEE Instrumentation and Measurement Technology Conference, Kuala Lumpur, Malaysia, 22–25 May 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  56. Kellenberger, B.; Veen, T.; Folmer, E.; Tuia, D. 21000 birds in 4.5 h: Efficient large-scale seabird detection with machine learning. Remote Sens. Ecol. Conserv. 2021, 7, 445–460. [Google Scholar] [CrossRef]
  57. Kharisma, I.L.; Insany, G.P.; Firdaus, A.R.; Nasrulloh, D. Overcoming the Impact of Bird Pests on Rice Yields Using Internet of Things Based YOLO Method. Int. J. Eng. Appl. Technol. 2024, 7, 18–32. [Google Scholar] [CrossRef]
  58. Kondaveeti, H.K.; Sanjay, K.S.; Shyam, K.; Aniruth, R.; Gopi, S.C.; Kumar, S.V.S. Transfer Learning for Bird Species Identification. In Proceedings of the ICCSC 2023—2nd International Conference on Computational Systems and Communication, Thiruvananthapuram, India, 3–4 March 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  59. Liang, H.; Zhang, X.; Kong, J.; Zhao, Z.; Ma, K. SMB-YOLOv5: A Lightweight Airport Flying Bird Detection Algorithm Based on Deep Neural Networks. IEEE Access 2024, 12, 84878–84892. [Google Scholar] [CrossRef]
  60. Mahmood, A.B.; Gregori, S.; Runciman, J.; Warbick, J.; Baskar, H.; Badr, M. UAV Based Smart Bird Control Using Convolutional Neural Networks. In Proceedings of the Canadian Conference on Electrical and Computer Engineering, Halifax, NS, Canada, 18–20 September 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; pp. 89–94. [Google Scholar] [CrossRef]
  61. Marcoň, P.; Janoušek, J.; Pokorný, J.; Novotný, J.; Hutová, E.V.; Širůčková, A.; Čáp, M.; Lázničková, J.; Kadlec, R.; Raichl, P.; et al. A system using artificial intelligence to detect and scare bird flocks in the protection of ripening fruit. Sensors 2021, 21, 4244. [Google Scholar] [CrossRef]
  62. Ritti, A.; Chandrashekhara, J. Detecting Intended Target Birds and Using Frightened Techniques in Crops to Preserve Yield. Int. J. Innov. Res. Eng. Manag. 2024, 12, 24–27. [Google Scholar] [CrossRef]
  63. Schiano, F.; Natter, D.; Zambrano, D.; Floreano, D. Autonomous Detection and Deterrence of Pigeons on Buildings by Drones. IEEE Access 2022, 10, 1745–1755. [Google Scholar] [CrossRef]
  64. Weinstein, B.G.; Garner, L.; Saccomanno, V.R.; Steinkraus, A.; Ortega, A.; Brush, K.; Yenni, G.; McKellar, A.E.; Converse, R.; Lippitt, C.D.; et al. A general deep learning model for bird detection in high resolution airborne imagery. Ecol. Appl. 2021, 32, e2694. [Google Scholar] [CrossRef]
  65. Zhao, S. Bird Movement Recognition Research Based on YOLOv4 Model. In Proceedings of the 2022 4th International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM 2022), Hamburg, Germany, 7–9 October 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; pp. 441–444. [Google Scholar] [CrossRef]
  66. Aher, R.; Gawali, P.; Kalbhor, P.; Mali, B.; Kapde, N.V. Birdy: A Bird Detection System using CNN and Transfer Learning. Int. J. Adv. Res. Sci. Commun. Technol. 2023, 3, 602–604. [Google Scholar] [CrossRef]
  67. Neeli, S.; Guruguri, C.S.R.; Kammara, A.R.A.; Annepu, V.; Bagadi, K.; Chirra, V.R.R. Bird Species Detection Using CNN and EfficientNet-B0. In Proceedings of the 2023 International Conference on Next Generation Electronics (NEleX 2023), Vellore, India, 14–16 December 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  68. Upadhyay, M.; Murthy, S.K.; Raj, A.A.B. Intelligent system for real time detection and classification of aerial targets using CNN. In Proceedings of the 5th International Conference on Intelligent Computing and Control Systems (ICICCS 2021), Madurai, India, 6–8 May 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021; pp. 1676–1681. [Google Scholar] [CrossRef]
  69. Sindhuri, N.S.; Rajani, R. Smart Bird Sanctuary Management Platform using Resnet50. Int. J. Res. Appl. Sci. Eng. Technol. 2023, 11, 4787–4791. [Google Scholar] [CrossRef]
  70. Chaurasia, D.; Patro, B.D.K. Real-time Detection of Birds for Farm Surveillance Using YOLOv7 and SAHI. In Proceedings of the 2023 3rd International Conference on Computing and Information Technology (ICCIT 2023), Tabuk, Saudi Arabia, 13–14 September 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023; pp. 442–450. [Google Scholar] [CrossRef]
  71. Vo, H.-T.; Thien, N.N.; Mui, K.C. Bird Detection and Species Classification: Using YOLOv5 and Deep Transfer Learning Models. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 102. [Google Scholar] [CrossRef]
  72. Mashuk, F.; Samsujjoha; Sattar, A.; Sultana, N. Machine learning approach for bird detection. In Proceedings of the 3rd International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV 2021), Tirunelveli, India, 4–6 February 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021; pp. 818–822. [Google Scholar] [CrossRef]
  73. Chen, X.; Zhang, Z. Optimization Research of Bird Detection Algorithm Based on YOLO in Deep Learning Environment. Int. J. Image Graph. 2024, 2550059. [Google Scholar] [CrossRef]
  74. Shrestha, R.; Glackin, C.; Wall, J.; Cannings, N. Bird Audio Diarization with Faster R-CNN. In Artificial Neural Networks and Machine Learning—ICANN 2021; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2021; pp. 415–426. [Google Scholar] [CrossRef]
  75. Alsaadi, E.M.T.A.; Alzubaidi, A.M.N. Automated bird detection using SSD-mobile net in images. AIP Conf. Proc. 2024, 3097, 050010. [Google Scholar] [CrossRef]
  76. Dere, K.D.; Aher, P. AM-DRCN: Adaptive Migrating Bird Optimization-based Drift-Enabled Convolutional Neural Network for Threat Detection in Internet of Things. In Proceedings of the 5th International Conference on IoT Based Control Networks and Intelligent Systems (ICICNIS 2024), Bengaluru, India, 17–18 December 2024; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2024; pp. 1085–1091. [Google Scholar] [CrossRef]
  77. Pan, S.; Zhao, D.; Zhang, W. CNN-based Multi-model Birdcall Identification on Embedded Devices. In Proceedings of the 5th IEEE International Conference on Smart Internet of Things (SmartIoT 2021), Jeju, Republic of Korea, 13–15 August 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021; pp. 245–251. [Google Scholar] [CrossRef]
  78. Bota, G.; Manzano-Rubio, R.; Catalán, L.; Gómez-Catasús, J.; Pérez-Granados, C. Hearing to the Unseen: AudioMoth and BirdNET as a Cheap and Easy Method for Monitoring Cryptic Bird Species. Sensors 2023, 23, 7176. [Google Scholar] [CrossRef]
  79. Cinkler, T.; Nagy, K.; Simon, C.; Vida, R.; Rajab, H. Two-Phase Sensor Decision: Machine-Learning for Bird Sound Recognition and Vineyard Protection. IEEE Sens. J. 2022, 22, 11393–11404. [Google Scholar] [CrossRef]
  80. Conde, M.V.; Shubham, K.; Agnihotri, P.; Movva, N.D.; Bessenyei, S. Weakly-Supervised Classification and Detection of Bird Sounds in the Wild. A BirdCLEF 2021 Solution. arXiv 2021, arXiv:2107.04878. [Google Scholar]
  81. Isabato, S.D.; Canonaco, G.; Flikkema, P.G.; Roveri, M.; Alippi, C. Birdsong Detection at the Edge with Deep Learning. In Proceedings of the 2021 IEEE International Conference on Smart Computing (SMARTCOMP 2021), Irvine, CA, USA, 23–27 August 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021; pp. 9–16. [Google Scholar] [CrossRef]
  82. Durgun, M. An Acoustic Bird Repellent System Leveraging Edge Computing and Machine Learning Technologies. In Proceedings of the 2023 Innovations in Intelligent Systems and Applications Conference (ASYU), Sivas, Turkey, 11–13 October 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  83. Amenyedzi, D.K.; Kazeneza, M.; Mwaisekwa, I.I.; Nzanywayingoma, F.; Nsengiyumva, P.; Bamurigire, P.; Ndashimye, E.; Vodacek, A. System Design for a Prototype Acoustic Network to Deter Avian Pests in Agriculture Fields. Agriculture 2025, 15, 10. [Google Scholar] [CrossRef]
  84. Huang, Z.; Tousnakhoff, A.; Kozyr, P.; Rehausen, R.; Bießmann, F.; Lachlan, R.; Adjih, C.; Baccelli, E. TinyChirp: Bird Song Recognition Using TinyML Models on Low-Power Wireless Acoustic Sensors. arXiv 2024, arXiv:2407.21453. [Google Scholar]
  85. Dawasari, H.J.A.; Bilal, M.; Moinuddin, M.; Arshad, K.; Assaleh, K. DeepVision: Enhanced Drone Detection and Recognition in Visible Imagery through Deep Learning Networks. Sensors 2023, 23, 8711. [Google Scholar] [CrossRef]
  86. Morales, G.; Vargas, V.; Espejo, D.; Poblete, V.; Tomasevic, J.A.; Otondo, F.; Navedo, J.G. Method for passive acoustic monitoring of bird communities using UMAP and a deep neural network. Ecol. Inform. 2022, 72, 101909. [Google Scholar] [CrossRef]
  87. Ntalampiras, S.; Potamitis, I. Acoustic detection of unknown bird species and individuals. CAAI Trans. Intell. Technol. 2021, 6, 291–300. [Google Scholar] [CrossRef]
  88. Sakhri, A.; Hadji, O.; Bouarrouguen, C.; Maimour, M.; Kouadria, N.; Benyahia, A.; Rondeau, E.; Doghmane, N.; Harize, S. Audio-Visual Low Power System for Endangered Waterbirds Monitoring. IFAC-PapersOnLine 2022, 55, 25–30. [Google Scholar] [CrossRef]
  89. Toenies, M.; Rich, L.N. Advancing bird survey efforts through novel recorder technology and automated species identification. Calif. Fish Wildl. 2021, 107, 56–70. [Google Scholar] [CrossRef]
  90. Tsompos, C.; Pavlidis, V.F.; Siozios, K. Designing a Lightweight Convolutional Neural Network for Bird Audio Detection. In Proceedings of the 2022 Panhellenic Conference on Electronics and Telecommunications (PACET 2022), Tripolis, Greece, 2–3 December 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  91. Verma, R.; Kumar, S. AviEar: An IoT-based Low Power Solution for Acoustic Monitoring of Avian Species. IEEE Sens. J. 2024, 24, 42088–42102. [Google Scholar] [CrossRef]
  92. Elias, A.R.; Golubovic, N.; Krintz, C.; Wolski, R. Where’s the bear?—Automating wildlife image processing using IoT and edge cloud systems. In Proceedings of the 2017 IEEE/ACM 2nd International Conference on Internet-of-Things Design and Implementation (IoTDI 2017), Pittsburgh, PA, USA, 18–21 April 2017; Association for Computing Machinery, Inc.: New York, NY, USA, 2017; pp. 247–258. [Google Scholar] [CrossRef]
  93. Biglari, A.; Tang, W. A Review of Embedded Machine Learning Based on Hardware, Application, and Sensing Scheme. Sensors 2023, 23, 2131. [Google Scholar] [CrossRef] [PubMed]
  94. Höchst, J.; Bellafkir, H.; Lampe, P.; Vogelbacher, M.; Mühling, M.; Schneider, D.; Lindner, K.; Rösner, S.; Schabo, D.G.; Farwig, N.; et al. Bird@Edge: Bird Species Recognition at the Edge. In Networked Systems; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2022; pp. 69–86. [Google Scholar] [CrossRef]
  95. Teterja, D.; Garcia-Rodriguez, J.; Azorin-Lopez, J.; Sebastian-Gonzalez, E.; Walt, R.E.V.; Booysen, M.J. An Image Mosaicing-Based Method for Bird Identification on Edge Computing Devices. In 18th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2023); Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2023; Volume 750, pp. 216–225. [Google Scholar] [CrossRef]
  96. Mahmud, R.; Toosi, A.N. Con-Pi: A Distributed Container-based Edge and Fog Computing Framework. IEEE Internet Things J. 2021, 9, 4125–4138. [Google Scholar] [CrossRef]
  97. Tavares, J.C.C.; Ruiz, L.B. Towards a Novel Edge to Cloud IoMT Application for Wildlife Monitoring using Edge Computing. In Proceedings of the 7th IEEE World Forum on Internet of Things (WF-IoT 2021), New Orleans, LA, USA, 14–31 June 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021; pp. 130–135. [Google Scholar] [CrossRef]
  98. Manna, A.; Upasani, N.; Jadhav, S.; Mane, R.; Chaudhari, R.; Chatre, V. Bird Image Classification Using Convolutional Neural Network Transfer Learning Architectures. Available online: www.ijacsa.thesai.org (accessed on 25 April 2025).
  99. Das, N.; Padhy, N.; Dey, N.; Mukherjee, A.; Maiti, A. Building of an edge enabled drone network ecosystem for bird species identification. Ecol. Inform. 2022, 68, 101540. [Google Scholar] [CrossRef]
  100. Micaelo, E.B.; Lourenço, L.G.P.S.; Gaspar, P.D.; Caldeira, J.M.L.P.; Soares, V.N.G.J. Bird deterrent solutions for crop protection: Approaches, challenges, and opportunities. Agriculture 2023, 13, 774. [Google Scholar] [CrossRef]
  101. Arowolo, M.O.; Fayose, F.T.; Ade-Omowaye, J.A.; Adekunle, A.A.; Akindele, S.O. Design and Development of an Energy-efficient Audio-based Repellent System for Rice Fields. Int. J. Emerg. Technol. Adv. Eng. 2022, 12, 82–94. [Google Scholar] [CrossRef]
Figure 1. Study selection steps.
Figure 2. ML performance comparison.
Table 1. Inclusion and Exclusion Criteria.

| Inclusion Criteria | Exclusion Criteria |
|---|---|
| Published after 2020 | Published before 2020 |
| Describes machine learning models for bird detection/repellence | Focuses on traditional (non-ML) bird control methods |
| Involves IoT-based solutions (e.g., edge computing, smart sensors) | Lacks technical details on ML model architecture |
| Uses image/audio/video-based detection techniques | Uses other detection techniques |
| Provides open or well-documented datasets | Uses proprietary or inaccessible datasets |
Table 2. Data Collection and Processing.

| Data Collection Method | Dataset Type | Preprocessing Techniques |
|---|---|---|
| Image capture [3,29,30] | Custom | Annotation, resizing, OpenCV processing, frame subtraction, contour extraction |
| Video surveillance [31,32,33,34,35,36,37,38] | Custom | Frame extraction, annotation, background subtraction, noise removal, image scaling, data augmentation, classification |
| Video surveillance [39] | COCO | Frame extraction, data augmentation |
| Image collection [40,41] | Custom | Grayscale conversion, feature extraction, motion blur, contrast adjustment |
| Image collection [28,42] | Public datasets | Contrast enhancement, annotation |
| Image collection [43,44] | Multiple datasets | Frame difference, morphology, resizing, standardization |
| Image collection [45,46] | Kaggle dataset | Duplicate removal, cropping, resizing |
| Image collection [47] | CUB-200-2011 | Grayscale conversion, histogram analysis |
| Camera traps [48,49] | Custom | Annotation, conversion to TFRecords |
| Drone-mounted camera [50] | Custom | Patch division, data augmentation |
| Unmanned Aerial Vehicle imagery [51] | Custom | Annotation, orthomosaic creation, orthomosaic division |
| Radar and camera [52] | Custom | Annotation, data fusion, feature extraction |
| Webcam feeds [6] | Custom | No mention found |
| Image and sensor data [53] | No mention found | Feature extraction, data fusion |
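The frame-subtraction and thresholding steps listed above can be illustrated with a minimal sketch. This is not code from any surveyed study; it is a pure-Python stand-in for the OpenCV operations (`cv2.absdiff` followed by `cv2.threshold`) that several systems apply to grayscale frames before contour extraction, with made-up pixel values.

```python
# Illustrative frame-subtraction preprocessing: compare each pixel of the
# current frame against a background frame, mark pixels whose absolute
# difference exceeds a threshold, and count changed pixels as a cheap
# motion cue for downstream detection.

def frame_difference(background, frame, threshold=25):
    """Return a binary change mask (1 = changed pixel) and the changed-pixel count."""
    mask = [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]
    changed = sum(sum(row) for row in mask)
    return mask, changed

# Two tiny grayscale "frames": a bright object appears in the middle column.
background = [[10, 10, 10], [10, 10, 10]]
frame      = [[10, 200, 10], [10, 190, 10]]
mask, changed = frame_difference(background, frame)  # changed == 2
```

In a real pipeline the binary mask would then feed contour extraction to localise the moving object before the classifier runs.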
Table 3. Machine Learning Model Summary.

| Model Architecture | Performance Metrics | Key Findings |
|---|---|---|
| Mask R-CNN [29] | Accuracy: 96.3%, Prediction time: 1.61 s | High accuracy for various object classes, including birds (95.6%) |
| Mask R-CNN with ResNet-101-FPN [17] | Precision: 0.86 with low recall | High precision |
| Faster R-CNN with ResNet50 [22,31] | Detection precision: 0.87 | Effective for BSL detection, performance varies by vessel and conditions |
| VGG-19 with various classifiers [40] | ANN Accuracy: 70.99%, Precision: 0.718, Recall: 0.71, F1-score: 0.708 | ANN outperformed other classifiers, high training time noted |
| YOLOv4 variants [16,32] | mAP: up to 94%, Recall: 96%, F1-score: 94% | Ensemble model showed best performance, challenges with small birds |
| Faster R-CNN with ResNet101 [48] | Accuracy: 96.71%, Sensitivity: 88.79% | High accuracy and sensitivity, challenges with smaller objects |
| YOLOv5 [30] | Processing speed: 0.78–0.8 FPS | Limited processing speed, detection range varies by environment |
| YOLO variants [33] | Precision: up to 0.99, Recall: up to 0.99 | YOLOv3-tiny with comparative modules performed best |
| CenterNet [50] | mAP: 66.72–72.13 | Performance varied with data augmentation, 6 FPS on GPU |
| SSD with MobileNet [39] | mAP: 78%, FPS: 89 | Improved performance with data augmentation |
| Custom CNN [55] | Detection Accuracy: 77%, Average Precision: 87% | Effective for raven detection, low inference latency |
| YOLOv5-medium-960 [34] | Precision: 0.91, Recall: 0.79, F1-score: 0.85 | High performance, real-time inference possible |
| ResNet-18 based CNN [56] | Precision: 90% at 90% recall (Royal Terns) | Varied performance across species, challenges with similar species |
| YOLOv3-320 [57] | 100% accuracy in tests | Perfect detection in controlled tests, real-world performance not specified |
| MultiFeatureNet variants [28] | Precision up to 99.8% for birds | High performance, especially MFNet-L for overall detection |
| MobileNetV2 [58] | Test Accuracy: 95%, Real-time Accuracy: 80% | High accuracy, outperformed other tested architectures |
| SMB-YOLOv5 [59] | Precision: 82.6%, Recall: 71.1%, mAP@50: 77.1% | Real-time detection at 24 FPS |
| CNN (unspecified) [60] | Accuracy: over 98% | High accuracy, ResNet outperformed AlexNet and VGG |
| CNN (unspecified) [61] | Precision: 83.4–100% (varies by class) | High precision for bird and flock detection |
| YOLOv5, YOLOv7, RNN [52] | Accuracy: 98% (drones), 94% (birds) | High accuracy, challenges with false positives for birds |
| Faster R-CNN, SSD variants [6] | mAP: 92.3% (Faster R-CNN with ResNet152) | Faster R-CNN outperformed SSD models |
| YOLOv4-tiny [55] | mAP: 92.04%, FPS: 40 | Good balance of accuracy and speed |
| EfficientNet-B3 | Accuracy: 94.5%, F1-score: 0.91 | Robust classification performance, computationally efficient |
| YOLOv8 [53] | Precision: 94.8%, Recall: 89.5% | Improved real-time detection and accuracy |
| YOLO, ResNet100 [62] | YOLOv3 mAP: 57.9% (COCO test-dev) | Specific bird detection performance not reported |
| YOLOv4 [42] | Overall accuracy: 83%, mAP: 84% | Good performance, challenges with crowded backgrounds |
| Faster R-CNN [63] | mAP: 69.84% (overall) | Effective for pigeon detection, some false negatives |
| Fourier descriptors, YOLO [3] | FD: 83% accuracy, YOLO: 97% accuracy | YOLO more accurate but slower on Raspberry Pi |
| DCNN (unspecified) [47] | Overall accuracy: 80–90% | Competitive performance compared to other approaches |
| Various (Cascade RCNN, YOLO, etc.) [41] | mAP: 0.704 (Cascade RCNN with Swin-T) | Cascade RCNN performed best, challenges with small birds |
| ConvLSTM-PAN, LW-USN [37] | AP50: 0.7089 for FBOD-BMI | Outperformed YOLOv5l, challenges with higher IoU thresholds |
| FBOD-Net [38] | AP: 76.2%, 59.87 FPS | Outperformed several other models, good speed-accuracy balance |
| RetinaNet with ResNet-50 [64] | Recall: >65%, Precision: >50% (general model) | Improved performance with fine-tuning on local data |
| YOLOv4 [65] | Accuracy: 99.13%, 12 FPS | Outperformed Faster R-CNN and CNN in accuracy and speed |
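Because the studies above report a mix of precision, recall, F1-score, and mAP, it helps to keep their relationship explicit. The sketch below computes the three scalar metrics from raw detection counts; the counts are made-up illustration values, not taken from any surveyed study.

```python
# Precision, recall, and F1-score from raw detection counts:
#   precision = TP / (TP + FP)  -- how many alarms were real birds
#   recall    = TP / (TP + FN)  -- how many birds were actually caught
#   F1        = harmonic mean of the two

def detection_metrics(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts: 91 correct detections, 9 false alarms, 21 missed birds.
p, r, f1 = detection_metrics(true_pos=91, false_pos=9, false_neg=21)
# p == 0.91, r == 0.8125: a detector can be precise (few false alarms)
# while still missing many birds; F1 penalises that imbalance.
```

mAP extends the same idea by averaging precision over recall levels and object classes, which is why the table's mAP figures are not directly comparable to single-threshold accuracy numbers.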
Table 4. Acoustic-Based Detection.

| Study Focus | Hardware Used | Approach | Detection Performance |
|---|---|---|---|
| Evaluation of BirdNET for detecting two bird species [78] | AudioMoth | BirdNET (CNN-based) | Precision: 92.6% (Coal Tit), 87.8% (Short-toed Treecreeper) |
| Bird sound classification [48] | No mention found | Multilayer Perceptron (MLP) | Accuracy: 74% |
| Vineyard protection from birds [79] | Raspberry Pi 3B, microphone | Two-phase: SVM and CNN | Accuracy: 96% |
| BirdCLEF 2021 challenge [80] | No mention found | CNN-based ensemble | F1-score: 0.6780 |
| Birdsong detection on IoT devices [81] | STM32 Nucleo H743ZI2 MCU | ToucaNet and BarbNet (CNN-based) | AUC: 0.925 (ToucaNet), 0.853 (BarbNet) |
| Acoustic bird repellent system [82] | Arduino Nano 33 BLE, microphone | DenseNet201 (CNN) | Accuracy: 92.54% |
| Avian pest deterrence [83] | Arduino Nano 33 BLE Sense, XIAO ESP32S3 | Conv1D neural network | Accuracy: 92.99% |
| Bird song recognition on IoT devices [84] | ARM Cortex-M microcontrollers | Various CNN and Transformer models | Accuracy: >90% for best models |
| Avian diversity monitoring [85] | Autonomous Recording Units (ARUs) | BirdNET (ResNet-based) | mAP: 0.791 for single-species recordings |
| Monitoring Eurasian bittern [78] | AudioMoth | BirdNET and Kaleidoscope Pro | Accuracy: 93.7% (BirdNET), 98.4% (Kaleidoscope Pro) |
| Passive acoustic monitoring of bird communities [86] | SM4 Wildlife Acoustics ARUs | CNN (ResNet50) | mAP: 0.97 |
| Detecting novel bird species and individuals [87] | No mention found | Variational Autoencoder (VAE) | FPI: 1.6%, FNI: 0.9% (species detection) |
| Birdcall identification on embedded devices [77] | Jetson Nano | CNN-based multi-model network | Accuracy: 84.9% |
| Endangered birds monitoring [88] | ARM Cortex M3 microcontroller | Dynamic Time Warping (DTW) | No mention found |
| Bird species monitoring and song classification [49] | 5G IoT-based system, ESP32-S3 MCUs | Various CNNs (EfficientNet, MobileNet) | Accuracy: >70% for best models |
| Evaluation of acoustic recorders and BirdNET [89] | AudioMoth, Swift Recorder, SM3BAT, SM Mini | BirdNET (not specified) | Accuracy: 96% |
| Bird audio detection [90] | No mention found | Lightweight CNN | Accuracy: 86.42% |
| Acoustic monitoring of avian species [91] | AviEar (IoT-based wireless sensor node) | No clear mention found | Precision: 99.6%, Recall: 95% |
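On microcontroller-class hardware like that listed above, a cheap energy gate typically runs ahead of the CNN so the classifier only wakes on candidate sounds. The sketch below is an illustrative assumption, not the method of any cited system: it frames the signal and flags frames whose RMS energy exceeds an ambient-noise threshold (the frame length and threshold values are placeholders).

```python
# Energy-based pre-filter for acoustic nodes: split the sampled signal
# into short frames, compute each frame's RMS energy, and flag frames
# loud enough to justify running the (expensive) classifier.

import math

def energetic_frames(samples, frame_len=4, rms_threshold=0.3):
    """Return indices of frames whose RMS energy exceeds the threshold."""
    flagged = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        if rms > rms_threshold:
            flagged.append(i // frame_len)
    return flagged

# Synthetic signal: ambient noise, a loud chirp, ambient noise again.
quiet = [0.01, -0.02, 0.01, 0.0]
chirp = [0.5, -0.6, 0.55, -0.4]
hits = energetic_frames(quiet + chirp + chirp + quiet)  # only chirp frames flagged
```

Gating like this is one reason the MCU deployments in the table can report multi-day battery life despite running CNN inference.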
Table 5. Comparison of Communication Technologies.

| Technology | Data Transfer Speed | Power Consumption | Range | Cost | Suitability for Media (Image/Video) | Stability in Remote Areas |
|---|---|---|---|---|---|---|
| Wi-Fi | High | High | Limited | Medium | High | Medium |
| LoRa | Low | Very Low | Very Long | Low | Poor | High |
| Cellular (4G/5G) | Very High | High | Very Long | High | Excellent | High |
| Zigbee | Moderate | Very Low | Short to Medium | Low | Poor | Medium |
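LoRa's "Poor" media rating can be made concrete with the standard time-on-air formula from Semtech's application note AN1200.13. The sketch below implements that formula; the parameter defaults (SF7, 125 kHz bandwidth, coding rate 4/5, 8-symbol preamble) are common settings assumed here for illustration, not values from any surveyed deployment.

```python
# LoRa time-on-air estimate (Semtech AN1200.13). Even one small image split
# into ~51-byte packets costs hundreds of transmissions, each lasting tens
# to hundreds of milliseconds, which duty-cycle limits make impractical --
# hence LoRa suits detection events/telemetry, not image or video transfer.

import math

def lora_airtime_ms(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                    preamble_syms=8, explicit_header=True, crc=True,
                    low_datarate_opt=False):
    """Approximate on-air time in milliseconds for one LoRa packet."""
    t_sym = (2 ** sf) / bw_hz * 1000        # symbol duration (ms)
    de = 1 if low_datarate_opt else 0
    h = 0 if explicit_header else 1
    payload_syms = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * h)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + payload_syms) * t_sym

short_pkt = lora_airtime_ms(10)                                  # small telemetry packet
full_pkt = lora_airtime_ms(51)                                   # near-maximum SF7 payload
slow_pkt = lora_airtime_ms(51, sf=12, low_datarate_opt=True)     # long-range settings
```

Airtime grows with payload size and roughly doubles per spreading-factor step, which is the quantitative trade behind the "Very Long range / Low speed" pairing in the table.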
Table 6. Automated Bird Deterrent Mechanisms.

| Integration/Repellence Method | Effectiveness Rating | Implementation Complexity | Environmental Impact |
|---|---|---|---|
| Sound-based [30,101] | Moderate | Low | Low to Moderate |
| Sound-based [55] | High (77% detection accuracy) | Moderate | Low |
| Unmanned Aerial Vehicle with ultrasonic [60] | High (>98% accuracy) | High | Low to Moderate |
| AI-triggered servo [57] | High (100% detection in tests) | Moderate | Low |
| Drone-based visual [63] | High (significant reduction in stay time) | High | Low |
| Sound-based [62] | No mention found | Moderate | Low |
| Lasers [17] | Moderate | Moderate | Low to Moderate |
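The control logic behind an AI-triggered deterrent is the same across the mechanisms above: fire only on confident detections, enforce a cooldown so the field is not under constant stimulus, and rotate stimuli to slow habituation. The sketch below is a generic illustration of that pattern; the confidence threshold, cooldown, and stimulus list are assumptions, not taken from any cited system.

```python
# Generic detection-to-deterrence controller: gate on classifier
# confidence, debounce with a cooldown, and cycle through stimuli so
# birds do not habituate to a single cue.

from itertools import cycle

class DeterrentController:
    def __init__(self, confidence_threshold=0.8, cooldown_s=30,
                 stimuli=("distress_call", "predator_call", "strobe")):
        self.threshold = confidence_threshold
        self.cooldown_s = cooldown_s
        self._stimuli = cycle(stimuli)
        self._last_fire = None

    def on_detection(self, confidence, now_s):
        """Return the stimulus to trigger, or None to stay silent."""
        if confidence < self.threshold:
            return None  # ignore low-confidence detections
        if self._last_fire is not None and now_s - self._last_fire < self.cooldown_s:
            return None  # still in cooldown; avoids continuous disturbance
        self._last_fire = now_s
        return next(self._stimuli)

ctl = DeterrentController()
```

A behaviour-aware version, as the review's future-work section suggests, would additionally adapt the threshold, cooldown, and stimulus choice to observed bird responses rather than keeping them fixed.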
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Ooko, S.O.; Ndashimye, E.; Twahirwa, E.; Busogi, M. IoT and Machine Learning for Smart Bird Monitoring and Repellence: Techniques, Challenges, and Opportunities. IoT 2025, 6, 46. https://doi.org/10.3390/iot6030046
