Proposed Smart Monitoring System for the Detection of Bee Swarming
Abstract
1. Introduction
2. Related Work on Beehive-Condition-Monitoring Products
3. Proposed Monitoring System
3.1. Beehive-Monitoring Node
- The camera module component. The camera module is responsible for the acquisition of still images inside the beehive. It is placed on a plastic frame covered entirely with a smooth plastic surface to avoid being built on or waxed by bees. The camera used is a 5 MPixel module with a fish-eye lens (180–200° view angle), integrated LEDs (brightness controlled by the MCU), and an adjustable focus distance, connected directly to the ARM board over the MIPI CSI-2 interface via a 15-pin FFC cable. The camera module can take ultra-wide HD images at a 2592 × 1944 resolution and, at its maximum resolution, achieves frame rates of 2 frames/s on single-core ARM, 8 frames/s on quad-core ARM, and 23 frames/s on octa-core ARM devices (a minimal capture sketch follows this component list)
- The microprocessor control unit, responsible for storing camera snapshots and uploading them to the cloud (for Version 1 end node devices), or for taking camera snapshots and running the deep-learning detection algorithm locally (for Version 2 end node devices)
- The network transponder device, which is either a UART-connected WiFi transponder (for Version 1 nodes) or an ARM microprocessor with an SPI-connected LoRaWAN transponder (for Version 2 nodes)
- The power component, which includes a 20 W/12 V PV panel connected directly to a 12 V-9 Ah lead-acid SLA/AGM battery. The battery is placed under the PV panel on top of the beehive and feeds the ARM MCU unit through a 12 V-to-5 V/2 A buck converter. A deep-discharge battery is used because, given its small capacity, the system might fully discharge it, especially at night or on prolonged cloudy days
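As a companion to the camera component description above, the following minimal Python sketch acquires one full-resolution still and stores it as a JPEG at the Q = 70 quality level used for transmission (see Step 1 in Section 4). This is a sketch only: it assumes the CSI-2 camera is exposed as a V4L2 device reachable through OpenCV, and the device index is an assumption that varies per board.

```python
import cv2

CAMERA_INDEX = 0                      # assumed V4L2 index of the CSI-2 camera
FULL_WIDTH, FULL_HEIGHT = 2592, 1944  # maximum still resolution of the 5 MP module

def grab_still(index=CAMERA_INDEX):
    """Acquire a single full-resolution still from the in-hive camera."""
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, FULL_WIDTH)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, FULL_HEIGHT)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    return frame

if __name__ == "__main__":
    image = grab_still()
    # JPEG quality Q = 70 matches the 200-250 KB transmission target of Step 1.
    cv2.imwrite("snapshot.jpg", image, [cv2.IMWRITE_JPEG_QUALITY, 70])
```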
3.2. Beehive Concentrator
3.3. Beehive Quality Resource Management System
3.4. Beehive End Node Application
4. Deep-Learning System Training and Proposed Detection Process
- Step 1—Initial data acquisition and data cleansing: The initial imagery dataset acquired by the beehive-monitoring module is manually analyzed and filtered to eliminate blurred images or images with low resolution and light intensity. The photos taken from the camera module in this experimentation are set to a minimum acquisition size of 0.5 Mpx (800 × 600 px at 300 dpi, 67.7 × 50.8 mm), compressed in the JPEG format with a compression ratio Q = 70, at 200–250 KB each. This is because the authors wanted to experiment with the smallest possible image transmission size (due to the per-GB network provider costs of image transmission for Version 1 devices, or to minimize processing time overheads for Version 2 devices). Similarly, the trained CNN and algorithms used are the most processing-light for portable devices, using a minimum trained image input size of 640 × 640 px (lightly distorted at the image height) obtained with cubic interpolation. The trained Convolutional Neural Network (CNN) is used to address the swarming problem by counting the bee concentration above the bee frames and inside the beehive lid. The detection categories that the authors' classifier uses are as follows (see the class-mapping sketch after this list):
  - Class 0: no bees detected;
  - Class 1: a limited number of bees scattered on the frame or the lid (fewer than 10);
  - Class 2: a small number of bees (less than or equal to 20);
  - Class 3: initial swarm concentration, with a medium number of bees (more than 20 and less than or equal to 50);
  - Class 4: swarming incident, with a high number of bees (more than 50).
  For each class, the number of detected bees was set as the class identifier (the class identifier boundaries can be set arbitrarily in the detection service configuration file). Therefore, the selected initial dataset consists of at least 1000 images per detection class, for a total of 5000 images used for training the CNN;
- Step 2—Image transformation and data annotation: The collected images per class used for training were annotated by hand using the LabelImg tool [48]. Other commonly used annotation tools are Labelbox [49], ImgAnnotation [50], and the Computer Vision Annotation Tool [51], all of which provide an XML annotated output. Image clearness and resolution are equally important when the initial images have different dimensions. Regarding photo clearness, the method used is as follows: a bilateral filter smooths all images using a degree of smoothing sigma = 0.5–0.8 and a small 7 × 7 kernel. Afterward, all photos are scaled to fixed dimensions before insertion into the training network. Scaling is performed either with a cubic interpolation process or with a super-resolution EDSR process [52]. The preparation of the images is based on the dimensions required as input by the selected training algorithm, which match the extent of the initial CNN layer. The image transformation processes were implemented using OpenCV [53] and were also part of the second stage of the detection process (detection service) before the images' input into the CNN engine (see Figure 3 and the preprocessing sketch after this list);
- Step 3—Training process: The training process is based on pre-trained Convolutional Neural Network (CNN) models [54,55], TensorFlow [56] (Version 1), and the use of all available CPU and GPU system resources. To speed up training through parallel execution, a GPU is necessary, along with the installation of the CUDA toolkit, so that the training process utilizes the GPU resources according to the TensorFlow requirements [57]. The CNN model design includes selecting one of the existing pre-trained TensorFlow models, into which our swarming classifier is included as the final classification step. The selected core TensorFlow models used for training our swarming model, and their capabilities, are presented in Table 2. Once the Step 2 annotation process is complete and the pre-trained CNN model is selected, the images are randomly divided into two sets: the training set consists of 80% of the annotated images, and the testing set contains the remaining 20%. A validation set is also formed by randomly taking 20% of the training set (a dataset-split sketch is given after this list);
- Step 4—Detection service process: This process is performed by a detection application installed as a service that loads the CNN inference graph into memory and processes arbitrary images received via HTTP PUT requests from the node Version 1 device. The HTTP PUT method requires that the resource at the requested URI be updated or created using the content enclosed in the PUT message body; thus, if a resource exists at that URI, the message body is considered a new, modified version of that resource. When a PUT request is received, the service initiates the detection process and, from the detection JSON output for that resource, creates a new resource XML response record. Due to the asynchronous nature of the swarming service, the request is also recorded in the BeeQ RMS database so as to be accessible by the BeeQ RMS web panel and mobile phone application. Moreover, the JSON output response, when generated, is also pushed to the Firebase service to be sent as a push notification to the BeeQ RMS mobile phone application [45,58]. Figure 3 analytically illustrates the detection process steps for both Version 1 and Version 2 end nodes (a minimal service sketch follows this list). Step 1 in Figure 3 is the threshold max-contour image selection process, issued only by Version 2 devices as part of the sequential-frames motion detection process instantiated periodically. Upon motion detection, photo frames that include the activity notification contours (for Version 2 devices) or uploaded frames (from Version 1 devices) are transformed through a bilateral filtering transformation with sigma_space and sigma_color parameters equal to 0.75 and a pixel neighborhood of 5 px. After bilateral filter smoothing, the scaling process normalizes the images to the input dimensions of the CNN. For down-scaling or minimum-dimension up-scaling (up to 100 px), cubic interpolation is used, while for large up-scales, the OpenCV super-resolution process (Enhanced Deep Residual Networks for Single Image Super-Resolution) [52] is instantiated on the end node device. Upon CNN image normalization, the photos are fed to the convolutional neural network classifier, which detects the number of bee contours and reports it using XML output image reports, as presented below:

    <detection>
      <bees>Number of bees</bees>
      <varroa>
        <detected>True/False</detected>
        <num>Number of bees</num>
      </varroa>
      <queen>
        <detected>True/False</detected>
      </queen>
      <hornets>
        <detected>True/False</detected>
        <num>Number of hornets</num>
      </hornets>
      <notes>Instructions for dealing with diseases.</notes>
    </detection>

  Apart from bee counting information, the XML image reports also include information on detected bees carrying the Varroa mite. Such detection can be performed using two RGB color masks over the detected bee contours. This functionality is still under validation and is therefore set as future work exploiting the CNN bee counting classifier. Nevertheless, this capability was included in the web interface but not thoroughly tested. Similarly, the automated queen detection functionality (currently performed using the check-status form report in the Bee RMS web application) was also included as a capability of the web detection interface. The Bee RMS classifier is still under data collection and training, since the preliminary trained models used a limited number of images.
The algorithmic process includes a new queen bee detection class and HSV processing of the queen's color to estimate her age. Upon generation of the XML image report, the swarming service loads the report, stores it in the BeeQ RMS database so as to be accessible by the BeeQ RMS web interface, and transforms it into a JSON object to be sent to the Firebase service [45,58]. The BeeQ RMS Android mobile application can receive such push notifications and appropriately notify the beekeeper.
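The class boundaries of Step 1 reduce to a simple thresholding of the detected bee count. The sketch below shows this mapping; the function name is illustrative, and in the authors' system the boundaries are read from the detection service configuration file rather than hard-coded.

```python
def swarming_class(bee_count: int) -> int:
    """Map a detected bee count to the five swarming classes of Step 1."""
    if bee_count == 0:
        return 0   # Class 0: no bees detected
    if bee_count < 10:
        return 1   # Class 1: limited number of scattered bees
    if bee_count <= 20:
        return 2   # Class 2: small number of bees
    if bee_count <= 50:
        return 3   # Class 3: initial swarm concentration
    return 4       # Class 4: swarming incident
```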
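The preprocessing shared by Steps 2 and 4 (bilateral smoothing followed by scaling to the CNN input dimensions) can be sketched with OpenCV as follows. The filter parameters are the detection-time values quoted in Step 4; the EDSR branch requires the opencv-contrib-python package and a locally available EDSR model file, and while the 100 px up-scale threshold follows the text, the 2x scale factor and model path are illustrative assumptions.

```python
from typing import Optional

import cv2

CNN_INPUT = (640, 640)  # (width, height) expected by the trained network

def preprocess(img, edsr_model_path: Optional[str] = None):
    """Smooth a frame and rescale it to the CNN input size."""
    # Bilateral smoothing: sigma_color = sigma_space = 0.75, 5 px neighborhood.
    smoothed = cv2.bilateralFilter(img, d=5, sigmaColor=0.75, sigmaSpace=0.75)
    h, w = smoothed.shape[:2]
    # Up-scales larger than ~100 px go through EDSR super-resolution;
    # down-scaling and small up-scales use cubic interpolation.
    if edsr_model_path and (CNN_INPUT[0] - w > 100 or CNN_INPUT[1] - h > 100):
        sr = cv2.dnn_superres.DnnSuperResImpl_create()
        sr.readModel(edsr_model_path)   # e.g., an EDSR_x2.pb file (assumption)
        sr.setModel("edsr", 2)
        smoothed = sr.upsample(smoothed)
    return cv2.resize(smoothed, CNN_INPUT, interpolation=cv2.INTER_CUBIC)
```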
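The Step 3 dataset split (80% training, 20% testing, with 20% of the training set held out for validation) can be reproduced with a short, framework-agnostic sketch; the function name and fixed seed are illustrative.

```python
import random

def split_dataset(image_paths, seed=42):
    """Split annotated images into training, validation, and testing sets."""
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    cut = int(0.8 * len(paths))          # 80% training, 20% testing
    train, test = paths[:cut], paths[cut:]
    val_cut = int(0.2 * len(train))      # validation: 20% of the training set
    validation, train = train[:val_cut], train[val_cut:]
    return train, validation, test
```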
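Finally, the Step 4 service loop can be illustrated with a minimal HTTP PUT endpoint. This is a sketch only: Flask is assumed as the web framework, the route path is illustrative, run_cnn() stands in for the loaded CNN inference graph (it is hypothetical here), preprocess() is the sketch above, and the database recording plus the Firebase push are indicated by comments.

```python
import cv2
import numpy as np
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/detect/<image_id>", methods=["PUT"])
def detect(image_id):
    """Accept an image in the PUT body and answer with an XML report."""
    data = np.frombuffer(request.get_data(), dtype=np.uint8)
    frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
    if frame is None:
        return Response("unreadable image", status=400)
    bees = run_cnn(preprocess(frame))  # hypothetical inference helper
    xml = f"<detection><bees>{bees}</bees></detection>"
    # In the authors' service the report is also stored in the BeeQ RMS
    # database and forwarded to Firebase as a JSON push notification.
    return Response(xml, mimetype="application/xml")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```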
5. Experimental Scenarios
5.1. Scenario I: End Node Version 1 Detection Systems’ Performance Tests
5.2. Scenario II: End Node Version 2 Detection Systems’ Performance Tests
5.3. Scenario III: CNN Algorithms’ and Models’ Accuracy
5.4. Scenario IV: System Validation for Swarming
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Detection of Images Using the Faster-RCNN and SSD Algorithms with the Jetson Nano
Appendix B. BeeQ RMS Software
Appendix C. BeeQ RMS End Node Devices (Versions 1 and 2)
References
- Abbasi, M.; Yaghmaee, M.H.; Rahnama, F. Internet of Things in agriculture: A survey. In Proceedings of the 2019 3rd International Conference on Internet of Things and Applications (IoT), Heraklion, Greece, 2–4 May 2019; Volume 1, pp. 1–12. [Google Scholar] [CrossRef]
- Farooq, M.S.; Riaz, S.; Abid, A.; Abid, K.; Naeem, M.A. A Survey on the Role of IoT in Agriculture for the Implementation of Smart Farming. IEEE Access 2019, 7, 156237–156271. [Google Scholar] [CrossRef]
- Farooq, M.S.; Riaz, S.; Abid, A.; Umer, T.; Zikria, Y.B. Role of IoT Technology in Agriculture: A Systematic Literature Review. Electronics 2020, 9, 319. [Google Scholar] [CrossRef] [Green Version]
- Zinas, N.; Kontogiannis, S.; Kokkonis, G.; Valsamidis, S.; Kazanidis, I. Proposed Open Source Architecture for Long Range Monitoring. The Case Study of Cattle Tracking at Pogoniani. In Proceedings of the 21st Pan-Hellenic Conference on Informatics; ACM: New York, NY, USA, 2017. [Google Scholar] [CrossRef]
- Kontogiannis, S. An Internet of Things-Based Low-Power Integrated Beekeeping Safety and Conditions Monitoring System. Inventions 2019, 4, 52. [Google Scholar] [CrossRef] [Green Version]
- Mekala, M.S.; Viswanathan, P. A Survey: Smart agriculture IoT with cloud computing. In Proceedings of the 2017 International conference on Microelectronic Devices, Circuits and Systems (ICMDCS), Vellore, India, 10–12 August 2017; Volume 1, pp. 1–7. [Google Scholar] [CrossRef]
- Zabasta, A.; Kunicina, N.; Kondratjevs, K.; Ribickis, L. IoT Approach Application for Development of Autonomous Beekeeping System. In Proceedings of the 2019 International Conference in Engineering Applications (ICEA), Azores, Portugal, 8–11 July 2019; Volume 1, pp. 1–6. [Google Scholar] [CrossRef]
- Flores, J.M.; Gamiz, V.; Jimenez-Marin, A.; Flores-Cortes, A.; Gil-Lebrero, S.; Garrido, J.J.; Hernando, M.D. Impact of Varroa destructor and associated pathologies on the colony collapse disorder affecting honey bees. Res. Vet. Sci. 2021, 135, 85–95. [Google Scholar] [CrossRef] [PubMed]
- Dineva, K.; Atanasova, T. ICT-Based Beekeeping Using IoT and Machine Learning. In Distributed Computer and Communication Networks; Springer International Publishing: Cham, Switzerland, 2018; pp. 132–143. [Google Scholar]
- Olate-Olave, V.R.; Verde, M.; Vallejos, L.; Perez Raymonda, L.; Cortese, M.C.; Doorn, M. Bee Health and Productivity in Apis mellifera, a Consequence of Multiple Factors. Vet. Sci. 2021, 8, 76. [Google Scholar] [CrossRef] [PubMed]
- Braga, A.R.; Gomes, D.G.; Rogers, R.; Hassler, E.E.; Freitas, B.M.; Cazier, J.A. A method for mining combined data from in-hive sensors, weather and apiary inspections to forecast the health status of honey bee colonies. Comput. Electron. Agric. 2020, 169, 105161. [Google Scholar] [CrossRef]
- Jarimi, H.; Tapia-Brito, E.; Riffat, S. A Review on Thermoregulation Techniques in Honey Bees’ (Apis Mellifera) Beehive Microclimate and Its Similarities to the Heating and Cooling Management in Buildings. Future Cities Environ. 2020, 6, 7. [Google Scholar] [CrossRef]
- Peters, J.M.; Peleg, O.; Mahadevan, L. Collective ventilation in honeybee nests. Future Cities Environ. 2019, 16, 20180561. [Google Scholar] [CrossRef] [Green Version]
- Zacepins, A.; Kviesis, A.; Stalidzans, E.; Liepniece, M.; Meitalovs, J. Remote detection of the swarming of honey bee colonies by single-point temperature monitoring. Biosyst. Eng. 2016, 148, 76–80. [Google Scholar] [CrossRef]
- Zacepins, A.; Kviesis, A.; Pecka, A.; Osadcuks, V. Development of Internet of Things concept for Precision Beekeeping. In Proceedings of the 18th International Carpathian Control Conference (ICCC), Sinaia, Romania, 28–31 May 2017; Volume 1, pp. 23–27. [Google Scholar] [CrossRef]
- Gil-Lebrero, S.; Quiles-Latorre, F.J.; Ortiz-Lopez, M.; Sanchez-Ruiz, V.; Gamiz-Lopez, V.; Luna-Rodriguez, J.J. Honey Bee Colonies Remote Monitoring System. Sensors 2017, 17, 55. [Google Scholar] [CrossRef] [Green Version]
- Bellos, C.V.; Fyraridis, A.; Stergios, G.S.; Stefanou, K.A.; Kontogiannis, S. A Quality and disease control system for beekeeping. In Proceedings of the 2021 6th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Preveza, Greece, 24–26 September 2021; pp. 1–4. [Google Scholar] [CrossRef]
- Ntawuzumunsi, E.; Kumaran, S.; Sibomana, L. Self-Powered Smart Beehive Monitoring and Control System (SBMaCS). Sensors 2021, 21, 3522. [Google Scholar] [CrossRef]
- Prost, P.J.; Medori, P. Apiculture, 6th ed.; Intercept Ltd.: Oxford, UK, 1994. [Google Scholar]
- Nikolaidis, I.N. Beekeeping Modern Methods of Intensive Exploitation, 11th ed.; Stamoulis Publications: Athens, Greece, 2005. (In Greek) [Google Scholar]
- Clement, H. Le Traite Rustica de L’apiculture; Psichalos Publications: Athens, Greece, 2017. [Google Scholar]
- He, W.; Zhang, S.; Hu, Z.; Zhang, J.; Liu, X.; Yu, C.; Yu, H. Field experimental study on a novel beehive integrated with solar thermal/photovoltaic system. Sol. Energy 2020, 201, 682–692. [Google Scholar] [CrossRef]
- Kady, C.; Chedid, A.M.; Kortbawi, I.; Yaacoub, C.; Akl, A.; Daclin, N.; Trousset, F.; Pfister, F.; Zacharewicz, G. IoT-Driven Workflows for Risk Management and Control of Beehives. Diversity 2021, 13, 296. [Google Scholar] [CrossRef]
- Terenzi, A.; Cecchi, S.; Spinsante, S. On the Importance of the Sound Emitted by Honey Bee Hives. Vet. Sci. 2020, 7, 168. [Google Scholar] [CrossRef] [PubMed]
- Ferrari, S.; Silva, M.; Guarino, M.; Berckmans, D. Monitoring of swarming sounds in beehives for early detection of the swarming period. Comput. Electron. Agric. 2008, 64, 72–77. [Google Scholar] [CrossRef]
- Nolasco, I.; Terenzi, A.; Cecchi, S.; Orcioni, S.; Bear, H.L.; Benetos, E. Audio-based Identification of Beehive States. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; Volume 1, pp. 8256–8260. [Google Scholar] [CrossRef] [Green Version]
- Zgank, A. Bee Swarm Activity Acoustic Classification for an IoT-Based Farm Service. Sensors 2020, 20, 21. [Google Scholar] [CrossRef] [Green Version]
- Nolasco, I.; Benetos, E. To bee or not to bee: Investigating machine learning approaches for beehive sound recognition. CoRR 2018. Available online: http://xxx.lanl.gov/abs/1811.06016 (accessed on 14 April 2019).
- Liew, L.H.; Lee, B.Y.; Chan, M. Cell detection for bee comb images using Circular Hough Transformation. In Proceedings of the 2010 International Conference on Science and Social Research (CSSR 2010), Kuala Lumpur, Malaysia, 5–7 December 2010; pp. 191–195. [Google Scholar] [CrossRef]
- Baptiste, M.; Ekszterowicz, G.; Laurent, J.; Rival, M.; Pfister, F. Bee Hive Traffic Monitoring by Tracking Bee Flight Paths. 2018. Available online: https://hal.archives-ouvertes.fr/hal-01940300/document (accessed on 5 September 2019).
- Simic, M.; Starcevic, V.; Kezić, N.; Babic, Z. Simple and Low-Cost Electronic System for Honey Bee Counting. In Proceedings of the 28th International Electrotechnical and Computer Science Conference, Ambato, Ecuador, 23–24 September 2019. [Google Scholar]
- Bee-Shop Security Systems: Surveillance Camera for Bees. 2015. Available online: http://www.bee-shop.gr (accessed on 14 June 2020).
- EyeSon Hives Honey Bee Health Monitor. Keltronix. 2018. Available online: https://www.keltronixinc.com/ (accessed on 7 June 2018).
- Belogiannis, T. Zygi Beekeeping Scales with Monitoring Camera Module. 2018. Available online: https://zygi.gr/en (accessed on 10 March 2021).
- Arnia Remote Hive Monitoring System. Better Knowledge for Bee Health. 2017. Available online: https://arnia.co.uk (accessed on 4 June 2018).
- Hive-Tech 2 Crowd Monitoring System for Your Hives. 2019. Available online: https://www.3bee.com/en/crowd/ (accessed on 16 March 2019).
- Hivemind System to Monitor Your Hives to Improve Honey Production. 2017. Available online: https://hivemind.nz/for/honey/ (accessed on 10 March 2021).
- Hudson, T. Easy Bee Counter. 2018. Available online: https://www.instructables.com/Easy-Bee-Counter/ (accessed on 8 September 2020).
- Hudson, T. Honey Bee Counter II. 2020. Available online: https://www.instructables.com/Honey-Bee-Counter-II/ (accessed on 8 September 2020).
- Gomez, K.; Riggio, R.; Rasheed, T.; Granelli, F. Analysing the energy consumption behaviour of WiFi networks. In Proceedings of the 2011 IEEE Online Conference on Green Communications, Online Conference, Piscataway, NJ, USA, 26–29 September 2011; Volume 1, pp. 98–104. [Google Scholar] [CrossRef]
- Nurgaliyev, M.; Saymbetov, A.; Yashchyshyn, Y.; Kuttybay, N.; Tukymbekov, D. Prediction of energy consumption for LoRa based wireless sensors network. Wirel. Netw. 2020, 26, 3507–3520. [Google Scholar] [CrossRef]
- Lavric, A.; Valentin, P. Performance Evaluation of LoRaWAN Communication Scalability in Large-Scale Wireless Sensor Networks. Wirel. Commun. Mob. Comput. 2018, 2018, 6730719. [Google Scholar] [CrossRef] [Green Version]
- Van den Abeele, F.; Haxhibeqiri, J.; Moerman, I.; Hoebeke, J. Scalability Analysis of Large-Scale LoRaWAN Networks in ns-3. IEEE Internet Things J. 2017, 4, 2186–2198. [Google Scholar] [CrossRef] [Green Version]
- Haxhibeqiri, J.; De Poorter, E.; Moerman, I.; Hoebeke, J. A Survey of LoRaWAN for IoT: From Technology to Application. Sensors 2018, 18, 3995. [Google Scholar] [CrossRef] [Green Version]
- Mokar, M.A.; Fageeri, S.O.; Fattoh, S.E. Using Firebase Cloud Messaging to Control Mobile Applications. In Proceedings of the International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE), Khartoum, Sudan, 21–23 September 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Wang, W.; Yang, Y.; Wang, X.; Wang, W.; Li, J. Development of convolutional neural network and its application in image classification: A survey. Opt. Eng. 2019, 58, 1–19. [Google Scholar] [CrossRef] [Green Version]
- Bharati, P.; Pramanik, A. Deep Learning Techniques-R-CNN to Mask R-CNN: A Survey. In Computational Intelligence in Pattern Recognition; Springer: Singapore, 2020; pp. 657–668. [Google Scholar]
- Tzutalin. LabelImg: A Graphical Image Annotation Tool to Label Object Bounding Boxes in Images. 2016. Available online: https://github.com/tzutalin/labelImg (accessed on 20 September 2019).
- Labelbox. Labelbox: The Leading Training Data Platform for Data Labeling. Available online: https://labelbox.com (accessed on 2 June 2021).
- Image Annotation Tool. Available online: https://github.com/alexklaeser/imgAnnotation (accessed on 2 June 2021).
- Computer Vision Annotation Tool (CVAT). 2021. Available online: https://github.com/openvinotoolkit/cvat (accessed on 2 June 2021).
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017; Volume 1, pp. 1132–1140. [Google Scholar] [CrossRef] [Green Version]
- Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools 2000, 25, 120–123. [Google Scholar]
- GitHub-Tensorflow/Models: Models and Examples Built with TensorFlow 1. Available online: https://github.com/tensorflow/models/tree/r1.12.0 (accessed on 15 September 2018).
- GitHub-Tensorflow/Models: Models and Examples Built with TensorFlow 2. 2021. Available online: https://github.com/tensorflow/models (accessed on 12 November 2020).
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: https://www.tensorflow.org/ (accessed on 20 September 2018).
- TensorFlow GPU Support. 2017. Available online: https://www.tensorflow.org/install/gpu?hl=el (accessed on 15 September 2018).
- Moroney, L. The Firebase Realtime Database. In The Definite Guide to Firebase; Apress: Berkeley, CA, USA, 2017; pp. 51–71. [Google Scholar] [CrossRef]
- Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed and accuracy trade-offs for modern convolutional object detectors. arXiv 2017, arXiv:1611.10012. [Google Scholar]
Swarm-monitoring capabilities of related beehive-monitoring systems:

System/Device | Productivity Monitoring | Indirect Population Monitoring Using Cameras | Indirect Population Monitoring Using Sound or Other Sensors
---|---|---|---
Bee-Shop camera kit [32] | ✓ | ✓ | |
HiveMind system [37] | ✓ | ✓ | |
Hive-Tech system [36] | ✓ | ✓ | |
Arnia system [35] | ✓ | ✓ | |
EyeSon Hive system [33] | ✓ | ✓ | |
Zygi [34] | ✓ | ✓ |
Capabilities of the selected pre-trained CNN models:

CNN Model [54,55] | Input Image Size (w × h) (px) | Model Classes | Mean Detection Time per Image (ms) | Model Accuracy (mAP)
---|---|---|---|---
SSD - MobileNet v1 | 300 × 300 | 90 | 30 | 21 |
SSD - MobileNet v2 | 300 × 300 | 90 | 31 | 22 |
SSD - Inception v2 | 300 × 300 | 90 | 42 | 24 |
Faster-RCNN - Inception v2 | 600–1024 × 600–1024 | 90 | 58 | 28 |
CNN Models | Initial Loss Value | Final Loss Value | Total Training Time |
---|---|---|---|
SSD - MobileNet v1 | 11.90 | 0.7025 | 35 h 47 min |
SSD - Inception v2 | 4.095 | 0.8837 | 43 h 15 min |
Faster-RCNN - Inception v2 | 1.438 | 0.0047 | 14 h 27 min |
CNN Models | Load Time (s) | Testing Detection Time (s) | Detection Average Time per Image (s) | Total Time (s) | Memory Usage (MB) |
---|---|---|---|---|---|
SSD - MobileNet v1 | 0.291 | 1.340 | 0.105 | 1.631 | 45.262 |
SSD - Inception v2 | 0.432 | 1.523 | 0.156 | 1.955 | 106.889 |
Faster-RCNN - Inception v2 | 0.368 | 5.358 | 1.079 | 5.726 | 104.327 |
CNN Models | Load Time (s) | Testing Detection Time (s) | Detection Average Time per Image (s) | Total Time (s) | Memory Usage (MB) |
---|---|---|---|---|---|
SSD - MobileNet v1 | 0.309 | 1.335 | 0.087 | 1.644 | 45.262 |
SSD - Inception v2 | 0.431 | 1.361 | 0.109 | 1.792 | 106.889 |
Faster-RCNN - Inception v2 | 0.386 | 2.249 | 0.278 | 2.635 | 104.327 |
CNN Models | Platform | Load Time (s) | Mean CNN Time (s) | Mean ROI Time (s) | Total Time (s) | Mem Usage (MB) | S.F. mAP
---|---|---|---|---|---|---|---
SSD - MobileNet v1 | ARM single-core | 10.97 | 29.297 | 2.955 | 40.267 | 75.924 | 0.01
SSD - MobileNet v1 | ARM quad-core | 10.597 | 22.561 | 2.457 | 33.158 | 75.924 | 0.012
SSD - MobileNet v1 | ARM quad-core + GPU | 7.916 | 33.524 | 6.606 | 41.44 | 81.02 | 0.01
SSD - Inception v2 | ARM single-core | 15.381 | 34.884 | 6.614 | 50.265 | 174.449 | 0.008
SSD - Inception v2 | ARM quad-core | 15.318 | 36.072 | 6.886 | 51.39 | 174.449 | 0.0079
SSD - Inception v2 | ARM quad-core + GPU | 10.614 | 61.106 | 13.416 | 71.72 | 181.08 | 0.0056
Faster-RCNN - Inception v2 | ARM single-core | 14.389 | 184.373 | 41.826 | 198.762 | 166.351 | 0.003
Faster-RCNN - Inception v2 | ARM quad-core | 14.243 | 111.333 | 24.222 | 125.576 | 166.351 | 0.0058
Faster-RCNN - Inception v2 | ARM quad-core + GPU | 9.957 | 69.968 | 15.5 | 79.926 | 172.729 | 0.009
CNN Models | Mean Detection Accuracy (MDA) | CNN Model mAP | x86-64 Single-Core CPU SF | ARM Single-Core CPU SF |
---|---|---|---|---|
SSD - MobileNet v1 | 0.418 | 0.4223 | 0.009 | 0.01 |
SSD - Inception v2 | 0.13 | 0.4083 | 0.001 | 0.002 |
Faster-RCNN - Inception v2 | 0.703 | 0.7308 | 0.006 | 0.003 |