Open Access Article

Toward Joint Acquisition-Annotation of Images with Egocentric Devices for a Lower-Cost Machine Learning Application to Apple Detection

1 Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), Université d’Angers, 62 Avenue Notre Dame du Lac, 49035 Angers, France
2 UMR 1345 Institut de Recherche en Horticulture et Semences (IRHS), INRAe, 42 Rue Georges Morel, 49071 Beaucouzé, France
3 Department of Data Science, École d’Ingénieur Informatique et Environnement (ESAIP), 49124 Angers, France
* Author to whom correspondence should be addressed.
Sensors 2020, 20(15), 4173; https://doi.org/10.3390/s20154173
Received: 9 May 2020 / Revised: 1 July 2020 / Accepted: 24 July 2020 / Published: 27 July 2020
(This article belongs to the Special Issue Low-Cost Sensors and Vectors for Plant Phenotyping)
Since most computer vision approaches are now driven by machine learning, the current bottleneck is the annotation of images. This time-consuming task is usually performed manually after image acquisition. In this article, we assess the value of various egocentric vision approaches for joint acquisition and automatic image annotation, rather than the conventional two-step process of acquisition followed by manual annotation. The approach is illustrated with apple detection in challenging field conditions. With eye-tracking systems, we demonstrate high performance in automatic apple segmentation (Dice 0.85), apple counting (an 88% probability of good detection and a 0.09 true-negative rate), and apple localization (a shift error of fewer than 3 pixels). This is obtained simply by feeding the areas of interest captured by the egocentric devices to standard, non-supervised image segmentation. We especially stress the time savings of using such eye-tracking devices on head-mounted systems to jointly perform image acquisition and automatic annotation: a more than 10-fold gain over classical image acquisition followed by manual image annotation is demonstrated.
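The pipeline summarized in the abstract (a gaze fixation from the egocentric device seeding a standard, non-supervised segmentation, with the result scored against ground truth by the Dice coefficient) can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the intensity-based region-growing segmenter, the synthetic test image, and the `tol` parameter are choices made for the sketch, not the authors' exact method.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel (e.g., a gaze fixation point),
    accepting 4-connected neighbours whose intensity lies within
    `tol` of the seed's intensity. A stand-in for any standard
    non-supervised segmentation seeded by an area of interest."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

def dice(a, b):
    """Dice similarity between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Synthetic example: a bright "apple" disk on a dark background.
img = np.zeros((64, 64), dtype=np.uint8)
yy, xx = np.ogrid[:64, :64]
truth = (yy - 32) ** 2 + (xx - 32) ** 2 <= 12 ** 2
img[truth] = 200

# The gaze point lands on the fruit and seeds the segmentation.
pred = region_grow(img, seed=(32, 32), tol=50)
print(round(dice(pred, truth), 2))
```

On real orchard images, a robust segmenter (e.g., colour-based clustering or flood fill in a perceptual colour space) would replace the toy intensity criterion, but the principle is the same: the egocentric device supplies the seed, so no manual annotation step is needed.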
Keywords: egocentric vision; image annotation; apple detection; eye-tracking
MDPI and ACS Style

Samiei, S.; Rasti, P.; Richard, P.; Galopin, G.; Rousseau, D. Toward Joint Acquisition-Annotation of Images with Egocentric Devices for a Lower-Cost Machine Learning Application to Apple Detection. Sensors 2020, 20, 4173. https://doi.org/10.3390/s20154173

