
Search Results (6)

Search Parameters:
Keywords = camera-based system
Page = 2

19 pages, 5675 KB  
Technical Note
Cotton Gin Stand Machine-Vision Inspection and Removal System for Plastic Contamination: Hand Intrusion Sensor Design
by Mathew G. Pelletier, John D. Wanjura, Jon R. Wakefield, Greg A. Holt and Neha Kothari
AgriEngineering 2024, 6(1), 1-19; https://doi.org/10.3390/agriengineering6010001 - 22 Dec 2023
Cited by 2 | Viewed by 2264
Abstract
Plastic contamination in cotton lint poses significant challenges to the U.S. cotton industry, with plastic wrap from John Deere round module harvesters being a primary contaminant. Despite efforts to manually remove this plastic during module unwrapping, some inevitably enters the cotton gin’s processing system. To address this, a machine-vision detection and removal system has been developed. This system uses inexpensive color cameras to identify plastic on the gin stand feeder apron, triggering a mechanism that expels the plastic from the cotton stream. However, the system, composed of 30–50 Linux-based ARM computers, requires substantial effort for calibration and tuning and presents a technological barrier for typical cotton gin workers. This research aims to transition the system to a more user-friendly, plug-and-play model by implementing an auto-calibration function. The proposed function dynamically tracks cotton colors while excluding plastic images that could hinder performance. A critical component of this auto-calibration algorithm is the hand intrusion detector, or “HID”, which is discussed in this paper. In the normal operation of a cotton gin, the gin personnel periodically have to clear the machine, which entails running a stick or their arm/hand under the detection cameras. This results in the system capturing a false positive, which interferes with the ability of auto-calibration algorithms to function correctly. Hence, there is a critical need for an HID to remove these false positives from the record. The anticipated benefits of the auto-calibration function include reduced setup and maintenance overhead, less reliance on skilled personnel, and enhanced adoption of the plastic removal system within the cotton ginning industry. Full article
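The color-tracking idea behind the auto-calibration function can be illustrated with a minimal sketch (this is not the authors' implementation, and all names below are hypothetical): model the cotton background as a per-channel Gaussian and flag pixels that fall well outside it as candidate plastic.

```python
import numpy as np

def fit_cotton_model(samples, k=3.0):
    """Fit a per-channel mean/std from known-clean cotton pixels (N x 3 array)."""
    mean = samples.mean(axis=0)
    std = samples.std(axis=0) + 1e-6  # avoid division by zero
    return mean, std, k

def plastic_mask(pixels, model):
    """Flag pixels whose color lies more than k sigma outside the cotton distribution."""
    mean, std, k = model
    z = np.abs((pixels.astype(float) - mean) / std)
    return (z > k).any(axis=-1)
```

In this simplified view, the hand intrusion detector's job is to keep frames containing hands or sticks out of the `samples` used by `fit_cotton_model`, so the tracked distribution drifts with cotton color only.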

13 pages, 1690 KB  
Article
Using Natural Head Movements to Continually Calibrate EOG Signals
by Jason R. Nezvadovitz and Hrishikesh M. Rao
J. Eye Mov. Res. 2022, 15(5), 1-13; https://doi.org/10.16910/jemr.15.5.6 - 30 Dec 2022
Cited by 2 | Viewed by 894
Abstract
Electrooculography (EOG) is the measurement of eye movements using surface electrodes adhered around the eye. EOG systems can be designed to have an unobtrusive form-factor that is ideal for eye tracking in free-living over long durations, but the relationship between voltage and gaze direction requires frequent re-calibration as the skin-electrode impedance and retinal adaptation vary over time. Here we propose a method for automatically calibrating the EOG-gaze relationship by fusing EOG signals with gyroscopic measurements of head movement whenever the vestibulo-ocular reflex (VOR) is active. The fusion is executed as recursive inference on a hidden Markov model that accounts for all rotational degrees-of-freedom and uncertainties simultaneously. This enables continual calibration using natural eye and head movements while minimizing the impact of sensor noise. No external devices like monitors or cameras are needed. On average, our method’s gaze estimates deviate by 3.54° from those of an industry-standard desktop video-based eye tracker. Such discrepancy is on par with the latest mobile video eye trackers. Future work is focused on automatically detecting moments of VOR in free-living. Full article
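The core fusion idea — during VOR the eyes counter-rotate against the head, so gyroscope data acts as ground truth for the EOG scale — can be sketched far more simply than the paper's hidden Markov model. The function below is a hypothetical one-axis least-squares illustration, not the published method:

```python
import numpy as np

def estimate_eog_gain(eog_v, head_v):
    """During VOR, eye velocity ~ -head velocity.

    Fit the deg-per-volt gain that minimizes || gain * eog_v + head_v ||^2
    over matched samples of EOG rate (volts/s) and gyro head rate (deg/s).
    """
    return -np.dot(eog_v, head_v) / np.dot(eog_v, eog_v)
```

Re-running this fit on every detected VOR episode gives the continual re-calibration the abstract describes, without monitors or cameras.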

46 pages, 2047 KB  
Review
Indoor Navigation—User Requirements, State-of-the-Art and Developments for Smartphone Localization
by Günther Retscher
Geomatics 2023, 3(1), 1-46; https://doi.org/10.3390/geomatics3010001 - 27 Dec 2022
Cited by 26 | Viewed by 9628
Abstract
A variety of positioning systems have emerged for indoor localization which are based on several system strategies, location methods, and technologies while using different signals, such as radio frequency (RF) signals. Demands regarding positioning in terms of performance, robustness, availability and positioning accuracies are increasing. The overall goal of indoor positioning is to provide GNSS-like functionality in places where GNSS signals are not available. Analysis of the state-of-the-art indicates that although a lot of work is being done to combine both the outdoor and indoor positioning systems, there are still many problems and challenges to be solved. Most people moving on the city streets and interiors of public facilities have a smartphone, and most professionals working in public facilities or construction sites are equipped with tablets or smartphone devices. If users already have the necessary equipment, they should be provided with further functionalities that will help them in day-to-day life and work. In this review study, user requirements and the state-of-the-art in system development for smartphone localization are discussed. In particular, localization with current and upcoming ‘signals-of-opportunity’ (SoP) for use in mobile devices is the main focus of this paper. Full article
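As one concrete example of how a 'signal-of-opportunity' such as Wi-Fi RSSI is turned into a range estimate, the standard log-distance path-loss model applies (the parameter values below are illustrative defaults, not taken from the review):

```python
def rssi_to_distance(rssi_dbm: float, ref_dbm: float = -40.0, n: float = 2.0) -> float:
    """Log-distance path-loss model.

    ref_dbm: RSSI measured at the 1 m reference distance.
    n: path-loss exponent (~2 in free space, higher indoors).
    """
    return 10 ** ((ref_dbm - rssi_dbm) / (10 * n))
```

Ranges to three or more access points can then be trilaterated into a position fix, which is one of the smartphone localization strategies the review surveys.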
(This article belongs to the Special Issue New Advances in Indoor Navigation)

14 pages, 2161 KB  
Article
A Single-Camera Gaze Tracking System Under Natural Light
by Feng Xiao, Dandan Zheng, Kejie Huang, Yue Qiu and Haibin Shen
J. Eye Mov. Res. 2018, 11(4), 1-14; https://doi.org/10.16910/jemr.11.4.5 - 20 Oct 2018
Cited by 4 | Viewed by 447
Abstract
Gaze tracking is a human-computer interaction technology, and it has been widely studied in the academic and industrial fields. However, constrained by the performance of the specific sensors and algorithms, it has not been popularized for everyone. This paper proposes a single-camera gaze tracking system under natural light to enable its versatility. The iris center and anchor point are the most crucial factors for the accuracy of the system. The accurate iris center is detected by the simple active contour snakuscule, which is initialized by the prior knowledge of eye anatomical dimensions. After that, a novel anchor point is computed by the stable facial landmarks. Next, second-order mapping functions use the eye vectors and the head pose to estimate the points of regard. Finally, the gaze errors are improved by implementing a weight coefficient on the points of regard of the left and right eyes. The feature position of the iris center achieves an accuracy of 98.87% on the GI4E database when the normalized error is lower than 0.05. The accuracy of the gaze tracking method is superior to state-of-the-art appearance-based and feature-based methods on the EYEDIAP database. Full article
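The "second-order mapping functions" mentioned in the abstract are, in the common feature-based formulation, quadratic polynomials fitted during calibration. A minimal sketch under that assumption (the helper names are hypothetical, and head-pose terms are omitted for brevity):

```python
import numpy as np

def poly2_features(x, y):
    """Second-order terms of an eye vector (x, y)."""
    return np.array([1.0, x, y, x * x, x * y, y * y])

def fit_gaze_mapping(eye_vecs, targets):
    """Least-squares fit of a 2nd-order mapping from eye vectors to screen points."""
    A = np.array([poly2_features(x, y) for x, y in eye_vecs])
    coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coef

def predict_gaze(coef, eye_vec):
    """Map one eye vector to an estimated point of regard."""
    return poly2_features(*eye_vec) @ coef
```

Calibration shows the user a handful of known screen points; the fitted `coef` then maps every subsequent eye vector to a point of regard.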

9 pages, 1938 KB  
Article
Ways of Improving the Precision of Eye Tracking Data: Controlling the Influence of Dirt and Dust on Pupil Detection
by Wolfgang Fuhl, Thomas C. Kübler, Dennis Hospach, Oliver Bringmann, Wolfgang Rosenstiel and Enkelejda Kasneci
J. Eye Mov. Res. 2017, 10(3), 1-9; https://doi.org/10.16910/jemr.10.3.1 - 12 May 2017
Cited by 7 | Viewed by 384
Abstract
Eye-tracking technology has to date been primarily employed in research. With recent advances in affordable video-based devices, the implementation of gaze-aware smartphones, and marketable driver monitoring systems, a considerable step towards pervasive eye-tracking has been made. However, several new challenges arise with the usage of eye-tracking in the wild and will need to be tackled to increase the acceptance of this technology. The main challenge is still related to the usage of eye-tracking together with eyeglasses, which in combination with reflections for changing illumination conditions will make a subject "untrackable". If we really want to bring the technology to the consumer, we cannot simply exclude 30% of the population as potential users only because they wear eyeglasses, nor can we make them clean their glasses and the device regularly. Instead, the pupil detection algorithms need to be made robust to potential sources of noise. We hypothesize that the amount of dust and dirt on the eyeglasses and the eye-tracker camera has a significant influence on the performance of currently available pupil detection algorithms. Therefore, in this work, we present a systematic study of the effect of dust and dirt on pupil detection by simulating various quantities of dirt and dust on eyeglasses. Our results show (1) an overall high robustness to dust in an off-focus layer; (2) the vulnerability of edge-based methods to even small in-focus dust particles; and (3) a trade-off between tolerated particle size and particle amount, where a small number of rather large particles showed only a minor performance impact. Full article
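The paper's simulation approach — overlaying synthetic dirt on clean eye images before running pupil detection — can be sketched in a few lines. This is a deliberately crude stand-in for the authors' rendering pipeline; the function and its parameters are hypothetical:

```python
import numpy as np

def add_dust(img, rng, n_particles, size):
    """Overlay opaque square specks to simulate in-focus dust on the lens or glasses."""
    out = img.copy()
    h, w = out.shape
    for _ in range(n_particles):
        y = rng.integers(0, h - size)
        x = rng.integers(0, w - size)
        out[y:y + size, x:x + size] = 255  # bright speck occluding the scene
    return out
```

Sweeping `n_particles` and `size` while measuring pupil-detection error reproduces, in miniature, the particle-size-versus-amount trade-off the abstract reports.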

22 pages, 2829 KB  
Article
Vision-Based Cooperative Pose Estimation for Localization in Multi-Robot Systems Equipped with RGB-D Cameras
by Xiaoqin Wang, Y. Ahmet Şekercioğlu and Tom Drummond
Robotics 2015, 4(1), 1-22; https://doi.org/10.3390/robotics4010001 - 26 Dec 2014
Cited by 17 | Viewed by 9701
Abstract
We present a new vision-based cooperative pose estimation scheme for systems of mobile robots equipped with RGB-D cameras. We first model a multi-robot system as an edge-weighted graph. Then, based on this model, and by using the real-time color and depth data, robots with overlapping fields of view estimate their relative poses pairwise. The system does not need a single common view shared by all robots, and it works in 3D scenes without any specific calibration pattern or landmark. The proposed scheme distributes working loads evenly in the system, hence it is scalable and the computing power of the participating robots is efficiently used. The performance and robustness were analyzed on both synthetic and experimental data in different environments over a range of system configurations with varying numbers of robots and poses. Full article
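The pairwise-pose idea generalizes by composing relative transforms along edges of the robot graph. A minimal sketch — planar SE(2) instead of the paper's full 3D, with hypothetical helper names — looks like this:

```python
import numpy as np

def pose2d(x, y, theta):
    """Homogeneous SE(2) transform for a planar robot pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def chain(transforms):
    """Compose pairwise relative poses along a path in the robot graph."""
    T = np.eye(3)
    for t in transforms:
        T = T @ t
    return T
```

Given pairwise estimates A→B and B→C, `chain` yields A→C, which is how a robot with no direct shared view of C can still be localized relative to it. Edge weights on the graph would let the system prefer the most reliable such path.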
(This article belongs to the Special Issue Coordination of Robotic Systems)
