Open Access Article
Feasibility of Using the Optical Sensing Techniques for Early Detection of Huanglongbing in Citrus Seedlings
Robotics 2017, 6(2), 11; doi:10.3390/robotics6020011
Abstract
A vision sensor was introduced and tested for early detection of citrus Huanglongbing (HLB). This disease is caused by the bacterium Candidatus Liberibacter asiaticus (CLas) and is transmitted by the Asian citrus psyllid. HLB is a devastating disease that has exerted a significant impact on citrus yield and quality in Florida. Unfortunately, no cure has been reported for HLB. Starch accumulates in the chloroplasts of HLB-infected leaves, which causes the mottled blotchy green pattern. Starch rotates the polarization plane of light. A polarized imaging technique was used to detect the polarization rotation caused by the hyper-accumulation of starch as a pre-symptomatic indication of HLB in young seedlings. Citrus seedlings were grown in a room with controlled conditions and exposed to intensive feeding by CLas-positive psyllids for eight weeks. A quantitative polymerase chain reaction was employed to confirm the HLB status of samples. Two datasets were acquired; the first was created one month after the exposure to psyllids and the second two months later. The results showed that, with relatively unsophisticated imaging equipment, four levels of HLB infection could be detected with accuracies of 72%–81%. As expected, increasing the time interval between psyllid exposure and imaging increased the development of symptoms and, accordingly, improved the detection accuracy. Full article

Open Access Article
Binaural Range Finding from Synthetic Aperture Computation as the Head is Turned
Robotics 2017, 6(2), 10; doi:10.3390/robotics6020010
Abstract
A solution to binaural direction finding described in Tamsett (Robotics 2017, 6(1), 3) is a synthetic aperture computation (SAC) performed as the head is turned while listening to a sound. A far-range approximation in that paper is relaxed in this one, and the method is extended to perform SAC as a function of range for estimating the range to an acoustic source. An instantaneous angle λ (lambda) between the auditory axis and the direction to an acoustic source locates the source on a small circle of colatitude (lambda circle) of a sphere symmetric about the auditory axis. As the head is turned, data over successive instantaneous lambda circles are integrated in a virtual field of audition from which the direction to an acoustic source can be inferred. Multiple sets of lambda circles generated as a function of range yield an optimal range at which the circles intersect to best focus at a point in a virtual three-dimensional field of audition, providing an estimate of range. A proof of concept is demonstrated using simulated experimental data. The method enables a binaural robot to estimate not only the direction but also the range to an acoustic source from sufficiently accurate measurements of arrival time/level differences at the antennae. Full article
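The range-from-focus idea can be illustrated with a toy 2D simulation. The sketch below is not the paper's algorithm: it replaces the virtual field of audition with a brute-force grid search over candidate source positions, scoring each candidate by how well its predicted interaural time differences (ITDs) match those measured across head orientations; all names, grid bounds, and the 0.09 m half-span are illustrative assumptions.

```python
import numpy as np

def itd(source, left_ear, right_ear, c=343.0):
    """Interaural time difference (s) for a 2D point source."""
    return (np.linalg.norm(source - left_ear)
            - np.linalg.norm(source - right_ear)) / c

def locate_source(measured, angles, half_span=0.09):
    """Grid-search the 2D position (and hence range) whose predicted
    ITDs best match those measured as the head turns through `angles`."""
    best, best_err = None, np.inf
    for x in np.linspace(-3.0, 3.0, 61):
        for y in np.linspace(0.1, 5.0, 50):
            cand = np.array([x, y])
            err = 0.0
            for m, a in zip(measured, angles):
                axis = np.array([np.cos(a), np.sin(a)])  # auditory axis
                err += (itd(cand, -half_span * axis, half_span * axis) - m) ** 2
            if err < best_err:
                best, best_err = cand, err
    return best
```

With exact (noise-free) ITDs the minimum of the error surface coincides with the true source, which is the discrete analogue of the lambda circles focusing at a point.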

Open Access Article
Visual Place Recognition for Autonomous Mobile Robots
Robotics 2017, 6(2), 9; doi:10.3390/robotics6020009
Abstract
Place recognition is an essential component of autonomous mobile robot navigation. It is used for loop-closure detection to maintain consistent maps, to localize the robot along a route, or in kidnapped-robot situations. Camera sensors provide rich visual information for this task. We compare different approaches for visual place recognition: holistic methods (visual compass and warping), signature-based methods (using Fourier coefficients or feature descriptors, e.g., able for binary-appearance loop-closure evaluation (ABLE)), and feature-based methods (fast appearance-based mapping, FabMap). As new contributions, we investigate whether warping, a successful visual homing method, is suitable for place recognition. In addition, we extend the well-known visual compass to use multiple scale planes, a concept also employed by warping. To achieve tolerance against changing illumination conditions, we examine the NSAD distance measure (normalized sum of absolute differences) on edge-filtered images. To reduce the impact of illumination changes on the distance values, we suggest computing ratios of image distances to normalize these values to a common range. We test all methods on multiple indoor databases, as well as a small outdoor database, using images with constant or changing illumination conditions. ROC analysis (receiver operating characteristic) and the metric distance between best-matching image pairs are used as evaluation measures. Most methods perform well under constant illumination conditions but fail under changing illumination. The visual compass using the NSAD measure on edge-filtered images with multiple scale planes, while being slower than the signature methods, performs best in the latter case. Full article
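A minimal sketch of an NSAD-style comparison on edge-filtered images, assuming one plausible normalization (the paper's exact definition and edge filter may differ; the crude horizontal-gradient filter here is a stand-in):

```python
import numpy as np

def edge_filter(img):
    """Crude edge image: horizontal gradient magnitude."""
    return np.abs(np.diff(img.astype(float), axis=1))

def nsad(img_a, img_b, eps=1e-12):
    """Normalized sum of absolute differences between edge-filtered
    images: 0 for identical images, close to 1 for unrelated ones."""
    a, b = edge_filter(img_a), edge_filter(img_b)
    return np.abs(a - b).sum() / (a.sum() + b.sum() + eps)
```

Because the measure is normalized by the total edge energy of both images, a global brightness scaling of one image changes the value far less than a raw SAD would; the distance-ratio idea in the abstract pushes this further by comparing distances only relative to each other.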

Open Access Article
Robot-Assisted Crowd Evacuation under Emergency Situations: A Survey
Robotics 2017, 6(2), 8; doi:10.3390/robotics6020008
Abstract
In emergency situations, robotic systems can play a key role and save human lives in recovery and evacuation operations. To realize such potential, we have to address many scientific and technical challenges encountered during robotic search and rescue missions. This paper reviews the current state-of-the-art robotic technologies that have been deployed in the simulation of crowd evacuation, including both macroscopic and microscopic models used in simulating a crowd. Existing work on crowd simulation is analyzed and the robots used in crowd evacuation are introduced. Finally, the paper demonstrates how autonomous robots could be effectively deployed in disaster evacuation, as well as search and rescue missions. Full article

Open Access Article
An Optimal and Energy Efficient Multi-Sensor Collision-Free Path Planning Algorithm for a Mobile Robot in Dynamic Environments
Robotics 2017, 6(2), 7; doi:10.3390/robotics6020007
Abstract
There has been remarkable growth in many different real-time systems in the area of autonomous mobile robots. This paper focuses on the collaboration of efficient multi-sensor systems to create new optimal motion planning for mobile robots. The proposed algorithm is based on a new model to produce the shortest and most energy-efficient path from a given initial point to a goal point. The distance and time traveled, in addition to the consumed energy, have an asymptotic complexity of O(n log n), where n is the number of obstacles. Real-time experiments are performed to demonstrate the accuracy and energy efficiency of the proposed motion planning algorithm. Full article

Open Access Article
A New Combined Vision Technique for Micro Aerial Vehicle Pose Estimation
Robotics 2017, 6(2), 6; doi:10.3390/robotics6020006
Abstract
In this work, a new combined vision technique (CVT) is proposed, comprehensively developed, and experimentally tested for stable, precise unmanned micro aerial vehicle (MAV) pose estimation. The CVT combines two measurement methods (multi- and mono-view) based on different constraint conditions. These constraints are considered simultaneously by the particle filter framework to improve the accuracy of visual positioning. The framework, which is driven by an onboard inertial module, takes the positioning results from the visual system as measurements and updates the vehicle state. Moreover, experimental testing and data analysis have been carried out to verify the proposed algorithm, including multi-camera configuration, design and assembly of MAV systems, and the marker detection and matching between different views. Our results indicated that the combined vision technique is very attractive for high-performance MAV pose estimation. Full article
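The particle-filter framework described above can be sketched in a few lines. This is a generic 1D measurement update with systematic resampling, not the paper's multi-view implementation; the Gaussian likelihood, the measurement model `h`, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def pf_update(particles, weights, z, h, noise_std):
    """One particle-filter measurement update: re-weight particles by a
    Gaussian likelihood of measurement z, then resample systematically."""
    pred = np.array([h(p) for p in particles])
    w = weights * np.exp(-0.5 * ((z - pred) / noise_std) ** 2)
    w /= w.sum()
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n  # systematic resampling
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```

In the paper's setting, the inertial module would drive the prediction step between such updates, and the visual positioning results play the role of `z`.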

Open Access Article
Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing
Robotics 2017, 6(1), 5; doi:10.3390/robotics6010005
Abstract
Performing tasks with a robot hand often requires a complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. This paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach advantageously combines classical color and depth image processing techniques with a novel combination of the fast level set method and a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in an RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently of the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated in the control loop of a robot hand, it can contribute to ensuring a stable grasp and a safe manipulation capability that preserves the physical integrity of the object. Full article
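Dynamic time warping is what lets the method compare contours of different lengths. A minimal textbook DTW on 1D sequences (the paper applies the idea to tracked contour data; this standalone sketch is only the core recurrence):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1D sequences of
    possibly different lengths, using absolute difference as local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path may repeat elements of either sequence, a contour sampled at a different resolution (e.g., `[0, 0, 1, 2]` vs. `[0, 1, 2]`) still yields distance zero, which is exactly the length-independence the abstract relies on.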

Open Access Article
Robot-Assisted Therapy for Learning and Social Interaction of Children with Autism Spectrum Disorder
Robotics 2017, 6(1), 4; doi:10.3390/robotics6010004
Abstract
This paper puts forward the potential for designing a parrot-inspired robot and an indirect teaching technique, the adapted model-rival method (AMRM), to help improve the learning and social interaction abilities of children with autism spectrum disorder. The AMRM was formulated by adapting two popular conventional approaches, namely, the model-rival method and the label-training procedure. In our validation trials, we used a semi-autonomous parrot-inspired robot, called KiliRo, to simulate a set of autonomous behaviors. The proposed robot-assisted therapy using AMRM was pilot tested with nine children with autism spectrum disorder for five consecutive days in a clinical setting. We analyzed the facial expressions of the children when they interacted with KiliRo using an automated emotion recognition and classification system, the Oxford emotion API (Application Programming Interface). The results provided some indication that children with autism spectrum disorder appeared attracted and happy to interact with the parrot-inspired robot. Short qualitative interviews with the children’s parents, the pediatrician, and the child psychologist who participated in this pilot study also acknowledged that the proposed parrot-inspired robot and the AMRM may have merit in improving the learning and social interaction abilities of children with autism spectrum disorder. Full article

Open Access Article
Synthetic Aperture Computation as the Head is Turned in Binaural Direction Finding
Robotics 2017, 6(1), 3; doi:10.3390/robotics6010003
Abstract
Binaural systems measure instantaneous time/level differences between acoustic signals received at the ears to determine angles λ between the auditory axis and the directions to acoustic sources. An angle λ locates a source on a small circle of colatitude (a lambda circle) on a sphere symmetric about the auditory axis. As the head is turned while listening to a sound, acoustic energy over successive instantaneous lambda circles is integrated in a virtual/subconscious field of audition. The directions in azimuth and elevation to maxima in integrated acoustic energy, or to points of intersection of lambda circles, are the directions to acoustic sources. This process in a robotic system, or in nature in a neural implementation equivalent to it, delivers its solutions to the aurally informed worldview. The process is analogous to migration applied to seismic profiler data, and to that in synthetic aperture radar/sonar systems. A slanting auditory axis, e.g., possessed by species of owl, leads to the auditory axis sweeping the surface of a cone as the head is turned about a single axis. Thus, the plane in which the auditory axis turns continuously changes, enabling robustly unambiguous directions to acoustic sources to be determined. Full article
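In the far-field, the angle λ that defines a lambda circle follows from the interaural time difference by simple geometry: Δt = (d/c)·cos λ, where d is the ear separation and c the speed of sound. A small sketch (the 0.18 m separation is an illustrative value, not from the paper):

```python
import math

def lateral_angle(delta_t, ear_separation=0.18, c=343.0):
    """Angle λ (rad) between the auditory axis and the source direction,
    from an interaural time difference, far-field approximation:
    delta_t = (d / c) * cos(λ)  =>  λ = arccos(c * delta_t / d)."""
    x = c * delta_t / ear_separation
    return math.acos(max(-1.0, min(1.0, x)))  # clamp against measurement noise
```

A zero time difference gives λ = 90° (the source is broadside, on the median lambda circle), while the maximum difference ±d/c collapses the circle to a point on the auditory axis.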

Open Access Article
Experimental and Simulation-Based Investigation of Polycentric Motion of an Inherent Compliant Pneumatic Bending Actuator with Skewed Rotary Elastic Chambers
Robotics 2017, 6(1), 2; doi:10.3390/robotics6010002
Abstract
To offer a functionality that could not be found in traditional rigid robots, compliant actuators are in development worldwide for a variety of applications and especially for human–robot interaction. Pneumatic bending actuators are a special kind of such actuators. Due to the absence of fixed mechanical axes and their soft behavior, these actuators generally possess a polycentric motion ability. This can be very useful to provide an implicit self-alignment to human joint axes in exoskeleton-like rehabilitation devices. As a possible realization, a novel bending actuator (BA) was developed using patented pneumatic skewed rotary elastic chambers (sREC). To analyze the actuator self-alignment properties, knowledge about the motion of this bending actuator type, the so-called skewed rotary elastic chambers bending actuator (sRECBA), is of high interest and this paper presents experimental and simulation-based kinematic investigations. First, to describe actuator motion, the finite helical axes (FHA) of basic actuator elements are determined using a three-dimensional (3D) camera system. Afterwards, a simplified two-dimensional (2D) kinematic simulation model based on a four-bar linkage was developed and the motion was compared to the experimental data by calculating the instantaneous center of rotation (ICR). The equivalent kinematic model of the sRECBA was realized using a series of four-bar linkages and the resulting ICR was analyzed in simulation. Finally, the FHA of the sRECBA were determined and analyzed for three different specific motions. The results show that the actuator’s FHA adapt to different motions performed and it can be assumed that implicit self-alignment to the polycentric motion of the human joint axis will be provided. Full article

Open Access Editorial
Acknowledgement to Reviewers of Robotics in 2016
Robotics 2017, 6(1), 1; doi:10.3390/robotics6010001
Abstract
The editors of Robotics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016. Full article
Open Access Article
Complete Coverage Path Planning for a Multi-UAV Response System in Post-Earthquake Assessment
Robotics 2016, 5(4), 26; doi:10.3390/robotics5040026
Abstract
This paper presents a post-earthquake response system for rapid damage assessment. In this system, multiple Unmanned Aerial Vehicles (UAVs) are deployed to collect images from the earthquake site and create a response map for extracting useful information. The approach is an extension of the well-known coverage path planning (CPP) problem, based on a grid-pattern decomposition of the map. In addition to some linear strengthening techniques, two mathematical formulations, 4-index and 5-index models, are proposed and coded in GAMS (Cplex solver). They are tested on a number of problems and the results show that the 5-index model outperforms the 4-index model. Moreover, the proposed system could be significantly improved by solver-generated cuts, additional constraints, and variable branching priority extensions. Full article
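On a grid decomposition, the simplest complete-coverage baseline is a boustrophedon (back-and-forth) sweep. The sketch below is only this baseline, not the paper's optimized multi-UAV integer-programming models:

```python
def boustrophedon(rows, cols):
    """Back-and-forth sweep visiting every cell of a rows-by-cols grid
    exactly once, alternating direction on each row."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path
```

The mathematical formulations in the paper effectively search over paths like this one while minimizing cost and splitting the cells among several UAVs.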

Open Access Article
Improving Robot Mobility by Combining Downward-Looking and Frontal Cameras
Robotics 2016, 5(4), 25; doi:10.3390/robotics5040025
Abstract
This paper presents a novel attempt to combine a downward-looking camera and a forward-looking camera for terrain classification in the field of off-road mobile robots. The first camera is employed to identify the terrain beneath the robot. This information is then used to improve the classification of the forthcoming terrain acquired from the frontal camera. This research also shows the usefulness of the Gist descriptor for terrain classification purposes. Physical experiments conducted in different terrains (quasi-planar terrains) and different lighting conditions, confirm the satisfactory performance of this approach in comparison with a simple color-based classifier based only on frontal images. Our proposal substantially reduces the misclassification rate of the color-based classifier (∼10% versus ∼20%). Full article

Open Access Article
A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms
Robotics 2016, 5(4), 24; doi:10.3390/robotics5040024
Abstract
Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and its effect on the results for real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions to include new algorithms and features. It is currently available online for the research community. Full article

Open Access Article
Auto-Calibration Methods of Kinematic Parameters and Magnetometer Offset for the Localization of a Tracked Mobile Robot
Robotics 2016, 5(4), 23; doi:10.3390/robotics5040023
Abstract
This paper describes an automatic calibration procedure adopted to improve the localization of an outdoor mobile robot. The proposed algorithm uses an extended Kalman filter to estimate the main kinematic parameters of the vehicle, such as the wheel radii and the wheelbase, as well as the magnetometer offset. Several trials have been performed to validate the proposed strategy on a tracked electrical mobile robot. The mobile robot is intended to serve as a tool to support humanitarian demining operations. Full article

Open Access Article
Deployment Environment for a Swarm of Heterogeneous Robots
Robotics 2016, 5(4), 22; doi:10.3390/robotics5040022
Abstract
The objective of this work is to develop a framework that can deploy and provide coordination between multiple heterogeneous agents when a swarm robotic system adopts a decentralized approach; each robot evaluates its relative rank among the other robots in terms of travel distance and cost to the goal. Accordingly, robots are allocated to the sub-tasks for which they have the highest rank (utility). This paper provides an analysis of existing swarm control environments and proposes a software environment that facilitates the rapid deployment of multiple robotic agents. The framework (UBSwarm) exploits our utility-based task allocation algorithm. UBSwarm configures the robots and assigns the group a particular task from a set of available tasks. Two major tasks that show the performance of a robotic group composed of heterogeneous agents have been introduced. In the results, a preliminary example with prior knowledge about the experiment shows whether or not the robots are able to accomplish the task. Full article
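The rank-based allocation idea can be sketched as a greedy assignment. This is only one plausible reading of a utility-based scheme, not UBSwarm's actual algorithm; the cost table layout and function name are assumptions:

```python
def allocate(costs):
    """Greedy utility-based allocation: scan (robot, task) pairs from
    cheapest (highest rank) to most expensive; each robot takes at most
    one task and each task goes to at most one robot."""
    assignment, taken = {}, set()
    pairs = sorted((c, r, t) for r, row in costs.items()
                   for t, c in row.items())
    for c, r, t in pairs:
        if r not in assignment and t not in taken:
            assignment[r] = t
            taken.add(t)
    return assignment
```

With costs combining travel distance and cost-to-goal, each sub-task ends up with the robot that ranks highest for it among those still available.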

Open Access Article
Towards an Explanation Generation System for Robots: Analysis and Recommendations
Robotics 2016, 5(4), 21; doi:10.3390/robotics5040021
Abstract
A fundamental challenge in robotics is to reason with incomplete domain knowledge to explain unexpected observations and partial descriptions extracted from sensor observations. Existing explanation generation systems draw on ideas that can be mapped to a multidimensional space of system characteristics, defined by distinctions, such as how they represent knowledge and if and how they reason with heuristic guidance. Instances in this multidimensional space corresponding to existing systems do not support all of the desired explanation generation capabilities for robots. We seek to address this limitation by thoroughly understanding the range of explanation generation capabilities and the interplay between the distinctions that characterize them. Towards this objective, this paper first specifies three fundamental distinctions that can be used to characterize many existing explanation generation systems. We explore and understand the effects of these distinctions by comparing the capabilities of two systems that differ substantially along these axes, using execution scenarios involving a robot waiter assisting in seating people and delivering orders in a restaurant. The second part of the paper uses this study to argue that the desired explanation generation capabilities corresponding to these three distinctions can mostly be achieved by exploiting the complementary strengths of the two systems that were explored. This is followed by a discussion of the capabilities related to other major distinctions to provide detailed recommendations for developing an explanation generation system for robots. Full article

Open Access Article
Towards Bio-Inspired Chromatic Behaviours in Surveillance Robots
Robotics 2016, 5(4), 20; doi:10.3390/robotics5040020
Abstract
The field of robotics is ever growing while at the same time posing enormous challenges. Numerous works have been done in biologically inspired robotics, emulating models, systems, and elements of nature for the purpose of solving traditional robotics problems. Chromatic behaviours are abundant in nature across a variety of living species, achieving camouflage, signaling, and temperature regulation. The ability of these creatures to successfully blend in with their environment and communicate by changing their colour is the fundamental inspiration for our research work. In this paper, we present dwarf-chameleon-inspired chromatic behaviour in the context of an autonomous surveillance robot, “PACHONDHI”. In our experiments, we successfully validated the ability of the robot to autonomously change its colour in relation to the terrain that it is traversing, maximizing detectability to friendly security agents and minimizing exposure to hostile agents, as well as communicating with fellow cooperating robots. Full article

Open Access Article
Terrain Perception in a Shape Shifting Rolling-Crawling Robot
Robotics 2016, 5(4), 19; doi:10.3390/robotics5040019
Abstract
Terrain perception greatly enhances the performance of robots, providing them with essential information on the nature of the terrain being traversed. Several living beings in nature offer interesting inspiration, adopting different gait patterns according to the nature of the terrain. In this paper, we present a novel terrain perception system for our bioinspired robot, Scorpio, to classify the terrain based on visual features and autonomously choose the appropriate locomotion mode. Our Scorpio robot is capable of crawling and rolling locomotion modes, mimicking Cebrennus rechenbergi, a member of the huntsman spider family. Our terrain perception system uses the Speeded Up Robust Features (SURF) description method along with color information. Feature extraction is followed by the Bag of Words (BoW) method and a Support Vector Machine (SVM) for terrain classification. Experiments were conducted with our Scorpio robot to establish the efficacy and validity of the proposed approach. In our experiments, we achieved a recognition accuracy of over 90% across four terrain types, namely grass, gravel, wooden deck, and concrete. Full article
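The bag-of-words step in such a pipeline quantizes each local descriptor (SURF vectors in the paper) against a learned visual vocabulary and builds a word histogram, which is what the SVM actually classifies. A minimal numpy sketch of that quantization, with toy 2D descriptors standing in for real SURF vectors:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and
    return a normalized bag-of-words histogram for the image."""
    # pairwise distances: one row per descriptor, one column per word
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```

In practice the vocabulary comes from k-means clustering of training descriptors, and each terrain class (grass, gravel, wooden deck, concrete) yields a characteristic histogram shape for the SVM to separate.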

Open Access Article
Bio-Inspired Vision-Based Leader-Follower Formation Flying in the Presence of Delays
Robotics 2016, 5(3), 18; doi:10.3390/robotics5030018
Abstract
Flocking starlings at dusk are known for the mesmerizing and intricate shapes they generate, as well as for how fluidly these shapes change. They seem to do this effortlessly. Real-life vision-based flocking has not been achieved in micro-UAVs (micro Unmanned Aerial Vehicles) to date. Towards this goal, we make three contributions in this paper: (i) we use a computational approach to develop a bio-inspired architecture for vision-based Leader-Follower formation flying on two micro-UAVs. We believe that the minimal computational cost of the resulting algorithm makes it suitable for object detection and tracking during high-speed flocking; (ii) we show that, provided delays in the control loop of a micro-UAV are below a critical value, Kalman filter-based estimation algorithms are not required to achieve Leader-Follower formation flying; (iii) unlike previous approaches, we do not use external observers, such as GPS signals or synchronized communication with flock members. These three contributions could be useful in achieving vision-based flocking in GPS-denied environments on computationally limited agents. Full article