Tutorial

System-Level Testing and Evaluation Plan for Field Robots: A Tutorial with Test Course Layouts

by William R. Norris * and Albert E. Patterson
Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, Transportation Building 117, 104 South Mathews Avenue, Urbana, IL 61801, USA
* Author to whom correspondence should be addressed.
Robotics 2019, 8(4), 83; https://doi.org/10.3390/robotics8040083
Submission received: 22 August 2019 / Revised: 19 September 2019 / Accepted: 19 September 2019 / Published: 22 September 2019
(This article belongs to the Section Agricultural and Field Robotics)

Abstract

Field robotics is an important sub-field of robotics, focusing on systems that must navigate open, unpredictable terrain and perform non-repetitive missions while monitoring and reacting to their surroundings. General testing and validation standards for larger robotic systems, including field robots, have not yet been developed due to a variety of factors, including disagreement over terminology and functional/performance requirements. This tutorial presents a generalized, step-by-step system-level test plan for field robots under manual, semi-autonomous/tele-operated, and autonomous control schemes; this includes a discussion of the requirements and testing parameters, and a set of suggested safety, communications, and behavior evaluation test courses. The testing plan presented here is relevant to both commercial and academic research into field robotics, providing a standardized general testing procedure.

1. Introduction

A significant aspect of robotic system design involves ensuring, through the testing and evaluation of platforms, that requirements and quality standards are satisfied. There are important benefits derived from direct, physical platform tests; evaluation of systems, subsystems, and individual components, especially early in the design cycle, helps to both determine the realistic system capabilities and identify design flaws [1,2,3,4,5]. Identifying the system performance envelope (the limits of system capability) helps the stakeholders decide on the application space, supports platform improvement, and provides realistic limitation information to the customer [6,7,8]. Testing later in the cycle is also required to verify and validate that functional, performance, and quality requirements and specifications are satisfied [7,9,10]. Evaluations can be used to drive standards, which are used to set a minimum acceptable level of performance across an industry and potentially create a commercial and performance advantage [11,12,13]. Unfortunately, general system performance standards for larger platforms have not yet been fully developed, primarily due to a lack of consensus within the robotic system community; some problem areas include disagreement on the definition of system intelligence [14,15,16,17,18], a lack of concurrence on general robotic system functional and performance requirements [19,20,21,22,23], and the lack of general and widely-applicable definitions of appropriate metrics to assess and characterize behavior capabilities [24,25,26,27].
Toward the development of these standards and uniform testing and evaluation methods for field robotics, this tutorial examines the problem logically and develops a testing procedure and series of evaluation courses. The courses are adapted for general field robotic platforms and support systems analysis, finding and fixing performance issues, and defining capability limitations. Field robotic systems may tackle a variety of jobs, such as agricultural work [28,29,30,31,32,33,34,35,36] (Figure 1a,b), firefighting [37,38,39], construction/demolition [40,41,42] (Figure 1c), search-and-rescue missions [43,44,45] (Figure 1d), and work in explosive or hazardous environments [46,47,48,49,50] (Figure 1e), among others. They are generally small platforms (i.e., small vehicles) and may be controlled by-wire (manual control), by semi-autonomous or tele-operated control (SA/TO), or autonomously. According to the Carnegie Mellon University Robotics Institute [51], a good general definition for field robotics is
“the use of mobile robots in field environments such as work sites and natural terrain, where the robots must safeguard themselves while performing non-repetitive tasks and objective sensing as well as self-navigation in random or dynamic environments.”
The purpose of this series of evaluations is to assess the effectiveness (in terms of safety, communication, and behavior) of a given field robotic system; the test courses and testing procedures presented can easily be adapted for a wide variety of platforms. In order to assess the system and define (or take advantage of) its performance envelope, an analysis of the system’s response to a variety of environmental conditions is required. From this analysis, the system can be characterized and the capability limitations for the platform defined, both to establish (or better define) its performance envelope and to identify fundamental design flaws in the system. The results will be used to verify that system requirements have been attained during platform design, to assess and diagnose performance limiters, and to determine where technological enhancements are necessary or possible in further design iterations. In developing a procedure for testing the performance and behavior of a specific field robotic system, five objectives should be satisfied:
  • Assess safety, localization and navigation accuracy, and obstacle detection and avoidance.
  • Develop a series of useful evaluations that will determine or define the performance envelope of field robotic platforms and that may serve as a foundation for evaluating future platforms, including both vehicle and robotic system performance.
  • Develop quantifiable metrics and procedures for evaluating the system behavior.
  • Develop a set of evaluations that can be accomplished within a short time and at minimal cost, depending on the time limitations and needs of the customer and other stakeholders.
  • Ensure that a broad range of potential environments is well represented in the testing and evaluation plan.
This tutorial presents the background (Section 1), the inputs, the procedures, and other important information over several sections of this paper; these include a description of the basic approach (Section 2), a discussion of modes, functions, and behaviors of operation (Section 3), developed test courses for safety and communication (Section 4), developed test courses for behavior evaluation (Section 5), a discussion of obstacle use in the behavior test courses (Section 6), a discussion of how to assess and define terrain complexity (Section 7), and use notes and conclusions (Section 8).

2. Testing Overview and Approach

The testing space of any field robotic system is potentially very complex, often requiring a large number of tests and evaluations. One of the first tasks for the stakeholders before initiating testing is to agree on the set of requirements for it; this both builds the testing plan and ensures that all relevant behaviors are considered within the desired testing framework. Table 1 is an example of such a requirements table, where a set of behavior testing requirements is specified in a matrix relative to the basic control modes for the system (i.e., manual, SA/TO, and autonomous). For each of the cells, it should be decided whether the behavior must be tested (“requirement”) under the given mode or is a desirable outcome (“goal”) but not required; blank cells indicate that testing will not be done for that combination. Table 1 is just an illustration of the process, as the actual set of behavior testing requirements and testing combinations will be system- and stakeholder-dependent.
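As a concrete illustration, such a matrix can be encoded directly in test-planning scripts. The following is a minimal Python sketch; the behavior names and cell values are hypothetical, and the actual entries must come from the stakeholders’ version of Table 1:

```python
# Hypothetical encoding of a Table 1-style behavior-vs-mode matrix.
# Cell values: "requirement" (must be tested), "goal" (desirable), None (not tested).
REQUIREMENTS_MATRIX = {
    # behavior:            (manual,        SA/TO,          autonomous)
    "waypoint_navigation": (None,          "goal",         "requirement"),
    "obstacle_avoidance":  ("goal",        "requirement",  "requirement"),
    "emergency_stop":      ("requirement", "requirement",  "requirement"),
}

MODES = ("manual", "sa_to", "autonomous")

def tests_required(matrix):
    """Yield every (behavior, mode) pair that must be tested."""
    for behavior, cells in matrix.items():
        for mode, cell in zip(MODES, cells):
            if cell == "requirement":
                yield behavior, mode
```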
In order to ensure an optimal evaluation process and to diagnose issues as they begin to occur, the evaluations should begin with the simplest case, with complexity added after each successful platform evaluation. Determining or defining the performance envelope of the system is essential to organizing this, as it will determine the complexity of the tests. There are numerous important considerations within this performance envelope, the importance and dominance of which will depend on the nature of the robotic system and the testing requirements. Three of the most basic are the mode/behavior complexity (i.e., how complex a mission the system can accomplish), the terrain complexity (i.e., the basic operating conditions), and the obstacle complexity (i.e., the complexity and difficulty of the obstacles that the robot must negotiate). These three considerations are shown relative to an example performance envelope in Figure 2. Other sets of vital considerations are environmental complexity (e.g., number of local trees, presence of human or autonomous robot adversaries to negotiate, etc.), weather conditions, GPS availability, and other relevant concerns. While behavior and obstacle complexity are common considerations for all robotic systems, terrain complexity is a much more important consideration for field robots than for those working in a well-defined or flat, open terrain [29,52,53,54]. The testing conditions chosen should match the performance envelope of the system as closely as possible (within the resources and time available) in order to create the most realistic possible testing scenario. Reviewing the definition of field robotics [51], it is clear that real conditions will be too unpredictable and complicated to simulate perfectly, as is the case for most real-world systems, but careful test planning can mitigate this and simulate real conditions as well as possible.
To make the tests as efficient as possible and to ensure that the full behavioral capability within the performance envelope is captured, test courses should be evaluated with increasing obstacle complexity. Testing should begin with the simplest possible case (i.e., a clear path with no obstacles) and work up in stages to static, dynamic, and mixed obstacle environments (Figure 2). The best approach for organizing these tests is to select an operating environment and then gradually increase the complexity of each test to explore the entire range of the performance envelope. In this approach, the obstacle complexity should be evaluated first (x-axis, Figure 2), then mode/behavior complexity (y-axis, Figure 2), then terrain complexity (z-axis, Figure 2), and then any other considerations. In terms of steps (a sort of generic algorithm, sketched in code after this list), this would take the form of:
  • Select given environment and given mode/behavior to evaluate
  • Gradually increase obstacle complexity until either the maximum intended complexity is reached successfully OR failure occurs and a design flaw is exposed. If successful, move to the next step; if failed, exit testing and return to the design phase.
  • Repeat #1–2 for each of the mode/behavior levels
  • Gradually increase the terrain complexity (repeating #1–3 for each case) until either the maximum intended complexity is reached successfully OR failure occurs and a design flaw is exposed. If successful, move to the next step; if failure occurs, exit testing and return to the design phase.
  • Continue until all complexity dimensions are examined according to the test plan, repeating cycles #1–4, and so on.
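The sweep above can be expressed as a simple nested loop. This is a minimal sketch under the assumption that a single hypothetical run_test hook executes one course and reports success or failure; the loop order keeps obstacle complexity as the innermost (first-swept) axis, matching the x-, y-, z-axis order of Figure 2:

```python
# Minimal sketch of the nested complexity sweep described above.
# run_test() is a hypothetical hook that executes one course and returns
# True on success, False when a design flaw is exposed.
def complexity_sweep(terrain_levels, behavior_levels, obstacle_levels, run_test):
    for terrain in terrain_levels:              # z-axis, Figure 2
        for behavior in behavior_levels:        # y-axis, Figure 2
            for obstacles in obstacle_levels:   # x-axis, Figure 2
                if not run_test(terrain, behavior, obstacles):
                    # Failure: exit testing and return to the design phase.
                    return ("failed", terrain, behavior, obstacles)
    return ("passed", None, None, None)
```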
The exact modes/settings used for each of the tests will be specific to the system and its expected operating theater; these will be discussed in depth in Section 3. The robot itself will be considered as a sort of “black box” interacting with its chosen environment and all of the settings and parameters should be based on this consideration. Analysis of subsystems and system identification based on collected data will also be discussed in later sections.
It is assumed in this kind of testing approach that the desired performance envelope for the system is known or predicted before the tests. If these tests are used to explore/establish the limits of the envelope, then the failures in the tests should be carefully evaluated and it should be determined whether these are due to design flaws or to true limits of the performance boundary for the system. The exact number of steps in each case (and therefore the total number of tests) will depend on the testing plan developed by the stakeholders and will vary for each system. The structure of the evaluation plan should consist of five parts, namely (1) the purpose, (2) the requirements for the test, (3) identification or definition of the test variables, (4) specifications for the test, and (5) the final procedure for completing the test and evaluating the results. Figure 3 shows this plan graphically. As previously discussed, any test should be initialized without any obstacles and then increased in complexity as the testing sequence is completed.
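As one illustration (an assumption of this tutorial’s presentation, not a structure prescribed by the original plan), the five-part evaluation plan can be captured in a small record type so that every test is documented uniformly:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EvaluationPlan:
    """Five-part evaluation plan (cf. Figure 3); field names are illustrative."""
    purpose: str                       # (1) why the test exists
    requirements: List[str]            # (2) what the test must satisfy
    variables: Dict[str, str]          # (3) test variables and their definitions
    specifications: Dict[str, str]     # (4) parameter -> required value/range
    procedure: List[str] = field(default_factory=list)  # (5) ordered steps
```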

3. Modes, Functions, and Behaviors of Operation

Successful testing and evaluation requires careful and rigorous definition of the modes, functions, and expected system behaviors. The vehicle itself should be considered to be a “black box”, which needs to interact with an unpredictable environment in the completion of its mission. System input should be varied, starting from the subsystems and going to the level of multiple systems, while assessing and collecting data on the qualitative aspects of the system response (i.e., system identification). Important parameters related to each mode, behavior, or function of a field robotic system are described in this section from a general perspective to act as a starting point for the analysis of a specific system. As previously discussed, the exact parameters, their settings, and their order/level of importance will vary between systems and must be determined or defined by the stakeholders before testing begins.

3.1. Evaluated Modes and Functions

For a general field robotic system, the essential modes and functions of the system can be divided into two categories, namely those related to the vehicle design and those related to the robotics. The modes and functions related to the vehicle are mainly mechanical, such as stability, braking, and actuation; those related to the robotics generally consist of the control, software, and electrical elements of the system. Table 2 shows an example table of elements in both categories. This table is very important for the testing plan, as it helps to identify the elements/subsystems of the robotic system and to define them for useful analysis. The stakeholders should ensure that the modes and functions table for the system to be tested is complete, consistent, and well designed to ensure smooth testing and interpretation/communication of the testing results. This should be done as early in the design process as possible and certainly before any system testing is initialized.

3.2. Diagnostic Performance Parameters

In addition to providing evaluation criteria for determining or defining system capabilities, a further objective should be to provide information on design flaws and system errors so they can be found and (ideally) mitigated. In order to do this, it is essential that the evaluators understand the functions and modes of each subsystem and their interfaces. Not all of the data may be used or analyzed when mapping and evaluating system capabilities, but the presence of useful data is invaluable in determining areas of required system improvement. Therefore, all the tests should acquire, at a minimum, the following information:
  • All inputs to all behaviors,
  • All outputs from all behaviors including those sent to the lower-level control system,
  • All vehicle level control inputs and outputs, including feedback from actuators.
Since the overall system is treated like a black box, it is vital that all inputs to the system be carefully controlled and recorded in order to match each input with the expected behavior. Lower-level system behavior is vital for tracking and evaluating subsystem performance as well.
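A minimal logging sketch follows, assuming JSON-lines data files and simple dictionary-valued signals; the three field groupings mirror the three bullet points above:

```python
import json
import time

def log_sample(log_file, behavior_inputs, behavior_outputs, actuator_feedback):
    """Append one time-stamped black-box record: all behavior inputs, all
    behavior outputs (including commands sent to the lower-level control
    system), and vehicle-level control I/O with actuator feedback."""
    record = {
        "t": time.time(),
        "behavior_inputs": behavior_inputs,
        "behavior_outputs": behavior_outputs,
        "actuator_feedback": actuator_feedback,
    }
    log_file.write(json.dumps(record) + "\n")
```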

3.3. Additional Logged Evaluation Information

In addition to the basic information about system performance, there are several additional pieces of information that may be vital to understanding performance, especially if a failure occurs during testing. These include, but are not limited to,
  • Carefully-defined testing documentation: Test date, test start time, time of day, and other relevant information. It is recommended that the evaluation team devise a numbering and tracking system for tests, especially when many tests are done or the same set of tests are done on different types of field robots.
  • Environmental conditions: General terrain conditions, local foliage, natural obstacles (trees, rocks, and similar), weather conditions (especially note any change in weather conditions during the tests), barometric pressure, relative humidity, local force of gravity, and similar relevant information should be recorded.
  • GPS conditions: Availability of GPS for localization or other uses is vital information for assessing the outcome of tests, especially if the localization and navigation system depends on GPS. It is recommended that the GPS quality and strength be recorded in real time, using a method independent of the system being tested, during any tests that rely on it.
  • Location of natural and artificial obstacles: Carefully record the GPS or map-coordinate locations of each of the obstacles. It is recommended that they be tagged as natural or artificial, as well as whether they are likely to interfere with navigation of and communication with the system; interference will primarily occur if a large obstacle blocks communication with the system or blocks effective GPS coverage. Tabulating the data for each of the obstacles should help, using a layout similar to the example shown in Table 3. Note that GPS coordinates may not be possible to collect for a dynamic obstacle, but an effort should be made in order to have the most rigorous possible testing record.
  • Design, location, and layout of the testing course.
  • Finally, carefully note any unusual or unexpected events during testing (sudden drop in temperature, nearby lightning strike, encounters with animals or unauthorized humans, etc.).
When the stakeholders are compiling the list of additional parameters, care should be taken to include anything that may have some secondary or cause-and-effect relationship with the basic system parameters. What these are is entirely system- and environment-dependent, so it is vital that the design and evaluation teams understand the testing environment very well. These additional data points can greatly speed up diagnostics in the case of a failure or dramatic divergence in expected behavior by the system. This is especially true if the design team is not local or a test is being done on a commercial system and failure diagnostics will require the assistance of the OEM.
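For the obstacle bullet above, one possible record layout, with field names that are illustrative rather than taken from Table 3, is:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ObstacleRecord:
    """One row of an obstacle log in the spirit of Table 3."""
    obstacle_id: str
    kind: str                                   # "static", "dynamic", or "negative"
    origin: str                                 # "natural" or "artificial"
    gps_coords: Optional[Tuple[float, float]]   # (lat, lon); None if dynamic/unknown
    blocks_comms: bool                          # may block communication with the system
    blocks_gps: bool                            # may block effective GPS coverage
```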

3.4. Modes of Operation and Evaluated Functions

The modes of operation/control for the system under consideration can vary dramatically between systems and should be firmly established by the stakeholders. In general, these will be manual control, SA/TO, or autonomous operation or some combination of these. For field robots, waypoint navigation is one of the possible configurations for the autonomous mode. Several functions should be considered for evaluation for all field robot systems, with those relevant to the system operation being included in the tests:
  • Safety,
  • Communication,
  • Manual control,
  • Semi-autonomous/tele-operation (SA/TO),
  • Autonomous navigation (GPS/waypoint navigation).
Note that many tests will involve only some of these evaluation areas, so the complexity of the actual system should be used to determine which are relevant given the mission and testing budget of the system. While these areas of evaluation can be further broken down, at the top level they can be divided as shown in the list above. Examining these areas further, some qualitative performance parameters can be identified (Table 4). These, among other important parameters identified by the stakeholders, should be evaluated during testing in each of the five main areas identified above. Figure 4 shows examples of some of the important parameters for system evaluation, including path discontinuities, heading and distance errors, settling time, settling distance, path overshoot, and phase lead/lag.
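To make such parameters concrete, the sketch below computes two Figure 4-style quantities, maximum distance (cross-track) error and settling distance, from logged (x, y) samples; the function names and the nearest-sample error definition are simplifying assumptions, not part of the original plan. Heading error, overshoot, and settling time can be extracted from the same logs in an analogous way.

```python
import math

def path_metrics(actual, required, tolerance):
    """Illustrative metrics from logged (x, y) samples: the maximum
    cross-track (distance) error and the settling distance (travel until
    the error last exceeds `tolerance`)."""
    def cross_track(p):
        # Distance from point p to the nearest sample of the required path.
        return min(math.dist(p, q) for q in required)

    errors = [cross_track(p) for p in actual]
    max_error = max(errors)
    # Index of the first sample after which the error stays within tolerance.
    settle_idx = 0
    for i, e in enumerate(errors):
        if e > tolerance:
            settle_idx = i + 1
    settled = actual[: settle_idx + 1]
    settling_distance = sum(math.dist(a, b) for a, b in zip(settled, settled[1:]))
    return max_error, settling_distance
```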

3.5. Safeguarding: Obstacle Detection and Avoidance Evaluations

In field robotic system evaluations, an important factor is the delineation between obstacles and objects. Objects are defined as “anything detected by the system in the environment that may impede progress or that can be used for localization and guidance,” while obstacles are defined as “objects that lie in the direct path of the vehicle or threaten to be in its path at a future time”. The objective of a safeguarding evaluation is to determine if the system reaction to both objects and obstacles is appropriate. For the remainder of this paper, obstacles will be classified as either static (stationary, such as a rock or tree), dynamic (moving and possibly unpredictable, such as an animal or adversarial robot), or negative (such as a hole or body of water).
Figure 5 shows the convention and symbols used to describe each of these cases; marker flags will be used to localize the system or to mark waypoints for the system. Examples of obstacles a field robotic system could encounter during use are shown in Figure 5a–e. There may be large static obstacles which are so dense that they may interfere with communication and control by blocking signals and strong GPS coverage (such as large trees), as well as simple static ones (markers, small posts, cones, etc.). Dynamic obstacles may be considered mobile dynamic (such as an animal that may be hostile toward the vehicle) or stationary dynamic (such as a sprinkler, which is stationary but whose streams of water are dynamic and can have a dynamic impact on the vehicle). Finally, negative obstacles are best represented by potholes in the operational path of the vehicle.
When encountering an obstacle, the operator or control system will need to locate it within the framework of the system, decide if it is static, dynamic, or negative, classify it as an object or obstacle, and determine if it may be a threat to the system (especially if it is dynamic). If it is determined to be an obstacle, is dynamic, or presents a threat to the vehicle, the control system (or operator, depending on the system) will then need to respond to it. In the case where a single static or dynamic obstacle is encountered, the most common response will be to stop the vehicle or to bypass the obstacle by driving around it. Figure 6 and Figure 7 demonstrate this behavior for simple cases. In most cases, dynamic obstacles will require a larger minimum stopping distance and obstacle clearance distance due to their unpredictable behavior. Generally, negative obstacles will require a similar response, but this cannot always be determined without further evaluation and examination of the obstacle. In many cases, a negative obstacle should be treated in the same way as a static object or obstacle, but detection by on-board sensors may be difficult; this is an area that needs further research and refinement, but it can be simulated for simple cases as shown here.
In a realistic scenario, it is likely that there will be several or many obstacles/objects to evaluate and negotiate, so the obstacle-based evaluations will need to be made gradually more complex, beginning with simple static obstacles and moving up to cases with many mixed types of obstacles. Depending on the nature of the obstacle avoidance behavior, the parameters for evaluation will include the minimum stopping distance. If the platform is capable of bypassing the obstacle, the minimum clearing distance will be determined as demonstrated in Figure 6 and Figure 7. The GPS coordinates of all obstacles will be required for all evaluations, as well as the vehicle path and the required path.
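Both quantities can be measured directly from logged path data. The following is a rough sketch, assuming paths are lists of (x, y) tuples in course coordinates:

```python
import math

def stopping_distance(path, stop_cmd_index):
    """Arc length traveled from the stop command (a sample index in the
    logged path) to the final halted position, i.e., the measured
    stopping distance for the run."""
    segment = path[stop_cmd_index:]
    return sum(math.dist(a, b) for a, b in zip(segment, segment[1:]))

def clearing_distance(path, obstacle):
    """Closest approach of the logged vehicle path to an obstacle position;
    compared against the required clearance, which is typically larger for
    dynamic obstacles."""
    return min(math.dist(p, obstacle) for p in path)
```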

4. Safety and Communication Evaluation Courses

This section provides directions and suggested layouts of courses to test the vehicle safety and communication during operation. Some modification may be necessary depending on the system under study, but this approach is a good general starting place for generating a testing plan. During these tests, the autonomous mode will be guided using waypoint navigation or GPS (or another useful method), following a generated path to complete a mission and responding to encountered obstacles. During SA/TO tests and manual control, the course should be laid out using objects undetectable by the vehicle but visible to the operator (visually or through an on-board camera). Due to the nature of testing unproven platforms, the remote kill switch (“e-stop”) should be monitored at all times, with the operator prepared to activate it at any time should the vehicle react incorrectly or become uncontrollable. Finally, the testing should be done in a remote location, away from any human or animal bystanders and located so that system failures and accidents will have the smallest possible negative impact.

4.1. Safety Evaluation 1 (SE1): Control Stop Response

As described in Figure 3, the evaluation plan should be laid out in five basic parts: (1) purpose, (2) requirements, (3) variables, (4) specifications, and (5) final procedure. The first four are shown in Table 5 below, followed by the complete test procedure.

SE1 Testing Procedure

  • The vehicle will proceed from the location designated in the diagram (Figure 8) from a stopped position and accelerate to its required speed
  • At the end of the first run through the design speed region, the remote stop or brakes are applied
  • Data files will be appropriately labeled for record and analysis for each mode and iteration (a file-naming sketch is given after this procedure)
  • Data will be post-processed based on the measurement criteria developed by the stakeholders to evaluate the vehicle’s performance
  • Manual control mode test iterations:
    (a)
    Iteration 1: Operator on user interface sets vehicle to design speed along course and user interface operator applies control stop
    (b)
    Iteration 2: Operator on user interface sets vehicle to design speed along course and a vehicle escort applies e-stop
    (c)
    Iteration 3: Operator on user interface accelerates vehicle to maximum speed along course and user interface operator applies control stop
    (d)
    Iteration 4: Operator on user interface accelerates vehicle to maximum speed along course and vehicle escort applies e-stop
  • SA/TO mode test iterations: In this case, it is assumed that the e-stop is part of the manual portion of the system, so the iterations are the same as those for the manual control mode.
  • In autonomous mode, the vehicle will be provided with the coordinates of the course boundaries and will generate a trajectory for the vehicle to follow. Since the course for this test is very simple, additional waypoints or navigation features are likely not necessary. The application plan for the e-stop will also generally be configured the same as that for the manual and SA/TO modes and so it should follow the same iterations.
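For the data-labeling step referenced above, a simple naming convention helps keep the many iterations traceable; the scheme below is purely an assumption for illustration:

```python
from datetime import datetime, timezone

def test_data_filename(course, mode, iteration):
    """Assumed naming convention encoding course ID, control mode, iteration,
    and a UTC timestamp so every run is uniquely traceable."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{course}_{mode}_iter{iteration:02d}_{stamp}.log"

# e.g., test_data_filename("SE1", "manual", 3)
#   -> "SE1_manual_iter03_20190922T140501Z.log"
```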

4.2. Safety Evaluation 2 (SE2): External Detectability of Vehicle Warning Devices

This test is the second of the safety tests and examines the usefulness of vehicle warning devices. The focus will be on ensuring that any nearby humans or animals can easily locate and avoid the vehicle during operation. Note that this test may not be needed or desirable for military applications or others in which the location of the vehicle should not be advertised. Table 6 shows the basics for this test, which is followed by the detailed procedure for completing it.

SE2 Testing Procedure

  • The vehicle will remain stationary.
  • The vehicle will be put into manual, SA/TO, and autonomous modes in different iterations to ensure that all modes drive the warning system effectively.
  • The warning signals (i.e., lights, beeps, sounds) will be gauged for their effective detectable range from the vehicle.
  • Regions where the audio and visual cues may not be heard or seen should be noted.

4.3. Safety Evaluation 3 (SE3): Obstacle Detection Verification

This test, the third of the safety and communication tests, will evaluate the range and reliability of the detection instruments (cameras, LIDAR, etc.) on the vehicle. The setup for the test is shown in Table 7 below. Figure 9a shows the basic course layout, measured in terms of radius from the vehicle. Example regions expected to be covered by various kinds of detection tools (2D LIDAR, 3D LIDAR, camera, and ultrasonic sensors), as well as an example of a detection blind spot, are demonstrated in Figure 9a,b.

SE3 Testing Procedure

  • The vehicle will remain stationary.
  • The vehicle will be put into autonomous navigation mode.
  • Place objects in locations around the vehicle (objects of various sizes and shapes, such as flags, traffic cones, balls, rocks, tree branches, etc.).
  • Determine if the obstacle detection regions are the same as those in the requirements.
  • Find the blind spots for each of the detection methods and note on a diagram similar to Figure 9a,b.

4.4. Communications Evaluation: Communication Range and Effects

This final safety and communications evaluation will determine the effective limitations of the communication system within the vehicle, control system, and user interface. The setup of the tests is shown in Table 8, followed by a detailed testing procedure and figure to demonstrate the concept.

Communications Evaluation Testing Procedure

  • The trajectory planner for the vehicle will be provided with the relative coordinates of the course boundaries and will generate a trajectory for the vehicle (or operator) to follow; alternatively, the operator on the user interface may provide the required path manually if needed.
  • The operator on the user interface will attempt to drive the required course to operate in the transition region and maximum range region.
  • Controllability issues on the user interface will be monitored and logged.
  • The course will be run in all modes (manual, SA/TO, and autonomous, as applicable).
  • The overall effective performance envelope will be determined based on controllability and the logged data.
  • The evaluators should take advantage of interruptions in line of sight and obstructions to test communication quality and reliability.
  • Data files will be appropriately labeled for record and analysis for each mode and iteration.
  • Data will be post-processed based on the measurement criteria contained in the vehicle requirements document and testing plan.
  • Manual operation:
    (a)
    Manual mode will be initiated on the evaluated platform.
    (b)
    The operator will drive the vehicle via the manual interface on the user interface.
    (c)
    The operator will drive the vehicle away from the communication broadcast antenna (Figure 10).
    (d)
    The vehicle will proceed from the location designated in the diagram (Figure 10) from a stopped position and the remote tele-operator will accelerate to design operating speed.
    (e)
    The operator will assess controllability and performance throughout operation until the system is out of range (Figure 10).
  • Semi-autonomous/tele-operated (SA/TO):
    (a)
    SA/TO mode will be initiated on the evaluated platform.
    (b)
    The remaining steps are the same as those for the manual mode.
  • Autonomous mode:
    (a)
    Autonomous navigation mode will be initiated on the evaluated platform.
    (b)
    The course layout or waypoints will be given to the vehicle through the user interface or on-board computer.
    (c)
    The vehicle will proceed from the location designated in the diagram (Figure 10) from a stopped position and the autonomous navigation behavior will accelerate to design operating speed.
    (d)
    The operator will monitor the vehicle for performance and safety until the system is out of range (Figure 10).

5. Behavior Evaluation Courses

Once the safety and communications testing courses have been completed, the field robotic system may then be tested for performance and behavior. The real test of system quality and design will be the behavior evaluations, so great care should be taken by the evaluators to conduct the tests carefully. The testing plans and courses presented in this section are for general field robot systems and may not be sufficient or applicable to all systems, so the stakeholders should make a rigorous testing and evaluation plan for their specific system that takes into account the requirements and realistic operating environment of the system.

5.1. Behavior Evaluation 1 (BE1)

The first of the behavior tests looks at basic locomotion, focusing on driving in a straight line, turning with a fixed radius, and turning sharply. Table 9 shows the test setup, followed by the detailed test procedure and a suggested course layout. Finally, the use of obstacles on this course will be discussed in detail, with example layouts given.

BE1 Testing Procedure

  • Manual mode:
    (a)
    Manual mode will be initiated on the evaluated platform.
    (b)
    The operator will drive the vehicle via the manual interface on the user interface.
    (c)
    The operator will drive the vehicle through the flag markers of the course (Figure 11).
    (d)
    The vehicle will proceed from the location designated in Figure 11 from a stopped position and accelerate to design operating speed.
    (e)
    Test iterations will progress from a straight line to curved courses in the left and right directions, beginning at the minimum turn radius specified in the system requirements, with each radius applied in both directions.
    (f)
    Additional iterations will check the effect of course discontinuities on path following performance. The discontinuities will be applied in the left and right directions starting at a small angle and increasing in several steps.
    (g)
    Data files will be appropriately labeled for record and analysis for each mode and iteration.
    (h)
    Data will be post-processed, based on the measurement criteria presented in the requirements document and testing plan, to evaluate the vehicle’s performance.
  • Semi-autonomous/tele-operated (SA/TO) mode:
    (a)
    Tele-operation mode will be initiated on the evaluated platform.
    (b)
    The other steps for SA/TO mode are identical to Steps (b)–(h) for manual mode.
  • Autonomous navigation mode:
    (a)
    Autonomous navigation mode will be initiated on the evaluated platform.
    (b)
    The course layout or waypoint locations will be loaded on the vehicle.
    (c)
    The vehicle will navigate through the course autonomously, with the operator and vehicle supervisor only intervening in case of an emergency or loss of vehicle control.
    (d)
    The vehicle will proceed from the location designated in the diagram (Figure 11) from a stopped position and the on-board computer and autonomous controls will accelerate to design operating speed.
    (e)
    The remaining steps are identical to those described in Steps (f)–(h) for the manual mode.

5.2. Behavior Evaluation 2 (BE2)

This course tests the behaviors of the system (both vehicle and controller) under small disturbances and other effects relative to trajectory error and required effort to correct the error. The setup for the test is shown in Table 10, followed by the detailed evaluation procedure and suggested course layout.

BE2 Testing Procedure

  • Manual mode:
    (a)
    Manual mode will be initiated on the evaluated platform.
    (b)
    The operator will drive the vehicle via the manual interface on the user interface.
    (c)
    The operator will drive the vehicle through the flag markers.
    (d)
    The vehicle will proceed from the location designated in Figure 12 from a stopped position and the remote operator will accelerate to design operating speed.
    (e)
    Data files will be appropriately labeled for record and analysis for each mode and iteration.
    (f)
    Data will be post processed based on the measurement criteria.
  • Semi-autonomous/tele-operation (SA/TO) mode:
    (a)
    SA/TO mode will be initiated on the evaluated platform.
    (b)
    The remaining steps are identical to those for the manual mode.
  • Autonomous navigation:
    (a)
    Autonomous navigation mode will be initiated on the evaluated platform.
    (b)
    The course layout will be loaded on the vehicle computer and will be used to navigate through the course.
    (c)
    The vehicle will proceed from the location designated in Figure 12 from a stopped position and then accelerate to design operating speed.
    (d)
    Data files will be appropriately labeled for record and analysis for each mode and iteration.
    (e)
    Data will be post processed based on the measurement criteria.

5.3. Behavior Evaluation 3 (BE3)

This course tests the handling and navigation issues and effects of controller lag. The setup table is shown in Table 11 below; the detailed procedure, course layout, and course layout with obstacles are given as well.

BE3 Testing Procedure

  • Manual mode:
    (a)
    Manual mode will be initiated on the evaluated platform.
    (b)
    The operator will drive the vehicle via the manual interface on the user interface.
    (c)
    The operator will drive the vehicle through the flag markers.
    (d)
    The vehicle will proceed from the location designated in Figure 13 from a stopped position and the remote operator will accelerate to design operating speed.
    (e)
    Data files will be appropriately labeled for record and analysis for each mode and iteration.
    (f)
    Data will be post processed based on the established measurement criteria.
  • Semi-autonomous/tele-operated mode (SA/TO):
    (a)
    SA/TO mode will be initiated on the evaluated platform.
    (b)
    The remaining steps are identical to those done for manual mode.
  • Autonomous mode:
    (a)
    Autonomous mode will be initiated on the evaluated platform.
    (b)
    The course layout will be loaded on the vehicle computer.
    (c)
    The vehicle will navigate autonomously through the course.
    (d)
    The vehicle will proceed from the location designated in Figure 13 from a stopped position and the autonomous navigation mode will accelerate to design operating speed.
    (e)
    Data files will be appropriately labeled for record and analysis for each mode and iteration.
    (f)
    Data will be post processed based on the established measurement criteria.

5.4. Behavior Evaluation 4 (BE4)

This course, the fourth of the behavior evaluations presented in this tutorial, examines the steering response of the vehicle and controller to various conditions and disturbances. Table 12 provides the setup for the test, which is followed by the detailed procedure and an example layout of the course. The use of obstacles on the course is also presented, along with obstacle-loaded course layouts. This test will not examine manual mode operation, since this mode is not relevant to the goals of the evaluation.

BE4 Testing Procedure

  • Semi-autonomous/tele-operated (SA/TO):
    (a)
    SA/TO mode will be initiated on the evaluated platform.
    (b)
    The operator will drive the vehicle via the SA/TO interface on the user interface.
    (c)
    The operator will drive the vehicle through the flag markers.
    (d)
    The vehicle will proceed from the location designated in Figure 14 from a stopped position and the remote operator will accelerate to design operating speed.
    (e)
    Test iterations will continue from the minimum vehicle turn radius and incrementally increase the radius until it reaches the maximum specified radius or failure occurs.
    (f)
    Data files will be appropriately labeled for record and analysis for each mode and iteration.
    (g)
    Data will be post processed based on the established measurement criteria.
  • Autonomous navigation:
    (a)
    Autonomous mode will be initiated on the evaluated platform.
    (b)
    The course layout will be loaded on the vehicle computer.
    (c)
    The vehicle will navigate autonomously through the course.
    (d)
    The vehicle will proceed from the location designated in Figure 14 from a stopped position and the autonomous navigation mode will accelerate to design operating speed.
    (e)
    All remaining steps are identical to those for SA/TO mode.

5.5. Behavior Evaluation 5 (BE5)

The fifth and final suggested testing course for this tutorial examines the steering behavior of an autonomous controller using a spiral-shaped course. This course is not useful for the manual and SA/TO modes, as it focuses on the controller. The setup table is shown below (Table 13), followed by a detailed testing procedure and an example course layout. Note that this course is not recommended for obstacle-based evaluations.

BE5 Test Procedure

  • Autonomous navigation mode will be initiated on the evaluated platform
  • The course layout will be loaded on the vehicle for autonomous navigation following a well-defined spiral pattern (Figure 15), described by the expression below; a waypoint-sampling sketch is given after this procedure.
    x(θ) = 0.5 θ cos(θ),   y(θ) = 0.5 θ sin(θ),   0 ≤ θ ≤ 8π.
  • The vehicle will proceed from the location designated in Figure 15 from a stopped position and the autonomous navigation behavior will accelerate to design operating speed.
  • Data files will be appropriately labeled for record and analysis for each mode and iteration.
  • Data will be post processed based on the established measurement criteria.
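The spiral above is straightforward to sample into waypoints for loading onto the vehicle. A minimal sketch follows, where the sampling density n is an assumed parameter rather than a value from the course definition:

```python
import math

def spiral_waypoints(n=200, a=0.5, theta_max=8 * math.pi):
    """Waypoints on the BE5 spiral x = a*theta*cos(theta),
    y = a*theta*sin(theta), 0 <= theta <= 8*pi; a = 0.5 per the course
    definition, while n is an assumed sampling density."""
    thetas = (i * theta_max / (n - 1) for i in range(n))
    return [(a * t * math.cos(t), a * t * math.sin(t)) for t in thetas]
```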

6. Behavior Evaluation Courses with Obstacles

The courses outlined for the behavior evaluations can be repeated using static, dynamic, and negative obstacles once the evaluator is satisfied with the basic performance. Figure 16, Figure 17, Figure 18 and Figure 19 give example obstacle layouts for the first four behavior evaluations, where individual or groups of obstacles can be used to simulate a wide variety of situations. With static and dynamic objects and obstacles, the number, packing density, diversity, and layout can be varied, while negative obstacles should be evaluated for different diameters and depths. Due to the complexity of negative obstacles, it is recommended (until detection and avoidance technology is improved) that the behavior evaluations using them employ just single obstacles.

7. Terrain Complexity

Each of the test courses described in Section 4 and Section 5 can be run on a wide variety of terrain conditions. When selecting the terrain condition for testing, the worst-case scenario should be used, with settings at the upper values of the boundary conditions. Figure 20 shows examples of each of the major types of testing terrain previously described in this tutorial, namely (a) asphalt, (b) flat turf, (c) dirt road, (d) sand, (e) gravel, (f) high grass, (g) forest, and (h) urban. For each of the conditions tested, a table should be made with the major parameters for the specific local conditions; Table 14 gives an example for a gravel road. In this case, the best example of a negative obstacle would be road damage and potholes. Ideally, these terrain condition tables should be included in the system requirements document, but they should certainly be completed before any testing begins.
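An illustrative way to record such a table alongside the logged test data is a simple key-value structure; all parameter names and values below are assumptions for illustration, not entries from Table 14:

```python
# Illustrative terrain-condition record for a gravel road (cf. Table 14).
GRAVEL_ROAD_CONDITIONS = {
    "surface": "gravel",
    "grade_percent": 5.0,            # worst-case slope to test against
    "surface_friction": "loose/low",
    "dust_level": "moderate",
    "negative_obstacles": "road damage and potholes",
}
```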

8. User Notes and Conclusions

This tutorial presented a method for realistic testing and evaluation of field robot systems, focusing on safety, communication, and vehicle/controller behavior. Physical evaluation of platforms is essential for robust and rigorous system design, helping to find system flaws and needed future technology development, ensuring that all requirements for the system are satisfied by the final design (i.e., system verification), and providing confidence that all important system characteristics are captured in the requirements (i.e., system validation). Rigorous testing and verification/validation are essential elements for system accreditation, a major milestone on the way to producing a marketable product or military-ready system.
The user of this tutorial should note some limitations, advice, caveats, and assumptions contained within it and make any necessary adjustments before implementing the suggested testing plan and evaluation courses. These include:
  • It is assumed that the vehicle size will be that of a typical field robot, i.e., large enough to complete a complex mission but small enough to be transported with a truck and trailer (larger than a small lawnmower and up to the size of a standard farm tractor). For larger vehicles, the course distances and the locations of the flags (for manual and tele-operated modes) may need to be appropriately modified.
  • All of the needed safety and communication evaluations must be done before any behavioral ones. The reason for this is obvious, as it is vital that the system remain safe and under the control of a human operator at all times when the system is powered on, even when it is not currently in a test course.
  • The testing courses suggested include three safety courses, a communications course, and five behavior courses. All evaluate different aspects of the system and all should be done for the most complete testing sequence. However, completing all of them may be very expensive and time-consuming; the design and evaluation teams (and other stakeholders, as appropriate) should carefully evaluate the system requirements and ensure that any tests done are effective and add value to the process. This should not be taken as license to cut corners or only test to a minimum to save time and money. A reasonable balance should be struck to gather the most information possible in an efficient way within the allowable testing timeframe.
  • There may be cases in system development where some tests may not be needed simply because the system utilizes controller or vehicle parts which are already proven and verified. In these cases, testing can be skipped but an interface–effects analysis should be done to ensure that the previously verified/validated system component does not have unknown or unpredictable effects on the system due to interfacing with new parts and subsystems.
  • Care should be taken to avoid any environmental damage or waste during the testing process; though not explicitly stated during the courses, this is an important consideration that cannot be cut back or ignored. For these tests, relatively simple actions can accomplish most of this, such as ensuring that any natural testing courses are cleaned up after testing and using efficient means for charging batteries. Any crashed or disabled hardware must be recovered as quickly as possible to avoid site contamination from the batteries, electrical elements, and plastic components.
  • Some of the courses require two operators (one to remotely control or monitor the vehicle and one to escort the vehicle); however, for safety reasons, it is recommended that there be several people present during testing and that the operators be changed out frequently to avoid fatigue and distraction.
  • Vehicles customized for specific terrain (such as hillsides or near bodies of water) may require some additional testing, but this will need to be determined by the testing team. This is an important area of future research work related to field robot testing.
  • This tutorial assumes that the vehicles being tested are generally electric (i.e., battery powered), but other power sources could be used easily (e.g., internal combustion engines or compressed gas). In these cases, the response time of the power source and controller must be taken into account when selecting course marker distances and navigation methods. Vibration from an engine would also need to be considered relative to the performance of the platform, but this is generally considered to be a basic design problem and therefore outside the scope of this tutorial.
  • If a vehicle or controller is showing signs of unreliable behavior or the vehicle becomes unstable/uncontrollable during earlier tests, it is imperative that testing cease until these problems are addressed. The sequence of test courses generally becomes more complicated, so continuing to test after a system flaw has been uncovered carries great risk. This could endanger the operators, any nearby bystanders (human or wildlife), and the local environment.
  • It is assumed in these tests that the region will have reasonably good GPS coverage; if this is not the case, then additional testing elements will need to be added to ensure safe and effective navigation of the autonomous and some SA/TO vehicles.
  • The terrain complexity was discussed, but it is possible for the local terrain for the tests to be a mixture of these types or a different environment completely. The terrain evaluation is something the evaluation team should be careful about; it is suggested that any testing area other than a university campus or company/government testing ground be visited by the stakeholders and mapped using UAVs before a testing plan is established for the system.
  • When using the courses with obstacles, care should be taken to use realistic obstacles and objects for the system (or operator) to detect and negotiate. These should also be collected and safely stored or disposed of after the testing courses are completed to avoid any local environmental damage.
This tutorial, when used under the stated assumptions, will be a useful guide for real-world testing of field robotic systems under manual, semi-autonomous/tele-operated, and autonomous control schemes. It will benefit both commercial (including military contractor work) and academic researchers, giving a logical and easy-to-follow testing guide which may be used for whole field robotic systems or only parts (vehicles, controllers, communication systems, instruments, etc.), as needed by the user.

Author Contributions

W.R.N. developed the approach and associated testing concepts and test plan as a personal project while working in the robotics industry and holds the copyright for the original concept and plan, validating it using several proprietary field robotics systems. A.E.P. completed the background review, introduction, and user notes, provided a systems engineering perspective in some sections, assisted with reviewing and generalizing the given approach, and created some figures and adapted others from previous unpublished technical reports. Both authors contributed to writing and editing the manuscript.

Funding

This work received no external funding.

Conflicts of Interest

The authors declare no conflict of interest. No external funding was used to perform the work described in this study. Opinions and conclusions presented in this work are solely those of the authors.

References

  1. Lolas, S.; Olatunbosun, O. Prediction of vehicle reliability performance using artificial neural networks. Expert Syst. Appl. 2008, 34, 2360–2369.
  2. Kalra, N.; Paddock, S.M. Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? Transp. Res. Part A Policy Pract. 2016, 94, 182–193.
  3. Carlson, J.; Murphy, R. Reliability analysis of mobile robots. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422), Taipei, Taiwan, 14–19 September 2003.
  4. Blanchard, B.S.; Fabrycky, W.J. Systems Engineering and Analysis, 4th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2005.
  5. Pahl, G.; Beitz, W.; Feldhusen, J.; Grote, K.H. Engineering Design: A Systematic Approach, 3rd ed.; Springer: Berlin, Germany, 2007.
  6. Miles, F.; Wilson, T. Managing project risk and the performance envelope. In Proceedings of the APEC 1998 Thirteenth Annual Applied Power Electronics Conference and Exposition, Anaheim, CA, USA, 5–19 February 1998.
  7. Wasson, C.S. System Analysis, Design, and Development; Wiley-Interscience: Hoboken, NJ, USA, 2006.
  8. NASA. NASA Systems Engineering Handbook: NASA/SP-2016-6105 Rev2-Full Color Version; 12th Media Services: Suwanee, GA, USA, 2017.
  9. O’Keefe, R.M.; O’Leary, D.E. Expert system verification and validation: A survey and tutorial. Artif. Intell. Rev. 1993, 7, 3–42.
  10. Sargent, R.G. Verification and validation of simulation models. In Proceedings of the 2010 Winter Simulation Conference, Baltimore, MD, USA, 5–8 December 2010.
  11. Ma, H. Competitive advantage and firm performance. Compet. Rev. 2000, 10, 15–32.
  12. Stewart, G. Supply chain performance benchmarking study reveals keys to supply chain excellence. Logist. Inf. Manag. 1995, 8, 38–44.
  13. Hua, S.Y.; Wemmerlov, U. Product Change Intensity, Product Advantage, and Market Performance: An Empirical Investigation of the PC Industry. J. Prod. Innov. Manag. 2006, 23, 316–329.
  14. Beni, G.; Wang, J. Swarm Intelligence in Cellular Robotic Systems. In Robots and Biological Systems: Towards a New Bionics?; Springer: Berlin/Heidelberg, Germany, 1993; pp. 703–712.
  15. Parker, L.E. Distributed intelligence: Overview of the field and its application in multi-robot systems. J. Phys. Agent. (JoPha) 2008, 2, 5–14.
  16. Blum, C.; Groß, R. Swarm Intelligence in Optimization and Robotics. In Springer Handbook of Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2015; pp. 1291–1309.
  17. Bryson, J.; Winfield, A. Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems. Computer 2017, 50, 116–119.
  18. Qureshi, A.H.; Nakamura, Y.; Yoshikawa, Y.; Ishiguro, H. Robot gains social intelligence through multimodal deep reinforcement learning. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15–17 November 2016.
  19. Sweet, L.; Good, M. Re-definition of the robot motion control problem: Effects of plant dynamics, drive system constraints, and user requirements. In Proceedings of the 23rd IEEE Conference on Decision and Control, Las Vegas, NV, USA, 12–14 December 1984.
  20. Messina, E.; Jacoff, A. Performance standards for urban search and rescue robots. In Unmanned Systems Technology VIII; Gerhart, G.R., Shoemaker, C.M., Gage, D.W., Eds.; SPIE: Bellingham, WA, USA, 2006.
  21. Summers, M. Robot Capability Test and Development of Industrial Robot Positioning System for the Aerospace Industry; SAE Technical Paper Series; SAE International: Warrendale, PA, USA, 2005.
  22. Zinn, M.; Roth, B.; Khatib, O.; Salisbury, J.K. A New Actuation Approach for Human Friendly Robot Design. Int. J. Robot. Res. 2004, 23, 379–398.
  23. Psomopoulou, E.; Theodorakopoulos, A.; Doulgeri, Z.; Rovithakis, G.A. Prescribed Performance Tracking of a Variable Stiffness Actuated Robot. IEEE Trans. Control Syst. Technol. 2015, 23, 1914–1926.
  24. Yan, Z.; Fabresse, L.; Laval, J.; Bouraqadi, N. Metrics for performance benchmarking of multi-robot exploration. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015.
  25. Stoller, O.; Schindelholz, M.; Hunt, K.J. Robot-Assisted End-Effector-Based Stair Climbing for Cardiopulmonary Exercise Testing: Feasibility, Reliability, and Repeatability. PLoS ONE 2016, 11, e0148932.
  26. Aly, A.; Griffiths, S.; Stramandinoli, F. Metrics and benchmarks in human-robot interaction: Recent advances in cognitive robotics. Cognit. Syst. Res. 2017, 43, 313–323.
  27. Wyk, K.V.; Culleton, M.; Falco, J.; Kelly, K. Comparative Peg-in-Hole Testing of a Force-Based Manipulation Controlled Robotic Hand. IEEE Trans. Robot. 2018, 34, 542–549.
  28. Emmi, L.; de Soto, M.G.; Pajares, G.; de Santos, P.G. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots. Sci. World J. 2014, 2014, 1–21.
  29. Grimstad, L.; From, P. The Thorvald II Agricultural Robotic System. Robotics 2017, 6, 24.
  30. Ortiz, J.M.; Olivares, M. A Vision Based Navigation System for an Agricultural Field Robot. In Proceedings of the 2006 IEEE 3rd Latin American Robotics Symposium, Santiago, Chile, 26–27 October 2006.
  31. Edan, Y. Design of an autonomous agricultural robot. Appl. Intell. 1995, 5, 41–50.
  32. Ye, Y.; Wang, Z.; Jones, D.; He, L.; Taylor, M.; Hollinger, G.; Zhang, Q. Bin-Dog: A Robotic Platform for Bin Management in Orchards. Robotics 2017, 6, 12.
  33. Wu, X.; Aravecchia, S.; Pradalier, C. Design and Implementation of Computer Vision based In-Row Weeding System. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019.
  34. Vidoni, R.; Gallo, R.; Ristorto, G.; Carabin, G.; Mazzetto, F.; Scalera, L.; Gasparetto, A. ByeLab: An Agricultural Mobile Robot Prototype for Proximal Sensing and Precision Farming. In Proceedings of the 2017 ASME International Mechanical Engineering Congress and Exposition (IMECE), Volume 4A: Dynamics, Vibration, and Control, Tampa, FL, USA, 3–9 November 2017.
  35. Bietresato, M.; Carabin, G.; D’Auria, D.; Gallo, R.; Ristorto, G.; Mazzetto, F.; Vidoni, R.; Gasparetto, A.; Scalera, L. A tracked mobile robotic lab for monitoring the plants volume and health. In Proceedings of the 2016 12th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA), Auckland, New Zealand, 29–31 August 2016.
  36. Amatya, S.; Karkee, M.; Zhang, Q.; Whiting, M.D. Automated Detection of Branch Shaking Locations for Robotic Cherry Harvesting Using Machine Vision. Robotics 2017, 6, 31.
  37. Chien, T.L.; Guo, H.; Su, K.L.; Shiau, S.V. Develop a Multiple Interface Based Fire Fighting Robot. In Proceedings of the 2007 IEEE International Conference on Mechatronics, Changchun, China, 8–10 May 2007.
  38. AlHaza, T.; Alsadoon, A.; Alhusinan, Z.; Jarwali, M.; Alsaif, K. New Concept for Indoor Fire Fighting Robot. Procedia-Soc. Behav. Sci. 2015, 195, 2343–2352.
  39. Hassanein, A.; Elhawary, M.; Jaber, N.; El-Abd, M. An autonomous firefighting robot. In Proceedings of the 2015 International Conference on Advanced Robotics (ICAR), Istanbul, Turkey, 27–31 July 2015.
  40. Derlukiewicz, D. Application of a Design and Construction Method Based on a Study of User Needs in the Prevention of Accidents Involving Operators of Demolition Robots. Appl. Sci. 2019, 9, 1500.
  41. Lee, S.Y.; Lee, Y.S.; Park, B.S.; Lee, S.H.; Han, C.S. MFR (Multipurpose Field Robot) for installing construction materials. Auton. Robots 2007, 22, 265–280.
  42. Yamada, H.; Tao, N.; DingXuan, Z. Construction Tele-robot System With Virtual Reality. In Proceedings of the 2008 IEEE Conference on Robotics, Automation and Mechatronics, Chengdu, China, 21–24 September 2008.
  43. Zhao, J.; Gao, J.; Zhao, F.; Liu, Y. A Search-and-Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment. Sensors 2017, 17, 2426. [Google Scholar] [CrossRef] [PubMed]
  44. Casper, J.; Murphy, R. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2003, 33, 367–385. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Nourbakhsh, I.; Sycara, K.; Koes, M.; Yong, M.; Lewis, M.; Burion, S. Human-Robot Teaming for Search and Rescue. IEEE Pervasive Comput. 2005, 4, 72–78. [Google Scholar] [CrossRef] [Green Version]
  46. Novák, P.; Kot, T.; Babjak, J.; Konečný, Z.; Moczulski, W.; López, Á.R. Implementation of Explosion Safety Regulations in Design of a Mobile Robot for Coal Mines. Appl. Sci. 2018, 8, 2300. [Google Scholar] [CrossRef]
  47. Lee, C.H.; Kim, S.H.; Kang, S.C.; Kim, M.S.; Kwak, Y.K. Double-track mobile robot for hazardous environment applications. Adv. Robot. 2003, 17, 447–459. [Google Scholar] [CrossRef] [Green Version]
  48. Gelhaus, F.E.; Roman, H.T. Robot applications in nuclear power plants. Prog. Nucl. Energy 1990, 23, 1–33. [Google Scholar] [CrossRef]
  49. Hirose, S.; Kato, K. Development of quadruped walking robot with the mission of mine detection and removal-proposal of shape-feedback master-slave arm. In Proceedings of the 1998 IEEE International Conference on Robotics and Automation (Cat. No.98CH36146), Leuven, Belgium, 20 May 1998. [Google Scholar] [CrossRef]
  50. Gardner, C.W.; Wentworth, R.; Treado, P.J.; Batavia, P.; Gilbert, G. Remote chemical biological and explosive agent detection using a robot-based Raman detector. In Unmanned Systems Technology X; Gerhart, G.R., Gage, D.W., Shoemaker, C.M., Eds.; SPIE: Bellingham, WA, USA, 2008. [Google Scholar] [CrossRef]
  51. Carnegie Mellon University. Field Robotics Center. Available online: https://www.ri.cmu.edu/robotics-groups/field-robotics-center/ (accessed on 30 May 2019).
  52. Santamaria-Navarro, À.; Teniente, E.H.; Morta, M.; Andrade-Cetto, J. Terrain Classification in Complex Three-dimensional Outdoor Environments. J. Field Robot. 2014, 32, 42–60. [Google Scholar] [CrossRef] [Green Version]
  53. Zhou, S.; Xi, J.; McDaniel, M.W.; Nishihata, T.; Salesses, P.; Iagnemma, K. Self-supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain. J. Field Robot. 2012, 29, 277–297. [Google Scholar] [CrossRef]
  54. Peynot, T.; Lui, S.T.; McAllister, R.; Fitch, R.; Sukkarieh, S. Learned Stochastic Mobility Prediction for Planning with Control Uncertainty on Unstructured Terrain. J. Field Robot. 2014, 31, 969–995. [Google Scholar] [CrossRef] [Green Version]
  55. Felius, M.; Beerling, M.L.; Buchanan, D.; Theunissen, B.; Koolmees, P.; Lenstra, J. On the History of Cattle Genetic Resources. Diversity 2014, 6, 705–750. [Google Scholar] [CrossRef] [Green Version]
  56. Duarte-Campos, L.; Wijnberg, K.; Hulscher, S. Estimating Annual Onshore Aeolian Sand Supply from the Intertidal Beach Using an Aggregated-Scale Transport Formula. J. Mar. Sci. Eng. 2018, 6, 127. [Google Scholar] [CrossRef]
Figure 1. Example field robotic platforms for (a,b) agriculture [28,29], (c) construction and demolition [40], (d) search-and-rescue [43], and (e) working in explosive or hazardous environments [46] (All figures originally published under CC-BY-4.0 license).
Figure 2. Example application space complexity for mode, obstacle, and terrain complexity. Other dimensions may include environmental complexity, weather conditions, GPS availability, etc.
Figure 3. Structure for developing the evaluation plan, beginning with definition of purpose and ending with a detailed procedure.
Figure 4. Illustration of some qualitative parameters for system evaluation: (a) path discontinuities, (b) heading and distance errors, (c) settling time, settling distance, and overshoot, and (d) phase lead and phase lag.
Figure 5. Static, dynamic, and negative obstacle symbols, with the flag marker for navigation and localization shown for further reference. Realistic examples of (a) a large obstacle which may block communication and interfere with navigation (large tree), (b) a simple static obstacle which does not interfere with navigation (driveway marker), (c) a mobile dynamic obstacle (animal) [55], (d) a stationary dynamic obstacle (lawn sprinkler), and (e) a negative obstacle (road pothole).
Figure 6. Example vehicle navigation response behaviors to a simple static obstacle.
Figure 7. Example vehicle navigation response behaviors to a simple dynamic obstacle.
Figure 8. Suggested test course for Safety Evaluation 1 (SE1).
Figure 9. Evaluation of (a) blind spots and obstacle distances, as well as (b) ranges and patterns for the instruments used to detect objects. Examples of 2D and 3D LIDAR, camera, and ultrasonic sensor ranges and patterns are given in (b).
Figure 10. Communication and range evaluation course example layout.
Figure 11. Course layout for Behavior Evaluation 1, testing (a) straight-line driving, (b) curved turns, and (c) sharp turns for the manual, SA/TO, and autonomous modes. The distances shown are for reference only and should be selected based on the design and requirements of the vehicle under evaluation.
Figure 12. Behavior Test Course 2 layout with details of the flag/marker placements. The distance values given should be taken only as examples and should be adjusted according to the requirements and testing plan for the vehicle undergoing evaluation.
Figure 13. Behavior Test Course 3 layout with details of the flag/marker placements. The distance values given should be taken only as examples and should be adjusted according to the requirements and testing plan for the vehicle undergoing evaluation.
Figure 14. Behavior Test Course 4 layout with details of the flag/marker placements. The distance values given should be taken only as examples and should be adjusted according to the requirements and testing plan for the vehicle undergoing evaluation.
Figure 15. Behavior Test Course 5 layout (assuming no obstacles are used).
Figure 16. Behavior Test Course 1 layouts with example objects/obstacles where the objects/obstacles are (a) static, (b) dynamic, and (c) negative. The objects/obstacles may be diverse or may be several of the same type; one or several may be used for the tests.
Figure 17. Behavior Test Course 2 layouts with example objects/obstacles where the objects/obstacles are (a) static, (b) dynamic, and (c) negative. The objects/obstacles may be diverse or may be several of the same type; one or several may be used for the tests.
Figure 18. Behavior Test Course 3 layouts with example objects/obstacles where the objects/obstacles are (a) static, (b) dynamic, and (c) negative. The objects/obstacles may be diverse or may be several of the same type; one or several may be used for the tests.
Figure 19. Behavior Test Course 4 layouts with example objects/obstacles where the objects/obstacles are (a) static, (b) dynamic, and (c) negative. The objects/obstacles may be diverse or may be several of the same type; one or several may be used for the tests.
Figure 20. Example terrain complexity for (a) paved asphalt, (b) flat turf, (c) dry dirt road, (d) sand [56], (e) gravel road, (f) tall grass, (g) forest, and (h) an urban environment.
Table 1. Example behavior testing requirements table for manual operation (MO), semi-autonomous and tele-operation (SA/TO), and autonomous operation (AO). Requirements are system-dependent and the designation of requirements and goals should be developed by the system stakeholders on a project-by-project basis. Blank cells (marked "-") indicate requirements not relevant or not included in the function or testing of the system under development.

Example System Behavior for Evaluation | MO | SA/TO | AO
Basic navigation | Required | Required | Goal
Route planning | - | Goal | Goal
Status communication | Required | Required | Required
Observe and report near-vehicle status | - | Goal | Required
Detect and evaluate the state of terrain features | - | - | Required
Detect and report nearby human activity | Goal | - | Goal
Search area | - | Required | Goal
Avoid areas | Required | Required | Required
Avoid and negotiate static and dynamic obstacles | - | Goal | Required
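For test planning, a matrix like Table 1 can be encoded so that scripts can query which behaviors must be verified in a given mode. The dictionary layout below is a minimal sketch in our own convention, not a prescribed schema; None stands for the table's "-" (not relevant) cells.

```python
# Sketch: Table 1 requirement matrix as a queryable structure.
# The layout and names are our own convention, not part of the plan.
REQUIREMENTS = {
    # behavior: (MO, SA/TO, AO)
    "Basic navigation":                             ("Required", "Required", "Goal"),
    "Route planning":                               (None,       "Goal",     "Goal"),
    "Status communication":                         ("Required", "Required", "Required"),
    "Observe and report near-vehicle status":       (None,       "Goal",     "Required"),
    "Detect and evaluate terrain features":         (None,       None,       "Required"),
    "Detect and report nearby human activity":      ("Goal",     None,       "Goal"),
    "Search area":                                  (None,       "Required", "Goal"),
    "Avoid areas":                                  ("Required", "Required", "Required"),
    "Avoid/negotiate static and dynamic obstacles": (None,       "Goal",     "Required"),
}
MODES = ("MO", "SA/TO", "AO")

def required_behaviors(mode):
    """List the behaviors marked 'Required' for the given control mode."""
    col = MODES.index(mode)
    return [b for b, levels in REQUIREMENTS.items() if levels[col] == "Required"]

print(required_behaviors("AO"))
```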
Table 2. Example behavior modes and functions in a field robotic system.

Level | Modes and Functions
Vehicle design | Safety; Stability; Braking; Turning radius; Actuation; Boom operation; Ground clearance
Robotics | Safety; Sensors; Software; Communication; Tele-operation; Localization; Navigation; Terrain tracking; Mission accomplishment; Obstacle detection; Obstacle avoidance
Table 3. Example obstacle log table examining the GPS coordinates, nature (natural or artificial, where artificial obstacles were placed by the evaluators as part of the test course), the mode (static or dynamic), navigational interference (interfering/non-interfering), and whether the obstacle or its effect was expected or unexpected (E/UE) before the tests began.

# | Obstacle | GPS Coord | Nature | Mode | Nav Interference | E/UE
1 | Large tree | XX.X, YY.Y | Natural | Static | Interference | Expected
2 | Large rock (near ground) | XX.X, YY.Y | Natural | Static | Non-interference | Expected
3 | Basketball | XX.X, YY.Y | Artificial | Static | Non-interference | Expected
4 | Large farm animal | XX.X, YY.Y | Natural | Dynamic | Non-interference | Unexpected
5 | Area of tall grass | XX.X, YY.Y | Natural | Static | Non-interference | Unexpected
6 | Adversarial robot | XX.X, YY.Y | Artificial | Dynamic | Non-interference | Unexpected
7 | Car (parked) | XX.X, YY.Y | Artificial | Static | Interference | Expected
8 | Car (driving normally) | XX.X, YY.Y | Artificial | Dynamic | Interference | Expected
9 | Small pond | XX.X, YY.Y | Natural | Static | Non-interference | Expected
10 | Large building | XX.X, YY.Y | Natural | Static | Interference | Expected
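For reproducibility, the obstacle log can be captured in the test software rather than on paper. The following minimal sketch mirrors the columns of Table 3; the field names and CSV export are our assumptions, not a prescribed schema.

```python
# Sketch: Table 3 obstacle log as a data structure with CSV export.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ObstacleRecord:
    obstacle: str          # e.g., "Large tree"
    gps_coord: str         # "XX.X, YY.Y" latitude/longitude pair
    nature: str            # "Natural" or "Artificial" (placed by evaluators)
    mode: str              # "Static" or "Dynamic"
    nav_interference: str  # "Interference" or "Non-interference"
    expected: str          # "Expected" or "Unexpected" before the test began

def write_obstacle_log(path, records):
    """Dump the obstacle log to CSV for the post-test report."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(ObstacleRecord)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

log = [
    ObstacleRecord("Large tree", "XX.X, YY.Y", "Natural", "Static",
                   "Interference", "Expected"),
    ObstacleRecord("Adversarial robot", "XX.X, YY.Y", "Artificial", "Dynamic",
                   "Non-interference", "Unexpected"),
]
write_obstacle_log("obstacle_log.csv", log)
```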
Table 4. Major top-level functions and example important system parameters to be evaluated or adjusted during testing.

Function | Example Qualitative Parameters
Safety:
1. Stopping distance relative to obstacle detection range, braking symmetry, and operation mode
2. Control stop and remote kill switch delay
3. Mode of operation of the vehicle once remotely stopped
4. Maximum acceleration and deceleration profiles
5. Range of obstacle detection and location of blind spots
Communication:
1. Effect of distance and communication loss from the user interface on the system behaviors under manual, SA/TO, and autonomous navigation
2. Bandwidth limitations throughout the range of operation
3. Effects of varying complex terrain (foliage, etc.) and weather
4. Line of sight requirements and communication range
Manual control:
1. Operator controllability
2. Time lag of signal (phase lag) from the operator to the vehicle
3. Communication range and the effect of radio transitions
4. Operator use and control of vehicle stability issues at high speed, including roll-over prevention
5. Handling quality of the vehicle under manual control
6. Accuracy of the vehicle position and path under manual control
7. Time to accomplish mission
8. Steering system symmetry and controllability
9. The effect of vehicle speed on handling
10. Obstacle detection and behavior arbitration
11. Time to accomplish the mission compared to scenario with no obstacles
12. Actuator Fatigue—number of actuations required to accomplish the mission
13. The frequency and amplitude of steering actuation signals
14. Brake—when on or off, time and side engaged
15. Speed—frequency, direction and amplitude of velocity actuation signals
SA/TO:
1. Operator controllability in cooperation with the vehicle system
2. Time lag of signal from the operator to the vehicle and the control system
3. Communication range and the effect of radio transitions
4. Operator/controller handling of vehicle stability issues at high speed, including roll-over prevention
5. Handling quality of the vehicle under SA/TO
6. Accuracy of the vehicle position and path under SA/TO
7. Time to accomplish mission
8. Steering system symmetry and controllability
9. The effect of vehicle speed on handling and control
10. Obstacle detection and negotiation
11. Time to accomplish the mission compared to scenario with no obstacles
12. Actuator Fatigue—number of actuations required to accomplish the mission
13. The frequency and amplitude of steering actuation signals
14. Brake—when on or off, time and side engaged
15. Speed—frequency, direction and amplitude of velocity actuation signals
Autonomous navigation:
1. Controllability—how well the operator or artificial intelligence can control the vehicle to accomplish the mission
2. Communication range and radio transition—the effects on autonomous navigation when the vehicle is at maximum communication range and when transitioning between radios, waypoints, or GPS-rich areas
3. Operator/controller handling of vehicle stability issues at high speed, including roll-over prevention
4. Handling—cornering ability of the vehicle under navigation controls
5. Accuracy—how well the vehicle follows a path under an autonomous mode
6. Time to accomplish mission
7. Effects of localization sensor loss—Effects of the loss of use of sensors used to aid in localization
8. Effects of discontinuities in path plan
9. Settling time and distance—the amount of time and distance it takes the vehicle to resume adequate path tracking once a landmark has been attained or a path tracking segment has been completed
10. Distance error—how far the vehicle has departed from a path segment or the line/arc connecting waypoints or other landmarks (physical or GPS coordinates); a computation sketch for the distance and heading errors follows this table
11. Maximum distance error—absolute value of the maximum distance error the system experienced during the evaluation course
12. Heading error—how far the direction of the vehicle has differed from the direction of the corresponding path segment or the line/arc between waypoints or other physical or digital landmarks
13. Maximum heading error—absolute value of the maximum heading error the system experienced during the evaluation course
14. Minimum path curvature—the minimum radius or change in direction of waypoint segments that the vehicle can adequately track
15. Overshoot/undershoot—the distance the vehicle goes beyond the next waypoint or path segment when transitioning through path segments or between other landmarks
16. Phase lead/lag—the degree the vehicle compensates or fails to compensate for future/current changes in segment or arc reference
17. Steering system symmetry
18. Obstacle detection and behavior arbitration—the effects on system behavior due to the presence of obstacles around the platform or when other behaviors are in conflict
19. Time—time to accomplish the mission compared to scenario with no obstacles
20. Actuator fatigue—number of actuations required to accomplish the mission
21. Steering—frequency and amplitude of steering actuation signals
22. Brake—frequency and amplitude of brake actuation signals
23. Throttle—frequency and amplitude of throttle actuation signals
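The distance and heading error metrics (items 10 through 13 under autonomous navigation above) can be computed directly from run logs. The sketch below is a minimal illustration, assuming GPS fixes have been projected into a local planar frame in meters and that headings are measured counterclockwise from the +x axis of that frame; none of the names or values come from the original plan.

```python
# Sketch: distance (cross-track) and heading error against a straight segment.
import math

def cross_track_error(px, py, ax, ay, bx, by):
    """Signed perpendicular distance from point (px, py) to the segment A->B.

    Coordinates are assumed to be in a local planar frame (meters); positive
    values lie to the left of the direction of travel.
    """
    dx, dy = bx - ax, by - ay
    seg_len = math.hypot(dx, dy)
    return (dx * (py - ay) - dy * (px - ax)) / seg_len

def heading_error(heading_deg, ax, ay, bx, by):
    """Smallest signed difference between vehicle heading and segment direction,
    wrapped to [-180, 180) degrees."""
    seg_dir = math.degrees(math.atan2(by - ay, bx - ax))
    return (heading_deg - seg_dir + 180.0) % 360.0 - 180.0

# Example run log: (x, y, heading) samples against a 50 m straight segment.
segment = (0.0, 0.0, 50.0, 0.0)
fixes = [(10.0, 0.4, 2.0), (25.0, -0.7, -3.5), (40.0, 0.2, 1.0)]
d_errs = [cross_track_error(x, y, *segment) for x, y, _ in fixes]
h_errs = [heading_error(h, *segment) for _, _, h in fixes]
print(f"max |distance error| = {max(abs(e) for e in d_errs):.2f} m")
print(f"max |heading error|  = {max(abs(e) for e in h_errs):.2f} deg")
```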
Table 5. Safety Evaluation 1 setup table.

Test: Safety Evaluation 1
Purpose: The purpose of this evaluation is to determine the braking response of the vehicle to an immediate stop command (i.e., activation of the remote kill switch). The response characteristics should be evaluated from the stopping distance and deceleration profile based on the obstacle detection range, braking symmetry, and the mode of operation of the vehicle once stopped. Additionally, the maximum acceleration and deceleration profiles can be obtained. The course will focus on the SA/TO and autonomous modes; manual operation does not need to be tested, as a remote kill switch is not required when the operator is in complete control of the system during operation.
Requirements:
1. The vehicle shall effectively stop within a specified distance (e.g., 2 m) at design speed, as well as within a specified distance (e.g., 3 m) at the maximum safe speed of the vehicle (a worked stopping-distance check follows this table)
2. The intelligent (autonomous) vehicle shall be in a stable mode once a complete e-stop is engaged (i.e., 100% brake actuation). The stable stopped vehicle in an automated mode should have the brake engaged.
Variables:
1. GPS coordinates of obstacle and vehicle
2. Vehicle heading
3. Vehicle speed
Specifications:
1. Vehicle will be accelerated to design speed and then to maximum rated speed for each mode of operation
2. The course will be run in the following sequence: tele-operation, then autonomous navigation
3. The course should be laid out as demonstrated in Figure 8; the distances for each phase of the evaluation are adjustable based on the design of the system under evaluation
4. Markers shall be used to mark the course and the points where the brakes are applied via e-stop and control stop
5. The course is predicted to take approximately 10 min to set up and 10 min to perform one evaluation
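Before running the SE1 course, the stopping requirement can be sanity-checked with a constant-deceleration model that combines command latency with the braking phase. This is our back-of-envelope sketch; all speeds, decelerations, delays, and limits are placeholders to be replaced with the system's measured values.

```python
# Sketch: worst-case stop distance from kill-switch activation, modeled as
# distance travelled during the command delay plus a constant-deceleration
# braking phase. All numbers below are illustrative placeholders.
def stopping_distance(speed_mps, decel_mps2, command_delay_s):
    """Delay distance plus braking distance v^2 / (2a)."""
    return speed_mps * command_delay_s + speed_mps ** 2 / (2.0 * decel_mps2)

design_speed = 2.0    # m/s, example design speed
max_safe_speed = 3.0  # m/s, example maximum rated speed
decel = 2.5           # m/s^2, measured braking deceleration
delay = 0.15          # s, measured e-stop command latency

for speed, limit in [(design_speed, 2.0), (max_safe_speed, 3.0)]:
    d = stopping_distance(speed, decel, delay)
    verdict = "PASS" if d <= limit else "FAIL"
    print(f"v = {speed:.1f} m/s: stop in {d:.2f} m (limit {limit:.1f} m) -> {verdict}")
```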
Table 6. Safety Evaluation 2 setup table.

Test: Safety Evaluation 2
Purpose: The purpose of this course is to evaluate the range at which an autonomous vehicle can be detected by an observer, including directions from which its warning signals may not be detected. It is assumed that this test will not be needed for manual (and most SA/TO) modes, as the system will be in continuous communication with the operator and its location will always be known in real time.
Requirements: The intelligent vehicle shall be capable of being detected by an observer at a minimum radius established by the stakeholders; this distance will depend on the terrain complexity and expected weather conditions. Warning signals shall be both audio and visual.
Variables: Distance from the vehicle at which warning signals are detectable (visual, audio)
Specifications:
1. Vehicle will be stationary
2. The measurements should be taken from the front, sides, and rear of the vehicle
3. Predicted time for one test will be 5 min to set up and 10 min to run the test
Table 7. Safety Evaluation 3 setup table.

Test: Safety Evaluation 3
Purpose: The purpose of this course is to evaluate obstacle detection for an autonomous vehicle, including areas where objects may not be detected
Requirements: The intelligent vehicle shall be capable of detecting objects/obstacles of a set minimum size and height above the ground around the system via instruments (camera, LIDAR, etc.) at a specified minimum distance from the vehicle
Variables: Distance and direction from the vehicle at which obstacles are detected
Specifications:
1. Vehicle will be stationary
2. The measurements should be taken from the front, sides, and rear of the vehicle (in terms of R_1, …, R_n as demonstrated in Figure 9)
3. A test will be required for each of the detection methods used; Figure 9b shows example expected patterns for a camera and LIDAR. The exact patterns will vary depending on the instruments and vehicle design, but great care should be taken to ensure their range and reliability meet or exceed the requirements for the system (a coverage-scan sketch follows this table).
4. Setup time is expected to be 5 min and one evaluation is expected to require 2 h to complete
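One way to organize the Figure 9 coverage measurements is to approximate each instrument by an angular field of view and a maximum range, then scan the horizon for bearings with no coverage at the required detection distance. The sensor suite below is hypothetical and the 5-degree scan step is arbitrary.

```python
# Sketch: blind-spot scan over a hypothetical sensor suite, where each
# sensor is (center bearing deg, field of view deg, max range m).
def covers(sensor, bearing_deg, range_m):
    center, fov, max_range = sensor
    # Wrap the bearing offset into [-180, 180) before comparing to the FOV.
    offset = (bearing_deg - center + 180.0) % 360.0 - 180.0
    return abs(offset) <= fov / 2.0 and range_m <= max_range

sensors = [
    (0.0, 110.0, 40.0),    # forward camera: 110 deg FOV, 40 m
    (0.0, 360.0, 25.0),    # 2D LIDAR: full circle but only 25 m
    (180.0, 60.0, 5.0),    # rear ultrasonic: 60 deg, 5 m
]

required_range = 30.0  # m, example stakeholder detection requirement
blind = [b for b in range(0, 360, 5)
         if not any(covers(s, float(b), required_range) for s in sensors)]
print(f"uncovered bearings at {required_range} m: {blind}")
```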
Table 8. Communications Evaluation setup table.

Test: Communications Evaluation
Purpose: The purpose of this course is to determine the limitations of the communication system between the user interface and the vehicle. The effects of varying the vehicle's distance and of transitions between communication systems will be explored, as well as the effects on performance when the vehicle is positioned close to the maximum range. With the introduction of terrain complexity, the effective communication range and line-of-sight requirements can also be affected and will be examined. The evaluations will be conducted in manual, SA/TO, and autonomous navigation modes.
Requirements:
1. The vehicle shall be capable of operating its behaviors effectively throughout the complete communication range, with graceful degradation as bandwidth is lost
2. The vehicle shall exhibit appropriate functionality throughout the communication range
3. Traversing the communication transition region shall only affect the streaming video flow
4. The performance of the system in tele-operation and autonomous navigation should degrade gracefully as the line-of-sight/maximum communication range is reached
5. In the event of reaching maximum range, the vehicle should stop
Variables:
1. GPS coordinate information – vehicle and user interface
2. Vehicle mode
3. Video rate
4. Bandwidth usage requirements for operation (video, communications, etc.)
5. Lag in operator control
Specifications:
1. Vehicle will be operated in manual, SA/TO, and autonomous navigation modes in separate evaluations
2. The course will be driven by the operator from the user interface with the intent of reaching maximum communication range (a latency-probe sketch follows this table)
3. With the addition of terrain complexity, efforts should be made to determine line-of-sight limitations
4. Setup is expected to require 5 min and each evaluation is expected to require about 45 min
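The latency and bandwidth variables above can be sampled with a simple link probe while the vehicle drives toward maximum range. The sketch below assumes a UDP echo service running on the vehicle at a placeholder address (our assumption; the plan does not prescribe a protocol), and each sample would be paired with the GPS coordinates called for in the variables list.

```python
# Sketch: round-trip latency and packet-loss probe against a hypothetical
# UDP echo service assumed to run on the vehicle.
import socket, time

VEHICLE_ADDR = ("192.168.1.50", 9000)  # placeholder vehicle echo service

def probe(sock, seq, timeout_s=1.0):
    """Send one sequenced probe; return the round-trip time or None if lost."""
    payload = str(seq).encode()
    start = time.monotonic()
    sock.sendto(payload, VEHICLE_ADDR)
    sock.settimeout(timeout_s)
    try:
        data, _ = sock.recvfrom(1024)
        if data == payload:
            return time.monotonic() - start
    except socket.timeout:
        pass
    return None

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtts = [probe(sock, i) for i in range(50)]
received = [r for r in rtts if r is not None]
loss = 1.0 - len(received) / len(rtts)
if received:
    print(f"mean RTT {1000 * sum(received) / len(received):.1f} ms, loss {100 * loss:.0f}%")
else:
    print("no replies: communication range exceeded?")
```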
Table 9. Behavior Evaluation 1 setup table.

Test: Behavior Evaluation 1
Purpose: The purpose of this course is to evaluate the manual, SA/TO, and autonomous navigation behaviors while performing basic tasks, including driving in a straight line and handling accuracy over a variety of turning radii. The response characteristics will be evaluated for trajectory following accuracy, the required steering control effort, controllability, time to accomplish the mission, and level of actuation for the controllers. Additionally, the lag of the controller and the handling characteristics will be analyzed.
Requirements:
1. The time lag between the signal generated by the operator control unit and the corresponding vehicle actuation shall be within the time specified in the requirements document
2. Vehicle trajectory drift shall be below the maximum specified distance
3. The vehicle will not enter an unstable mode of operation
Variables:
1. The stakeholder-determined basic performance requirements, as described in Table 4
2. Timing of operator control input compared to timing of actuator (a measure of communication delay)
3. Vehicle speed
4. Vehicle specification information
5. Vehicle turning radius
Specifications:
1. Vehicle will be traveling at design operating speed
2. Two operators will be required for this evaluation
3. In SA/TO and autonomous navigation modes, the vehicle will be monitored and controlled by a remote operator on the user interface and locally by the second operator
4. The course should be laid out as demonstrated in Figure 11
5. The first run for all modes will be on a straight line
6. Turn-radius runs will begin at the vehicle's minimum turning radius; subsequent runs will increase the test radius incrementally, in both left and right directions, until the desired radius is attained (a radius-schedule sketch follows this table)
7. Markers should be placed 2 m on either side of the path where changes in segments occur and in the middle of arcs for SA/TO evaluations
8. Course setup is expected to require 30 min, with 10 min of run time for each evaluation
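The incremental radius schedule in specification 6 can be generated programmatically so that each run is logged against a known reference radius. A small sketch follows, with placeholder values for the minimum, step, and target radii.

```python
# Sketch: enumerate test runs from the minimum turning radius outward,
# mirrored for left and right turns. Values are illustrative.
def radius_schedule(min_radius_m, step_m, target_radius_m):
    """Yield (radius, direction) pairs from the tightest to the widest turn."""
    r = min_radius_m
    while r <= target_radius_m:
        for direction in ("left", "right"):
            yield r, direction
        r += step_m

for radius, direction in radius_schedule(2.0, 1.0, 5.0):
    print(f"run: {direction:5s} turn, radius {radius:.1f} m")
```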
Table 10. Behavior Evaluation 2 setup table.

Test: Behavior Evaluation 2
Purpose: The purpose of this course is to evaluate the responses of the manual, tele-operation, and autonomous navigation behaviors to minimum-level disturbances, covering drift, disturbance rejection, stability under maximal perturbations, tracking accuracy, the effects of phase lag present in the controller, settling time, and control system reliability. The response characteristics will be evaluated for trajectory following error and the required steering control effort.
Requirements:
1. The intelligent vehicle shall effectively traverse the course at design speed as well as at the maximum safe speed of the vehicle available under autonomous mode. Effective traverse implies a trajectory following error of less than a maximum distance specified in the requirements document and a maximum specified orientation error.
2. The vehicle will have a settling distance less than the specified distance at design speed, as well as at the maximum safe speed of the vehicle available under autonomous mode (a settling-distance computation sketch follows this table)
3. Autonomous vehicle trajectory drift shall be below the maximum distance specified in the requirements document
4. The vehicle will not enter an unstable mode of operation
Variables:
1. The stakeholder-determined basic performance requirements, as described in Table 4
2. GPS coordinates of path and vehicle
3. Vehicle heading
4. Reference segments for controller response
5. All planned trajectory information in GPS coordinates
6. Actuator command signal from operator or high-level control
7. Actuator position/angle feedback
8. Actuator lag
9. Vehicle speed
10. Vehicle specification information
11. Turning radius
Specifications:
1. Vehicle will be traveling at standard operating speed
2. Two operators will be required for this evaluation
3. In manual, SA/TO, and autonomous modes, the vehicle will be controlled and/or monitored by a remote operator through a user interface and locally by the vehicle supervisor
4. The course should be laid out as demonstrated in Figure 12, where t represents the vehicle-specific minimum turning radius
5. Markers should be placed 2 m on either side of the path where changes in segments occur to help guide navigation
6. Expected run time for each case will be 30 min, with one hour expected for setting up the course
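Requirement 2 can be checked from the logged cross-track errors after each segment transition. The sketch below illustrates one possible operational definition, ours rather than the authors': the settling distance is the first along-track distance beyond which the error remains inside a tolerance band.

```python
# Sketch: settling distance from (along_track_m, cross_track_error_m)
# samples logged after a segment transition. Tolerance and log values
# are illustrative placeholders.
def settling_distance(samples, tol_m):
    """Return the first along-track distance beyond which |error| <= tol_m
    for every remaining sample, or None if the vehicle never settles."""
    for i, (s, _) in enumerate(samples):
        if all(abs(e) <= tol_m for _, e in samples[i:]):
            return s
    return None

log = [(0.0, 0.9), (1.0, 0.6), (2.0, 0.35), (3.0, 0.15), (4.0, 0.1), (5.0, 0.12)]
print(f"settling distance: {settling_distance(log, 0.2)} m")  # -> 3.0 m
```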
Table 11. Behavior Evaluation 3 setup table.

Test: Behavior Evaluation 3
Purpose: The purpose of this course is to evaluate the responses of the manual, SA/TO, and autonomous navigation behaviors to maximum disturbances, to identify handling issues, and to assess the effects of phase lag present in the controller. The response characteristics will be evaluated for the required steering control effort from the operator.
Requirements:
1. The intelligent vehicle shall effectively traverse the course at design speed as well as at the maximum safe speed of the vehicle available under autonomous mode. Effective traverse implies trajectory following and orientation errors of less than the maximum values specified in the system requirements with minimal high-frequency oscillation.
2. The vehicle will not enter an unstable mode of operation
Variables:
1. The stakeholder-determined basic performance requirements, as described in Table 4
2. GPS coordinates of path and vehicle
3. Vehicle heading
4. Reference segments for controller response
5. All planned trajectory information in GPS coordinates
6. Actuator command signal from operator or high-level control
7. Actuator position/angle feedback
8. Timing of operator control input compared to timing of actuators
9. Vehicle specification information
10. Vehicle speed
11. Turning radius
Specifications:
1. Vehicle will be traveling at standard operating speed
2. Two people will be required for this evaluation
3. In tele-operation and autonomous navigation modes, the vehicle will be controlled by a remote operator on the user interface
4. The course should be laid out as demonstrated in Figure 13, where t represents the vehicle-specific minimum turning radius
5. Markers should be placed 2 m on either side of the path where changes in segments occur to help guide tele-operation
6. Time investment for the test is anticipated to be one hour for the course setup and a half hour for each evaluation
Table 12. Behavior Evaluation 4 setup table.

Test: Behavior Evaluation 4
Purpose: The purpose of this course is to evaluate the steering response to maximum-level disturbances, disturbance rejection, stability under maximal perturbations, tracking accuracy, and the effects of phase lag present in the controller. The response characteristics will be evaluated for trajectory following error and the required steering control effort. Trajectory drift will be evaluated in terms of orientation, distance, and course shift through multiple passes.
Requirements:
1. The intelligent vehicle shall effectively traverse the course at design speed as well as at the maximum safe speed of the vehicle available under autonomous mode. Effective traverse implies trajectory following and orientation errors within the tolerances specified in the system requirements document with minimal high-frequency oscillation.
2. The vehicle will not enter an unstable mode of operation
Variables:
1. The stakeholder-determined basic performance requirements, as described in Table 4
2. GPS coordinates of path and vehicle
3. Vehicle heading
4. Reference segments for controller response
5. All planned trajectory information in GPS coordinates
6. Actuator command signal from operator or high-level control
7. Actuator position/angle feedback
8. Timing of operator control input compared to timing of actuators
9. Vehicle speed
10. Vehicle specification information
11. Turning radius
Specifications:
1. Vehicle will be traveling at standard operating speed
2. This evaluation will require two operators, one to control the vehicle remotely and one to escort it in case of problems
3. In tele-operation and autonomous modes, the vehicle will be controlled by a remote operator on the user interface
4. The course should be laid out as demonstrated in Figure 14, where t represents the vehicle-specific minimum turning radius
5. Markers should be placed 2 m on either side of the path where changes in segments occur to help guide navigation
6. Course setup is expected to take 10 min per iteration, followed by a 20 min evaluation period
Table 13. Behavior Evaluation 5 setup table.

Test: Behavior Evaluation 5
Purpose: The purpose of this course is to evaluate the steering controller response to varying disturbances in the form of varying trajectory curvature. The response characteristics will be evaluated for trajectory following error and the required steering control effort. Trajectory drift will be evaluated in terms of orientation, distance, and course shift through multiple passes.
Requirements:
1. The intelligent vehicle shall effectively traverse the course at design speed as well as at the maximum safe speed of the vehicle available under autonomous mode. Effective traverse implies a trajectory following error of less than a specified maximum distance (in meters) and a specified maximum orientation error (in degrees) under all conditions with minimal high-frequency oscillation.
2. The vehicle will not enter an unstable mode of operation
Variables:
1. The stakeholder-determined basic performance requirements, as described in Table 4
2. GPS coordinates of path and vehicle
3. Vehicle heading
4. Reference segments for controller response
5. All planned trajectory information in GPS coordinates
6. Actuator command signal from operator or high-level control
7. Actuator position/angle feedback
8. Timing of operator control input compared to timing of actuators
9. Vehicle speed
10. Vehicle specification information
11. Turning radius
Specifications:
1. Vehicle will be traveling at standard operating speed
2. Two people will be required for this evaluation
3. The vehicle will be controlled by a remote operator on the user interface
4. The course should be laid out as demonstrated in Figure 15
5. Course setup time is expected to take 15 min per iteration and testing is expected to take an additional 15–20 min
Table 14. Example terrain parameters table for a gravel road environment.

Terrain: Gravel Road
Ground slope | ϕ ≤ 20°
Gravel average size | x ≤ 20 mm
Standing water depth | d_w ≤ 45 mm
Drive-over obstacle height | h_d ≤ 50 mm
Snow depth | d_sn ≤ 30 mm
Negative obstacle depth | d_n ≤ 75 mm
Negative obstacle diameter | y ≤ 200 mm
Temperature | 0 °C ≤ T ≤ 40 °C
Relative humidity | 10% ≤ H_R ≤ 100%
Mud depth | d_m ≤ 10 mm
Sand depth | d_sd ≤ 8 mm
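For automated pre-test checks, the Table 14 envelope can be encoded in a machine-checkable form. The sketch below uses our own shorthand field names for the single-bound rows and flags measured conditions that exceed them; the temperature and humidity ranges are carried along for completeness but not checked here.

```python
# Sketch: the Table 14 gravel-road envelope as a dataclass, plus a check
# that flags measured values outside the single-bound limits.
from dataclasses import dataclass

@dataclass
class GravelRoadLimits:
    max_slope_deg: float = 20.0
    max_gravel_size_mm: float = 20.0
    max_water_depth_mm: float = 45.0
    max_obstacle_height_mm: float = 50.0
    max_snow_depth_mm: float = 30.0
    max_neg_obstacle_depth_mm: float = 75.0
    max_neg_obstacle_diameter_mm: float = 200.0
    temp_range_c: tuple = (0.0, 40.0)        # not checked below
    humidity_range_pct: tuple = (10.0, 100.0)  # not checked below
    max_mud_depth_mm: float = 10.0
    max_sand_depth_mm: float = 8.0

def out_of_envelope(limits, measured):
    """measured maps limit names ('max_slope_deg', ...) to observed values;
    return the names whose observed value exceeds the limit."""
    return [name for name, value in measured.items()
            if value > getattr(limits, name)]

obs = {"max_slope_deg": 12.0, "max_water_depth_mm": 60.0}
print(out_of_envelope(GravelRoadLimits(), obs))  # -> ['max_water_depth_mm']
```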
