This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

The lack of trustworthy sensors makes the development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to build intelligent systems by combining reliable sensors and real-time algorithms that send proper, accurate messages to drivers. In this article, an application that detects and predicts the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and on dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track pedestrians within the detection zones of the sensors and predict their positions in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.

Trustworthy sensors are key elements in current road safety applications. In recent years, advances in information technologies have led to more intelligent and complex applications which are able to deal with a large variety of situations. These new applications are known as ADAS (Advanced Driver Assistance Systems). In order to provide reliable ADAS applications, one of the principal tasks involved is obstacle detection, especially for those obstacles that represent the most vulnerable road users: pedestrians. Classified by mode of transportation, pedestrians represent the largest group of traffic fatalities, accounting for 41% of all victims. It is well known that human errors are the cause of most traffic accidents. The two main errors are drivers’ inattention and wrong driving decisions. Governments are trying to reduce accidents with infrastructure improvements and educational campaigns, but accidents cannot be completely eliminated due to the human factor. That is why ADAS can reduce the number, danger and severity of traffic accidents. Several ADAS, which nowadays are being researched for Intelligent Vehicles, are based on Artificial Intelligence and Robotics technologies.

On-board perception systems are essential to estimate the degree of safety in a given situation and to allow the control system to make a suitable decision. Traffic safety research, developed around the world, shows that it is not possible to use only one sensor to get all relevant information from the road environment, making data fusion from different kinds of sensors necessary.

In this article a novel fusion method is proposed. The method combines the information provided by a 2D laser range finder and a stereo camera to detect pedestrians in urban environments. By combining both sensors, the limitations inherent to each one can be overcome. Laser range sensors provide a reliable distance to the closest obstacles, thus giving trustworthy information about the surroundings, but this information is limited by the low amount of data provided by the device and by occlusions. With this lack of information, estimating the type of obstacle found in a road environment is a tough task. On the other hand, the data provided by computer vision systems contain more information but are less structured. This information can be very useful when trying to estimate the type of obstacle.

The objectives that are addressed are:

Identification of pedestrians and tracking of their trajectories. The focus is to detect the objects in the environment, classify the pedestrians, track them by modeling their trajectories, and identify possible collisions.

Installation of an intelligent system in the vehicle that warns the driver of potential dangers.

The tools that are going to be used are:

The sensors that allow for the acquisition of data from the environment.

Statistical inference or decision making to perform a probability calculation on the prediction of the trajectories.

Algorithms that will match the measurements and the predictions so as to classify the objects and determine their exact location, and send alarms in case the object is too close to the vehicle.

Statistics show that more than half of the accidents resulting in fatalities happen in urban environments, in other words, where the vehicle’s active safety systems, e.g., ABS, ESP, have less influence. Because of that, ADAS for front-side collisions, pedestrian run-over prevention or automatic emergency braking are attracting increasing interest, as are systems aiming to protect the most vulnerable users of these infrastructures such as pedestrians, cyclists,

Sensor data fusion [

There are some constraints related to perception systems design that have to do with coverage, precision, uncertainty,

Another aspect to consider is sensorial imprecision, inherent to the nature of sensors. The measurements obtained by each sensor are limited by the precision of the sensor used. The higher the number of sensors, the higher the level of precision achieved in data fusion [

A further problem arises when designing perception systems for Intelligent Transportation Systems: uncertainty, which depends on the object observed rather than on the sensor. It is produced when special characteristics (such as occlusions) appear, when the sensor is not able to measure all attributes relevant to perception, or when the observation is ambiguous [

Fusion methods are typically divided into two types according to the level at which fusion is performed: low level fusion, also called centralized fusion schemes [

Low level fusion combines information from both sensors creating a new set of data with more information, but problems related to data association arise. Low level approaches that take advantage of statistical knowledge [

High level fusion schemes allow fusion in an easier and more scalable way; new sensors can be added more easily, but with less information available for classification. They can be divided into track-based and cell-based fusion schemes. The former tries to associate the different objects found by each sensor [

Other works related to fusion schemes take advantage of laser scanner trustworthiness to select regions of interest where vision-based systems try to detect pedestrians [

IvvI (Intelligent Vehicle based on Visual Information,

Research results are being currently implemented in a Nissan Note (

The laser and the stereo vision system are the input sensors to the fusion based tracking algorithm, which is the main line of research of this article. The algorithm provides information about the environment to the driver. This is done through a monitor (

The intelligent tracking algorithm (

The individual-sensor data streams are jointly fused after proper calibration and synchronization. Every sensor must perform its measurement at exactly the same moment in time so that a composite fused measurement can be obtained. The fusion step also has to account for the absence of measurements from any or all of the devices, so that the trajectories and the stability of the movements over time can be statistically tracked. Raw data is recorded at time t in multiple dimensions [in this case, two: (x_{t}, y_{t})]; the data is then converted into the movements that the pedestrian has performed over a time increment l: (Δ^{l}x_{t}, Δ^{l}y_{t}).

Two sets of statistical inference procedures are to be performed. The first procedure is the analysis of the stability of the displacements, that is, the analysis of the consistency or the homogeneity of the current measurement with the previous movements of the same time increment. The stability hypothesis is usually tested using confidence intervals or validation gates in one dimension and simultaneous confidence regions in multiple dimensions [

The second procedure is the prediction of the motion of the pedestrian or his/her location at a particular future time. Based on the current location and the stable movements for different time increments, it is possible to set confidence regions for the location at future times t+l [

The matching algorithm then confronts the fused stable measurements for the different time increments with all the location predictions made at previous moments in time. If the measurements fall within the validation gates, that is, if the proper multiple matches occur, the known pedestrians continue to be tracked. If no match is achieved, new pedestrians may become available for tracking.

After each successful classification or tracking stage, the predictions must be updated, because changes in directions or velocity may very likely occur. By performing moving predictions, that is, taking into account only measurements for past short time intervals, these changes will not negatively affect the predictions and ruin the tracking of the proper trajectories.

What follows is a detailed explanation of each of the stages of the algorithm. Section 5 explains the laser subsystem including its detection and classification stages. Section 6 details the computer vision system and its pedestrian identification step. Section 7 is then used to address the intelligent tracking algorithm, which is tested in Section 8 with real experiments in an urban outdoor environment. Section 9 is finally used to present the conclusions and future work.

The aim of the laser subsystem is to detect pedestrians based on the data received from the laser scanner device. The laser, a SICK LMS 291-S05, has a measurement range of 80 meters and a frequency of up to 19 frames per second. The detection process is composed of two stages. In the first stage, the data is received and the obstacles’ shapes are estimated. A second stage performs obstacle classification according to the previously estimated shape. In the present research, pedestrian classification is performed by searching for specific patterns related to leg positions.

The laser scanner provides a fixed number of points representing the distance to the obstacles at each angle, measured from the coordinate origin situated in the bumper of the vehicle. The measurements are taken by a single laser that performs a 180° sweep, so there is a time difference between consecutive distance measurements. Due to the vehicle movement and the laser scanner rotation, a time-dependent variation is included in the measurements; therefore vehicle egomotion correction is mandatory before processing the data. This is done thanks to an on-board GPS-IMU system. The resulting points are joined according to the Euclidean distance among them (

After the clustering algorithm, polylines are created, which join the points contained within segments. These polylines give information about shape and distance of the obstacle to the vehicle.
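The distance-based grouping of scan points described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the 0.3 m gap threshold is an assumed value chosen only for the example.

```python
import math

def cluster_scan(points, max_gap=0.3):
    """Group consecutive laser points into segments whenever the Euclidean
    distance between neighbours stays below max_gap (metres).
    `points` is a list of (x, y) tuples in vehicle coordinates, ordered by
    scan angle. The threshold 0.3 m is an illustrative assumption."""
    clusters = []
    current = [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) <= max_gap:
            current.append(q)       # same obstacle: extend current segment
        else:
            clusters.append(current)  # gap found: close segment, start new one
            current = [q]
    clusters.append(current)
    return clusters

scan = [(0.0, 2.0), (0.1, 2.0), (0.2, 2.1), (1.5, 3.0), (1.6, 3.0)]
print(len(cluster_scan(scan)))  # 2 segments: the near points and the far pair
```

Each resulting cluster would then be turned into a polyline by joining its points in scan order.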

Classification is performed according to obstacles’ shape [

Observation showed that most of the patterns share a common feature: two distinct 90-degree angles. This pattern was checked under different conditions and movements, including tests with standing pedestrians facing the laser and pedestrians standing sideways. For sideways-standing pedestrians, the tests showed that the pattern given by the laser still includes the two mentioned angles, since it captures the whole shape of a leg. Taking advantage of this behavior, a static model was created.

The process followed to match the found pattern, including rotation, consists of a first segmentation according to the obstacle’s size and a final matching based on the polylines’ shape. Segmentation computes the size of the polyline and checks that the detected obstacle has a size proportional to a human being. An obstacle that fulfills the size requirements is marked as a pedestrian candidate. An additional stage compares it with the model. The comparison stage links every two consecutive angles (

A similarity score is computed for each of the two angles separately by comparing its value with the ideal

If more than three polylines are present, the algorithm is applied to every pair of consecutive angles, and the pair with the highest values is chosen as the polyline similarity value. A threshold is then used to classify the obstacle as a pedestrian.
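A hypothetical sketch of this leg-pattern scoring follows. The paper does not give its exact scoring formula, so the linear fall-off with a 45° tolerance is an assumption made purely for illustration; only the idea of comparing consecutive polyline angles to the ideal 90° pattern is taken from the text.

```python
import math

def angle(p0, p1, p2):
    """Interior angle (degrees) at vertex p1 between segments p1->p0 and p1->p2."""
    a = (p0[0] - p1[0], p0[1] - p1[1])
    b = (p2[0] - p1[0], p2[1] - p1[1])
    cos_t = (a[0]*b[0] + a[1]*b[1]) / (math.hypot(*a) * math.hypot(*b))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def leg_pattern_score(polyline, ideal=90.0, tol=45.0):
    """Score every pair of consecutive polyline angles against the ideal
    two-90-degree leg pattern and return the best pair score in [0, 1].
    The linear penalty and tol=45 degrees are illustrative assumptions."""
    angles = [angle(polyline[i-1], polyline[i], polyline[i+1])
              for i in range(1, len(polyline) - 1)]
    best = 0.0
    for a1, a2 in zip(angles, angles[1:]):
        s1 = max(0.0, 1.0 - abs(a1 - ideal) / tol)
        s2 = max(0.0, 1.0 - abs(a2 - ideal) / tol)
        best = max(best, (s1 + s2) / 2)
    return best

# A polyline with two perfect right angles scores 1.0:
print(leg_pattern_score([(0, 0), (0, 1), (1, 1), (1, 0)]))
```

The final classification would then compare this score against the threshold mentioned above.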

The purpose of this subsystem is also to detect and classify pedestrians; its detection range is 30 meters with a frequency of up to 10 frames per second. Depth information in computer vision requires two cameras: in the IvvI, a stereo Bumblebee camera by Pointgrey is used. This system automatically performs the necessary rectification step [

Once the two images are correctly rectified, our proposed algorithm develops the dense disparity map and “u-v disparity” [

The disparity map represents the depth W of every image point. The depth is computed from the disparity d = u_{L} − u_{R} as W = (b · f)/d, where b is the baseline between the cameras, f is the focal length, and (u_{R},v_{R}) and (u_{L},v_{L}) are the projections onto the camera planes of the right and left cameras, respectively, of the point P = (U,V,W)^{T} of the world.

For this calculation to be valid, the two image planes must be coplanar, their optical axes must be parallel and their intrinsic parameters must be the same. It is therefore necessary to find the correspondence between points of the right and left images to determine the disparity d = u_{L} − u_{R} (known as the stereo matching problem).
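Assuming rectified images, the depth relation can be sketched as a one-line computation. The baseline and focal length below are illustrative placeholder values, not the Bumblebee's actual calibration.

```python
def stereo_depth(u_left, u_right, baseline_m=0.12, focal_px=800.0):
    """Depth W from disparity d = u_L - u_R for a rectified stereo pair:
    W = b * f / d. The baseline (0.12 m) and focal length (800 px) are
    hypothetical calibration values used only for this example."""
    d = u_left - u_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return baseline_m * focal_px / d

print(stereo_depth(420, 404))  # d = 16 px -> W = 0.12 * 800 / 16 = 6.0 m
```

Note the inverse relation: halving the disparity doubles the estimated depth, which is why depth precision degrades with distance.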

There are several possible solutions to this stereo matching problem in order to obtain the dense disparity map. Our algorithm follows the taxonomy presented by Scharstein and Szeliski in [

Matching cost computation: Although there are more accurate cost functions [

Cost (support) aggregation: There are different kinds of support regions, and their choice influences the resulting disparity map [

Disparity computation: There are mainly two methods for disparity computation: local [

Disparity refinement: This step tries to reduce the possible errors in the disparity map, which are usually produced in areas where texture does not exist, in areas near depth discontinuity boundaries [

Once the disparity map has been generated, it is possible to obtain the “u-v disparity”. As there is a univocal relationship between disparity and distance, the v-disparity expresses the histogram over the disparity values for every image row (v coordinate), while the u-disparity does the same for every column (u coordinate). In short, the u-disparity is built by accumulating the pixels of each column with the same (u, d), and the v-disparity by accumulating the pixels of each row with the same (v, d). An example is illustrated in
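The accumulation just described can be sketched directly from an integer disparity map. This is a plain reference implementation for clarity; the paper's real-time version runs on CUDA.

```python
def uv_disparity(disp, d_max):
    """Build the u-disparity and v-disparity histograms from an integer
    disparity map given as a list of rows.
    u_disp[d][u] counts the pixels of column u having disparity d;
    v_disp[v][d] counts the pixels of row v having disparity d."""
    h, w = len(disp), len(disp[0])
    u_disp = [[0] * w for _ in range(d_max + 1)]
    v_disp = [[0] * (d_max + 1) for _ in range(h)]
    for v in range(h):
        for u in range(w):
            d = disp[v][u]
            u_disp[d][u] += 1   # accumulate per column, same (u, d)
            v_disp[v][d] += 1   # accumulate per row, same (v, d)
    return u_disp, v_disp
```

A fronto-parallel obstacle (constant d over a block of pixels) then shows up as a bright horizontal run in the u-disparity and a vertical line at its d in the v-disparity, which is what the detection step exploits.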

If it is assumed that obstacles have planar surfaces, each one appears in the u-disparity image as pixels whose intensity is the height of that obstacle. As the u-disparity image dimensions are the width of the original image and the disparity range, those pixels have the same horizontal coordinate as the obstacle in the disparity map, and their vertical coordinate is the disparity value of the obstacle. Regarding the v-disparity, as its image dimensions are the disparity range and the height of the original image, obstacles appear as vertical lines at their corresponding disparity value [

The main goal of this system is to determine the regions of interest (ROI), which will be later on used to conclude if the obstacles are pedestrians or not. In order to do that, the road profile is estimated by means of the v-disparity [

The first step is a preliminary detection over the u-disparity. This task consists of thresholding the u-disparity image to detect obstacles whose height is greater than a threshold. This way the “thresholded u-disparity” is constructed at the bottom of

In the disparity image, the subimages defined by the horizontal obstacle position and width, red squares in

Finally, a second blob analysis is performed to determine obstacles features, area and position, on the thresholded disparity map,

The obstacles’ localization in world coordinates (U, V) can be obtained, and it is a function of the image coordinates (u, v) of the contact point between the obstacles and the ground. In order to do that,

The classification divides the obstacles into two groups: pedestrians and non-pedestrians. The result of the classification algorithm is a confidence score for the obstacle being a pedestrian; it is compared with a threshold and, if it is greater, the obstacle is classified as a pedestrian. This classification is based on the similarity between the vertical projection of the silhouette and the histogram of a normal distribution.

In order to characterize the vertical projection, the standard deviation σ is computed as if the vertical projection were the histogram of a normal distribution. To make the standard deviation independent of the obstacle’s dimensions and localization, it is divided by the width of the ROI, yielding σ_{w}. This standard deviation will be used to compute the score.

Several vertical projections of pedestrians have been processed to obtain their standard deviations; these standard deviations follow a normal distribution N(μ_{σw}, σ_{σw}). To compute the score for an obstacle, its standard deviation is used to evaluate the probability density function: the maximum score, 100%, is produced when the standard deviation equals μ_{σw}, and the score decreases as the standard deviation moves away from μ_{σw} (
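The scoring step can be sketched as follows. The population parameters μ_{σw} and σ_{σw} come from the paper's training data and are not reported here, so the values below are hypothetical placeholders; the pdf is rescaled so that σ_w = μ_{σw} yields exactly 100%.

```python
import math

def pedestrian_score(sigma_w, mu=0.21, sigma=0.05):
    """Confidence score in (0, 100] for an obstacle, from the width-normalised
    standard deviation sigma_w of its vertical projection.
    mu and sigma are HYPOTHETICAL training-set parameters (placeholders).
    The normal pdf is rescaled so the score is 100 at sigma_w == mu and
    decreases as sigma_w moves away from mu."""
    z = (sigma_w - mu) / sigma
    return 100.0 * math.exp(-0.5 * z * z)

print(pedestrian_score(0.21))  # 100.0 exactly at the mean
```

An obstacle would be classified as a pedestrian when this score exceeds the threshold mentioned above.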

Once the sensors and their corresponding algorithms have taken measurements individually and processed them in order to identify and classify objects as pedestrians, it is necessary to provide the intelligent system with tools that track the pedestrians and alert the driver to possible imminent collisions. In this section statistical models are developed to robustly infer the possible routes based on the current position as well as near past locations.

Inference is the part of statistics that relates sample information with probability theory in order to estimate or predict the value of one or several unknown parameters, or to compare two hypotheses about their value. The hypotheses are called null (the one statistically tested according to a pre-specified confidence level γ) and alternative (the one chosen whenever the null is rejected).

In individual hypothesis testing about a single parameter φ, an observed value x is compared against a threshold value that results from applying the confidence level γ, and a decision is made to reject or not reject (accept) the null hypothesis.

It is well known, however, that two errors can occur when a decision of this kind is made about the value of one parameter: the null hypothesis is rejected when it should have been accepted (false positive or false detection, significance level = ω =1 − γ) or accepted when it should have been rejected (false negative or false standard, probability = β, testing power = 1− β).

In multiple testing (

If ω is used in each individual test, the probability of “false positive” errors increases considerably: the probability of accepting only and all null hypotheses when they are true is only (1−ω)^{O}. Global confidence is reduced, and is therefore not 1−ω but 1−Ω, where Ω is the global level of significance, much higher (worse) than the theoretically desired ω. On the other hand, when a large number of null hypotheses are rejected by this procedure, the error of failing to discover relevant alternative hypotheses is practically zero. In other words, as the relevant hypotheses are overestimated, practically all non-rejected null hypotheses are really standard. This approach therefore favours the determination of all significant and some other hypotheses (false positives, which could be numerous) as relevant, in exchange for having no false negatives.

Therefore, in order to make a correct decision for an aggregate level of significance Ω, and prevent too many false positives from occurring, the form in which individual testing is performed has to be adjusted. Individual tests have traditionally been maintained, although the level ω has been adjusted. Usually, ω is adjusted and controlled in two different ways.

The first is known as the Bonferroni correction [

The same occurs with the Sidak adjustment [
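For illustration, both per-test adjustments can be written in a few lines. These are the standard Bonferroni and Sidak formulas applied to the paper's setting; the numbers printed use the global level Ω = 5% and the M = 54 simultaneous tests computed later in the experiments.

```python
def bonferroni_alpha(global_alpha, n_tests):
    """Per-test significance level under the Bonferroni correction."""
    return global_alpha / n_tests

def sidak_alpha(global_alpha, n_tests):
    """Per-test significance level under the Sidak correction, from
    (1 - alpha)^n = 1 - global_alpha, assuming independent tests."""
    return 1.0 - (1.0 - global_alpha) ** (1.0 / n_tests)

# With Omega = 5% and M = 54 simultaneous tests:
print(bonferroni_alpha(0.05, 54))  # ~0.000926
print(sidak_alpha(0.05, 54))       # ~0.000949, slightly less conservative
```

Bonferroni always gives the smaller (more conservative) per-test level; the two converge for small Ω.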

Robust multivariate hypothesis testing involves the simultaneous comparison of sample values with thresholds. One possible way to set the thresholds is to use confidence intervals, whose main purpose is the estimation of the value of one or several unknown parameters. Confidence intervals come in the form of confidence limits or prediction thresholds. If a new sample value lies within the confidence limits, the value is said to be homogeneous with the previous values, accepting the null hypothesis of belonging to the same underlying distribution. Otherwise, the value is said to belong to a different distribution, rejecting the null and accepting the alternative.

Confidence intervals (CI) on a single unknown parameter are a means to set thresholds on its values and have the form φ ∈ [φ_{−}, φ_{+}] (or CI_{φ}), or alternatively φ ∈ [φ^{*} − k_{1}(V(φ^{*}))^{1/2}, φ^{*} + k_{2}(V(φ^{*}))^{1/2}], where φ^{*} is the point estimator of the parameter and V(φ^{*}) the variance of the estimator.

The estimation problem therefore relates to the proper identification of the distribution of the control statistic and the associated probability calculation, based on the confidence level, that results in the k_{i} values. If the distribution is not known, an upper threshold on the value of the k_{i} can still be readily calculated using Chebishev’s inequality, P(|φ^{*} − φ| ≥ kσ) ≤ 1/k^{2}.
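A short worked example of this distribution-free bound: setting 1/k² equal to the desired tail probability α gives k = 1/√α, the number of standard deviations a validation gate must span regardless of the underlying distribution.

```python
import math

def chebyshev_k(alpha):
    """Smallest k guaranteeing P(|X - mu| >= k*sigma) <= alpha for ANY
    distribution, from Chebishev's inequality 1/k**2 <= alpha."""
    return 1.0 / math.sqrt(alpha)

# A distribution-free 95% gate needs about 4.47 standard deviations,
# much wider than the 1.96 of the normal case:
print(chebyshev_k(0.05))
```

This width is the price paid for making no normality assumption, which is exactly why the paper pairs Chebishev bounds with the Sidak-adjusted per-test levels.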

However, in most real situations, there are more than one variable or parameter involved in the decision making or inference process. In other words, confidence intervals are to be set for several parameters, which are usually correlated.

The first possibility is to set individual and independent confidence intervals for each and every variable, but adjusting the confidence percentage to account for the multivariate situation according to Bonferroni’s or Sidak’s corrections, as well as setting bounds using Chebishev’s inequality. The corresponding confidence region comes in the form of a rectangle in two dimensions, a prism in three dimensions and a polyhedron in more dimensions [

However, it has been shown that if the variables are correlated, the false negative rate is very high, since the response area covered by the combination of individual CI’s is much larger than what it should be. The corresponding area should come in the form of an ellipse in two dimensions and an ellipsoid in higher dimensions.

The equation of the ellipsoid, or the ellipse in two dimensions, in terms of the Mahalanobis standardized distances of each point to the center of the ellipse, E, is as follows [
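As a minimal sketch of such a region test, a point can be checked against an ellipse by its squared Mahalanobis distance to the center. The inverse-covariance matrix and threshold below are inputs the caller must supply (the threshold would come from the chosen confidence level); this is an illustration, not the paper's exact formulation.

```python
def in_confidence_ellipse(point, center, cov_inv, threshold):
    """True when the 2D point lies inside the confidence ellipse E, i.e. when
    the squared Mahalanobis distance (p - c)^T S^{-1} (p - c) <= threshold.
    cov_inv is the symmetric inverse covariance ((a, b), (b, d));
    the threshold depends on the confidence level chosen by the caller."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    (a, b), (_, d) = cov_inv
    m2 = a * dx * dx + 2.0 * b * dx * dy + d * dy * dy
    return m2 <= threshold

# Unit circle case: identity inverse covariance, threshold 1.
identity = ((1.0, 0.0), (0.0, 1.0))
print(in_confidence_ellipse((1.0, 0.0), (0.0, 0.0), identity, 1.0))  # True
```

With a non-diagonal inverse covariance the same test traces a tilted ellipse, which is what makes it tighter than the rectangle built from per-axis intervals.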

The multivariate statistical models are now particularized in this article to the movement of pedestrians. The data is measured at each time t individually by the sensors: (x_{s,t,i}, y_{s,t,i}), where s = 1, …, S indexes the sensors, s = f denotes the fused values, and there are as many measurements as objects i = 1, …, I detected.

The values in absolute units are also transformed into movements or displacements Δ^{l}x_{s,t,i} and Δ^{l}y_{s,t,i}, where l = 1, …, L indexes the time interval used to calculate the displacements: Δ^{l}x_{s,t,i} = x_{s,t,i} − x_{s,t−l,i} and Δ^{l}y_{s,t,i} = y_{s,t,i} − y_{s,t−l,i}.
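In code, the transformation from absolute positions to lag-l displacements is a one-liner over a track of (x, y) samples:

```python
def displacements(track, l):
    """Delta^l moves from a track of (x, y) positions:
    (x_t - x_{t-l}, y_t - y_{t-l}) for every t >= l."""
    return [(track[t][0] - track[t - l][0], track[t][1] - track[t - l][1])
            for t in range(l, len(track))]

path = [(0, 0), (1, 0), (2, 1), (3, 1)]
print(displacements(path, 1))  # [(1, 0), (1, 1), (1, 0)]
```

Working with these moves rather than raw positions is what lets the later stability checks compare like with like across time.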

The first set of models relates to the control of the stability of the displacements. For each object i, the last C values are used to calculate the averages of the moves Δ^{l}x_{s,t,i} and Δ^{l}y_{s,t,i} and their standard deviations.

The confidence intervals might readily be calculated using Chebishev’s inequality and Sidak’s corrections:

Similarly, the confidence regions or ellipsoids for the displacement pairs (Δ^{l}x_{s,t,i}, Δ^{l}y_{s,t,i}) can be calculated.

The second set of models is used to determine where the object is going to be at t + l, by simply adding the average observed move to the current position: the predicted x_{s,t+l,i} equals x_{s,t,i} plus the average Δ^{l}x_{s,t,i}, and analogously for y.
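Combining the last C displacements with the distribution-free gates gives the prediction step in a few lines. This is a sketch under the paper's stated ingredients (mean move over the last C samples, Chebishev-width gates); the exact gate construction in the paper also involves the Sidak adjustment and joint ellipsoids, which are omitted here for brevity.

```python
import math

def predict_with_gate(track, l, C=10, alpha=0.05):
    """Predict the (x, y) position at t + l by adding the mean Delta^l move
    over the last C displacements to the current position, and return a
    per-coordinate gate half-width of k = 1/sqrt(alpha) standard deviations
    (distribution-free Chebishev bound).
    Returns ((x_hat, y_hat), (gate_x, gate_y))."""
    moves = [(track[t][0] - track[t - l][0], track[t][1] - track[t - l][1])
             for t in range(l, len(track))][-C:]
    n = len(moves)
    mx = sum(m[0] for m in moves) / n
    my = sum(m[1] for m in moves) / n
    sx = math.sqrt(sum((m[0] - mx) ** 2 for m in moves) / n)
    sy = math.sqrt(sum((m[1] - my) ** 2 for m in moves) / n)
    k = 1.0 / math.sqrt(alpha)
    x, y = track[-1]
    return (x + mx, y + my), (k * sx, k * sy)

# A pedestrian walking at a steady 1 unit/frame along x:
track = [(i, 0) for i in range(12)]
print(predict_with_gate(track, 1))  # predicts (12.0, 0.0) with zero-width gates
```

A future measurement is then accepted for this track only if it falls inside the gate box (or, in the full method, the corresponding ellipse) around the predicted position.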

The total number of prediction models, M, is M = (S + 1) * I * L * 3.

The last stage in this multivariate assessment is the location of the pedestrians. After the discussion in the previous sections, the information available at each time t is:

The measurements from the sensors.

The confidence bounds or validation gates for the prediction of moves for each object and lag, for each sensor and dimension as well as for the fused data, and their combined confidence regions.

The validation gates for the prediction of location for each object and lag, for each sensor and dimension as well as for the fused data, and in combined confidence regions. The algorithm then must confront the raw data, the lagged data and the fused data with the validation gates and prediction regions so as to assign the measurements to an existing or new object. There exist several possible results:

All the validation gates and confidence regions are positively met for one of the existing objects. The measurement is assigned to that object, which continues to be a pedestrian or another object, fixed or not.

None of the validation gates or confidence regions are met. A new object is created and starts to be tracked.

If only one of the two gate sets, either the stability-of-moves gates or the position gates, is met, due to a no-read or a sudden change in direction or velocity, the measurement is assigned to the same object, which continues to be tracked.
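The three outcomes above reduce to a simple assignment rule, sketched here with hypothetical boolean inputs summarizing whether each gate set was met:

```python
def match_measurement(stability_ok, position_ok):
    """Assignment rule for a candidate measurement against one tracked object:
    both gate sets met   -> confirm the existing object;
    neither met          -> start a new track;
    exactly one met      -> tolerate a no-read or sudden manoeuvre and keep
                            the existing track."""
    if stability_ok and position_ok:
        return "assign: confirmed existing object"
    if not stability_ok and not position_ok:
        return "create: new object to track"
    return "assign: same object, continue tracking"

print(match_measurement(True, False))
```

In the full algorithm this rule is evaluated per object, per lag and per sensor, and the measurement goes to the object with the most positive matches.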

The following experiments have been carried out with the IvvI vehicle outdoors in order to evaluate the robustness and reliability of the proposed detection and tracking algorithm.

The data obtained from the sensorial system has been used to test the performance of the fusion algorithm under different real conditions: pedestrians crossing while moving in zigzag and changing speeds.

The IvvI vehicle is first set on the road to test the proposed intelligent fusion-based tracking system outdoors, where pedestrians wander following both linear and non-linear paths.

Two pedestrians move for 29.2 seconds (292 frames) in front of the vehicle following the paths included in

The parameter S, the number of sensors, is set to S = 2, as a camera and a laser are used to obtain data from the environment. The parameter I, the number of objects, is set to I = 2, as two pedestrians are being tracked. The parameter L, the number of time intervals, is set to L = 3 to allow for a quick execution of the algorithm. The parameter M, the number of simultaneous tests performed at each t, is M = (S + 1) * I * L * 3 = 54. The parameter C, the number of past data points used to calculate the trajectories and the moves, is set to C = 10, since that corresponds to the maximum frame rate of the camera. The parameter Ω, the overall significance level, is set to 5%.

Therefore:

D = k * 7/1.5 = 119.89 ≅ 120.

The crossing happens between frames 70 and 90, i.e., over 2 seconds. The information provided by the two sensor subsystems, as well as the result of the tracking algorithm, is included in

The tracking images show the ellipses corresponding to a time interval of 1 frame. At the time of the crossing, the algorithm is only able to classify one pedestrian. The result is a larger prediction region that covers both pedestrians. It also allows for the tracking of both as depicted in the figures corresponding to frames 95 and 100.

The pedestrian changes directions between frames 205 and 270 for more than 6 seconds (

The hit rate per pedestrian (

If C consecutive reads are not available, it is not possible to calculate either an average of the data or a filtered value, and thus the hit rate diminishes.

The percentage is over 85% and should increase in uncontrolled environments, where the variability of the measurements is higher. In controlled environments, the pedestrians know they are being tracked and act consistently, so the calculated standard deviation is smaller than when variability in speed and direction is more likely to occur. In fact, the hit rate is higher for the first pedestrian, who at one point follows a zigzag pattern.

An additional experiment is performed to assess the performance of the algorithm when the pedestrian is changing speeds while the vehicle is moving (

The vehicle starts detecting the pedestrian about 18 meters before the zebra crossing (

Sensors are ubiquitous in today’s world, but it is necessary to endow them with the autonomy to process the information they obtain from the environment. Our research aims at developing intelligent sensors in a demanding field like Intelligent Transportation Systems. More specifically, this article addresses the problem of identifying and classifying objects and pedestrians so as to help drivers avoid accidents.

Developments in both hardware and software are necessary to create robust and intelligent applications in Advanced Driver Assistance Systems. The sensorial fusion of a laser and a computer vision system, together with a classification algorithm, has proven successful for the tracking of pedestrians that cross and wander in zigzag in front of the vehicle.

Original algorithms have been developed for classification and tracking. A new approach to pedestrian detection based on a laser and variable models has been presented, giving an estimation of how close detections are to the ideal pattern for a pedestrian. Regarding the stereo-vision subsystem, two original contributions are worth mentioning. First, the implementation of the disparity map construction with cross-checking and of the u-v disparity using CUDA in order to obtain a real-time system. Second, a novel and fast procedure for pedestrian identification using the silhouette of the stereo image has been presented. The success of the matching procedure is based on the application of non-parametric multivariate statistics to the localization problem while tracking pedestrians. More specifically, the Sidak correction has been applied to calculate the proper multivariate significance level, the Chebishev inequality has been used to account for non-normality, and confidence regions have been calculated to determine the position of the pedestrians in the upcoming frames. Two other contributions have added to the robustness of the algorithm. The use of movements instead of raw measurements has allowed for the proper control and dimensioning of the confidence regions. The check for the stability of the measurements prior to the calculation of the predictions has also increased the hit ratio while recognizing and classifying pedestrians. All experiments have been performed in real environments using the IvvI research platform, where all the algorithms have been implemented and tested.

Improvements should be made to both the perception and the tracking systems to improve the hit rate. The classification of the obstacles detected by the stereo system can take more features into account. Once the obstacles have been detected and their size and distance to the vehicle found, methods that use several image features like [

This work was supported by the Spanish Government through the Cicyt projects VISVIA (GRANT TRA2007-67786-C02-02) and POCIMA (GRANT TRA2007-67374-C02-01).

IvvI research platform.

Tracking framework.

Environment information given by the algorithm after obstacle segmentation.

Pattern given by a pedestrian, according to leg situation. This pattern may appear rotated.

Model for pedestrian where the two angles of interest are detailed.

Two examples of pedestrians, their silhouette and their vertical projection.

Process scheme to obtain a pedestrian score.

Confidence regions in two dimensions.

Pedestrian detection by the IvvI vehicle.

Paths followed by pedestrians.

Test-related decision-making problem.

| | Accept null | Reject null |
|---|---|---|
| Null true | CORRECT DECISION | ω - False positive or relevance |
| Null false | β - False negative or standard | CORRECT DECISION |

Decision-making problem in multiple testing.


Distribution of detections per frame.

TOTAL | 292 | 292 | 292 | 192 | 192 | 192 | 292 | 292 | 292 | |
---|---|---|---|---|---|---|---|---|---|---|

0 | 66 | 131 | 34 | 31 | 148 | 27 | 0 | 36 | 0 | |

1 | 226 | 161 | 125 | 161 | 44 | 125 | 288 | 172 | 35 | |

2 | 133 | 40 | 4 | 68 | 171 | |||||

3 | 15 | 69 | ||||||||

4 | 1 | 16 | ||||||||

5 | 1 |

Hit rates per pedestrian.

| | PEDESTRIAN 1 | PEDESTRIAN 2 |
|---|---|---|
| VISION | 77.40 % | 85.64 % |
| LASER | 56.51 % | 23.40 % |
| FUSION | 88.36 % | 87.77 % |

Hit rate per model.

| | PEDESTRIAN 1 | PEDESTRIAN 2 |
|---|---|---|
| STABILITY | 95.83 % | 95.03 % |
| TRACKING | 93.51 % | 86.16 % |
| AT LEAST ONE | 98.46 % | 96.18 % |
| BOTH | 90.98 % | 85.28 % |