
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

In a biometric authentication system using protected templates, a pseudonymous identifier is the part of a protected template that can be directly compared. Each compared pair of pseudonymous identifiers results in a decision testing whether both identifiers are derived from the same biometric characteristic. Compared to an unprotected system, most existing biometric template protection methods cause some degree of degradation in biometric performance. Fusion is therefore a promising way to enhance the biometric performance of template-protected biometric systems. Compared to feature level fusion and score level fusion, decision level fusion has not only the lowest fusion complexity but also the maximum interoperability across different biometric features, template protection and recognition algorithms, template formats, and comparison score rules. However, performance improvement via decision level fusion is not guaranteed: it is influenced by both the dependency and the performance gap among the tests conducted for fusion. In this paper, we investigate several fusion scenarios (multi-sample, multi-instance, multi-sensor, multi-algorithm, and their combinations) on the binary decision level, and evaluate their biometric performance and fusion efficiency on a multi-sensor fingerprint database with 71,994 samples.

To achieve the purpose of biometric template protection, standard encryption mechanisms, such as DES, AES, …,

On the other hand, the usability of biometric-enabled authentication systems demands well-preserved biometric performance from the protected templates compared to their plain counterparts. While template protection algorithms themselves are struggling to improve recognition performance, fusion based on multi-biometrics [

The remainder of this paper is organized as follows: Section 2 presents the concepts of pseudonymous identifiers used for biometric template protection and the template protection algorithms tested in this work; Section 3 provides background information on this performance evaluation work, including the tested database, fusion scenarios, and testing settings; Section 4 presents the biometric performance testing results under different fusion scenarios; Section 5 gives a brief evaluation and analysis of the performance testing results; Section 6 concludes this work.

A reference architecture (shown in

A pseudonymous identifier (PI) derived from biometric features is defined in [

An RBR is required to have sufficient irreversibility and unlinkability [

The minutiae feature is a standardized fingerprint feature that is widely adopted by existing fingerprint recognition systems. Template protection algorithms developed for minutiae features can thus be applied to any existing minutiae-template-based fingerprint recognition system. In the TURBINE project, three fingerprint minutiae based template protection algorithms [

Algorithm 1 (Spectral minutiae based fuzzy commitment): its feature extraction was based on [

Algorithm 2 (Minutiae vicinity based distance binarization): it was based on [

Algorithm 3 (Minutiae vicinity based dynamic random projection): it was based on [

The above-mentioned three developed algorithms in the TURBINE project represent three types of ideas for biometric template protection: (1) binary secret binding and release realized by fuzzy schemes (Algorithm 1) with binary and fixed-length PI; (2) hybrid (software + hardware) scheme (Algorithm 2) with binary and varied-length PI; and (3) irreversible transformation (Algorithm 3) with binary and varied-length PI. More information including biometric performance and security analysis of these three algorithms can be found in the references given in the above algorithm descriptions and the reference [

Note that in the following sections, due to project partners' request, the above three algorithms are anonymized and denoted by the flags A1, A2, and A3 (and correspondingly the sensors by S1, S2, and S3).

Due to the challenging quality of the collected fingerprint samples in the GUC100 database, none of the above-mentioned individual algorithms proposed in TURBINE met the performance target of the project. Hence, decision level fusion was proposed as a fallback plan to improve the recognition performance, which finally achieved the goal. Decision level fusion was chosen in this case because the three algorithms have different characteristics and different score and threshold setting mechanisms, and because it is highly efficient to implement and configure in performance testing.

Although score-level fusion is preferred in many multi-modality biometrics [

Note that the argument that decision level fusion can improve the biometric performance does not always hold but strongly depends on the assumption of independence and marginal performance gap among the elements for fusion [

In the TURBINE experiments, we observed some performance gaps among sensors and algorithms, but not noticeably among samples and instances. Whether these performance gaps degrade the performance improvement achievable by fusion will be investigated in the experimental section. On the other hand, we assume there is dependence among the decisions obtained from different samples, sensors, and algorithms; the dependence could even exist among different instances from the same subject. To what degree such dependence among the elements for fusion degrades the achievable performance improvement will also be investigated. We describe the different fusion scenarios and testing settings in the following subsections.

The GUC100 fingerprint database contains fingerprint samples collected in Norway in the winter-spring season of 2008. The samples are challenging in quality due to the cold and dry Scandinavian weather conditions, even though the collection took place inside a standard office room. This database was predominantly used for testing purposes in the course of the TURBINE project and is also freely available to other researchers, provided that testing is conducted at the GUC (Gjøvik University College) premises. It is a multi-sensor fingerprint database created for independent and in-house performance and interoperability testing of third-party algorithms. The samples were collected by six different fingerprint sensors, namely the TST BiRD 3, L-1 DFR 2100, Cross Match L SCAN 100, Precise 250 MC, Lumidigm V 100 and Sagem MorphoSmart (as shown in

Decision level fusion can be done in five scenarios [

Multi-sample fusion (

Multi-algorithm fusion;

Multi-sensor fusion;

Multi-instance fusion;

Multi-sample-algorithm fusion;

Multi-sample-sensor fusion;

Multi-sample-instance fusion;

Multi-sample-instance-algorithm fusion.

For the bi- and tri-layer fusion scenarios, we use the multi-sample fusion scenario as the basic layer because multi-sample fusion is the easiest and most practically convenient scenario to realize in a typical single-sensor, single-algorithm fingerprint recognition system: one simply captures multiple samples continuously during a single finger probing while asking the subject to slightly move the finger or vary the finger's contact surface.

In our experiments, in light of testing efficiency, only binary decisions obtained from the left-index-finger samples (assumed to be of relatively good quality) of 99 subjects (excluding one subject without full records) in 12 sessions are included for the first three fusion tests (multi-sample, multi-sensor, and multi-algorithm); the right index finger is additionally used for the fourth fusion test (multi-instance). For the multi-sample fusion case, only binary decisions obtained from a certain fingerprint sensor S and a minutiae extractor selected from a certain project partner were fused, since in general the best performances were observed from this sensor/minutiae-extractor combination. For the multi-sensor fusion, the three sensors L-1 DFR 2100, Precise 250 MC, and Sagem MorphoSmart in combination with the selected minutiae extractor are adopted. In this sensor fusion scenario, only the best biometric template protection algorithm A1 was employed for testing, since in general the best performances were observed from this algorithm/minutiae-extractor combination. For both multi-algorithm fusion and multi-instance fusion, the sensor S and the selected minutiae extractor are used. To obtain the genuine scores for each subject, each sample out of the 12 samples (from 12 sessions) of the same finger was compared against the remaining 11 samples. In this way a total of 12 × 11 × 99 = 13,068 genuine decisions were obtained. Each sample out of the 12 samples of the i^{th} subject (i = 1, …, 98) was compared against one sample of each of the remaining (99 − i) subjects. Note that the last subject (No. 99) contributes only to the genuine decisions but not to the imposter decisions. In this way, Σ^{98}_{i = 1} 12 × (99 − i) = 58,212 imposter decisions were obtained.
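The genuine and imposter decision counts above follow from simple combinatorics; a short sketch (our own illustration, not project code) reproduces them:

```python
# Reproduce the genuine/imposter decision counts described above.
subjects = 99   # subjects with full records
sessions = 12   # samples per finger (one per session)

# Genuine: each of the 12 same-finger samples is compared
# against the remaining 11 samples, for every subject.
genuine = sessions * (sessions - 1) * subjects

# Imposter: each of the 12 samples of the i-th subject (i = 1..98)
# is compared against one sample of each of the (99 - i) remaining
# subjects; subject No. 99 contributes no imposter probes.
imposter = sum(sessions * (subjects - i) for i in range(1, subjects))

print(genuine, imposter)  # 13068 58212
```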

To investigate decision level fusion's baseline effectiveness on performance improvement, we test only the simplest fusion rules, AND and OR. In general, the decision OR rule may reduce the False Rejection Rate (FRR) but increase the False Acceptance Rate (FAR), whereas the decision AND rule may reduce the FAR but increase the FRR. This will be exactly true if all the biometric tests are independent [ ]: for the case using OR, FAR_{OR} = FAR_{1} + FAR_{2} − FAR_{1} · FAR_{2} and FRR_{OR} = FRR_{1} · FRR_{2}, where 1 and 2 correspond to two independent biometric tests; and for the case using AND: FAR_{AND} = FAR_{1} · FAR_{2} and FRR_{AND} = FRR_{1} + FRR_{2} − FRR_{1} · FRR_{2}.
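The independence-case formulas above can be sketched as follows (our own toy illustration; the two operation points are hypothetical values chosen in the sub-0.1%-FAR region discussed later):

```python
# Error rates of decision fusion for two independent biometric tests.
def fuse_or(far1, frr1, far2, frr2):
    far = far1 + far2 - far1 * far2  # accept if either test accepts
    frr = frr1 * frr2                # reject only if both tests reject
    return far, frr

def fuse_and(far1, frr1, far2, frr2):
    far = far1 * far2                # accept only if both tests accept
    frr = frr1 + frr2 - frr1 * frr2  # reject if either test rejects
    return far, frr

# Two hypothetical operation points with FAR below 0.1%:
far_or, frr_or = fuse_or(0.0009, 0.0647, 0.0002, 0.1860)
far_and, frr_and = fuse_and(0.0009, 0.0647, 0.0002, 0.1860)
# OR trades a slightly higher FAR for a much lower FRR; AND the opposite.
```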

In our tests, the OR-rule is adopted because in the TURBINE project a FAR = 0.1% was rigidly required to assure the security of the biometric authentication system. Under this context, the performance operation points in the Detection Error Trade-off (DET) curve before fusion are mainly distributed in the FAR range < 0.1% for all the three algorithms and thus OR is suitable to approach the performance target FAR = 0.1% while greatly reducing FRR. Although the independence condition may not be well satisfied in the fusion scenarios, we still expect the effect of reduced FRR brought by OR. The specific settings for the different fusion scenarios are given in the following sub-section.

There are both common settings shared by all the scenarios and special settings specific to each scenario.

Besides the concise presentation of all the scenario settings in the

The studied left-index-finger samples were captured by the sensor S and processed into minutiae templates by the selected minutiae extractor. Given the 11 binary decisions d_{1}, d_{2}, …, d_{11} obtained from comparing one sample against the other 11 samples, the 11 fused binary decisions under M-sample fusion are (Σ^{M}_{i = 1} d_{i}, Σ^{M+1}_{i = 2} d_{i}, Σ^{M+2}_{i = 3} d_{i}, …, Σ^{11}_{i = 10} d_{i} + Σ^{M−2}_{i = 1} d_{i}, Σ^{11}_{i = 11} d_{i} + Σ^{M−1}_{i = 1} d_{i}), where each sum is binarized by the OR rule (the fused decision is accept if at least one d_{i} is accept); i.e., each fused decision is the OR of M consecutive decisions taken in a cyclic sliding window.
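Read as a cyclic sliding window, M-sample fusion can be sketched as follows (a minimal illustration of ours, assuming the OR rule and the circular-window interpretation of the formula above):

```python
# OR-fuse M consecutive decisions in a circular sliding window,
# turning 11 raw binary decisions into 11 fused binary decisions.
def multi_sample_fuse(decisions, m):
    n = len(decisions)
    return [
        int(any(decisions[(start + i) % n] for i in range(m)))
        for start in range(n)
    ]

raw = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # a single accept among 11
fused = multi_sample_fuse(raw, 3)
print(sum(fused))  # 3: three windows of size 3 cover the single accept
```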

The studied left-index-finger samples were captured by the sensor S and processed into minutiae templates by the selected minutiae extractor. With any single template protection algorithm, 13,068 genuine and 58,212 imposter comparison decisions are obtained. Every pair of algorithms among A1, A2, and A3 is fused to generate (13,068 + 58,212) binary decisions. In addition, all the 3 × (13,068 + 58,212) binary decisions obtained from A1, A2, and A3 together are fused into (13,068 + 58,212) binary decisions.

The studied left-index-finger samples were processed into minutiae templates by the selected minutiae extractor. From the data collected by any single sensor, 13,068 genuine and 58,212 imposter comparison decisions are obtained. Every pair of sensors among S1, S2, and S3 is fused to generate (13,068 + 58,212) binary decisions. All the 3 × (13,068 + 58,212) binary decisions from the three sensors S1, S2, and S3 are fused into (13,068 + 58,212) binary decisions. In this scenario, the template protection algorithm A1 is used for testing since it in general demonstrated the best biometric performance.

The studied left-index-finger and right-index-finger samples were captured by the sensor S and processed into minutiae templates by the selected minutiae extractor. On each finger, 13,068 genuine and 58,212 imposter comparison decisions are obtained. The (13,068 + 58,212) binary decisions from the left index finger and the (13,068 + 58,212) binary decisions from the right index finger are fused into (13,068 + 58,212) binary decisions. All three algorithms, A1, A2, and A3, are tested.
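The element-wise OR fusion of same-comparison decisions from two fingers can be sketched as follows (synthetic toy decision vectors, not TURBINE data; the frr/far helpers are our own illustration of how error rates are re-estimated from fused decisions):

```python
# Fuse left/right-index decisions for the same comparisons with OR,
# then re-estimate FRR and FAR from the fused binary decisions.
def or_fuse(left, right):
    return [a | b for a, b in zip(left, right)]

def frr(genuine_decisions):   # fraction of genuine comparisons rejected
    return 1 - sum(genuine_decisions) / len(genuine_decisions)

def far(imposter_decisions):  # fraction of imposter comparisons accepted
    return sum(imposter_decisions) / len(imposter_decisions)

# Tiny synthetic genuine/imposter decision vectors:
gen_left, gen_right = [1, 0, 1, 1], [1, 1, 0, 1]
imp_left, imp_right = [0, 0, 0, 1], [0, 1, 0, 0]

gen_fused = or_fuse(gen_left, gen_right)  # FRR drops: 0.25 -> 0.0
imp_fused = or_fuse(imp_left, imp_right)  # FAR rises: 0.25 -> 0.5
```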

This scenario further fuses the multi-sample fusion results obtained respectively from the three template protection algorithms A1, A2, and A3 into (13,068 + 58,212) binary decisions,

This scenario further fuses the multi-sample fusion results obtained respectively from the samples captured by the three sensors S1, S2, and S3 into (13,068 + 58,212) binary decisions,

This scenario further fuses the multi-sample fusion results obtained respectively from the left and the right index fingers into (13,068 + 58,212) binary decisions,

This scenario further fuses the multi-sample-instance fusion results obtained respectively from the three template protection algorithms A1, A2, and A3 into (13,068 + 58,212) binary decisions,

The computational complexity of the cross-comparisons in the performance evaluation phase of the TURBINE project was so large that it took months for four workstations to complete even a small-scale test (settings specified in Section 3.2) under a limited range of parameter combinations. To achieve better testing efficiency without loss of generality, each of the three template protection algorithms A1, A2, and A3 generates only four operational performance points, distributed mainly in the FAR range ≤ 0.1% of the DET curves, instead of a densely populated DET curve before applying any fusion. Another reason we chose sparsely populated DET operation points is that the hash based algorithm (spectral minutiae based fuzzy commitment) only generates binary decisions from template comparisons and thus cannot straightforwardly generate densely populated operation points by thresholding comparison scores like the other two algorithms. As mentioned in Sections 2 and 3, at the request of the project partners, we anonymize the three algorithms and the three sensors in the testing results and show only the algorithm flags A1, A2, A3 and sensor flags S1, S2, S3 in all figures and discussions,

The algorithm A1 demonstrates the best performance but its performance point (FRR = 0.0256, FAR = 0.0010, while

Since from each sensor, four performance points were generated, any two sensors' fusion generates 4 × 4 = 16 performance points, and all three sensors' fusion generates 4 × 4 × 4 = 64 performance points. It is observed that the performance can be improved by fusion of decisions from different fingerprint sensors. Sensor S3 demonstrates a better performance compared to S1 and S2. The fusion of decisions from all three sensors resulted in the best performance but its performance point (FRR = 0.0340, FAR = 0.0010) closest to the target has not reached the target yet.

It is observed that the performance can be improved by increasing the number of samples used for fusion. This fusion test resulted in better performance than its constituent single-layer fusion scenarios, (1) multi-sample and (4) multi-instance. It has two performance points (FRR = 0.0079, FAR = 0.0008, while

It has two performance points (FRR = 0.0087, FAR = 0.0008, while

This section summarizes some experimental observations from our testing results. To fairly compare the fusion efficiency across different fusion scenarios, we evaluate the fusion efficiency via two criteria – efficiency per decision and efficiency per presentation.

From the decision level fusion testing results, we can summarize the following experimental observations in a question (

Besides the above general observations, we found that only scenario (7) multi-sample-instance fusion and scenario (8) multi-sample-instance-algorithm fusion have some performance points that meet the recognition accuracy performance target for the project TURBINE. The best fusion result [FRR = 0.0060, FAR = 0.0009, while

Considering that the number of decisions and the convenience of finger probing vary across fusion scenarios, we need to evaluate fusion efficiency in addition to the effectiveness observed from the testing results and discussed in Section 5.1. To compare the fusion efficiency of different fusion scenarios more precisely than the general evaluation given in Section 5.1, we propose two criteria, efficiency per decision and efficiency per presentation, to evaluate the fusion efficiency.

Efficiency per decision is defined as the performance achievable by fusing the same number of decisions,

Efficiency per presentation is defined as the performance achievable under the same number of finger presentations (times of probing), which takes into account the convenience to the subjects. Since the effort required from a subject is almost the same under the same number of finger presentations, the performance is more comparable. This efficiency criterion matters mainly for multi-algorithm fusion, since multi-algorithm fusion needs only one finger presentation.

We presented in this paper an evaluation of decision level fusion of fingerprint minutiae based pseudonymous identifiers generated by three biometric template protection algorithms developed in the European research project TURBINE. Eight different fusion scenarios covering multiple samples, algorithms, sensors, instances, and their combinations were included in our tests. Distinct biometric performance improvements were observed from decision level fusion, which verifies the hypothesis that fusion can improve the recognition accuracy. For fair comparison of the achieved performance improvements, two fusion efficiency criteria were proposed to evaluate the different scenarios' fusion efficiency.

Future work will investigate other fusion rules, such as AND, layered, and cascaded fusion, as well as the in-depth implications of the performance improvement achieved by the OR rule and of the dependency among elements for fusion. With regard to privacy, we assume that the template protection algorithms used are secure and privacy-enhancing. However, decision level fusion implies linkability among the protected templates generated from different samples, sensors, instances, and algorithms of the same biometric characteristic (in the sample, sensor, and algorithm cases) and the same subject (in the instance case). Whether this influences the protected templates' security is quite algorithm-dependent, and therefore needs case-by-case security analysis.

This work is funded under the 7th Framework Programme of the European Union, Project TURBINE (ICT-2007-216339) and FIDELITY (SEC-2011-284862). All information is provided as is and no guarantee or warranty is given that the information is fit for any particular purpose. The European Commission has no liability in respect of this document, which merely represents the authors' view.

Reference architecture of template protection defined in [

Multi-sample fusion results from the three anonymized biometric template protection algorithms A1, A2, and A3 developed in project TURBINE.

Multi-algorithm fusion results by the three anonymized biometric template protection algorithms A1, A2, and A3 developed in project TURBINE.

Multi-sensor fusion results from the three anonymized biometric sensors S1, S2, and S3 tested in project TURBINE, using algorithm A1.

Multi-instance fusion results from the left and the right index fingers tested using the three algorithms A1 (Set I), A1 (Set II), A2, and A3 developed in project TURBINE.

Multi-sample-algorithm fusion results from the tested three algorithms A1 (Set I), A1 (Set II), A2, and A3 developed in project TURBINE.

Multi-sample-sensor fusion results from the three sensors S1, S2 and S3, using the algorithm A1 (Set I) and A1 (Set II) developed in project TURBINE.

Multi-sample-instance fusion results from the left and the right index fingers, using the algorithms A1 (Set I), A1 (Set II), A2, and A3 developed in project TURBINE.

Multi-sample-instance-algorithm fusion results from the left and the right index fingers fusing the algorithms A1 (Set I), A1 (Set II), A2, and A3 developed in project TURBINE.

Example of performance degradation after decision-level fusion in the multi-algorithm scenario (the two big green diamonds denote the two operation points contributing, by fusion, to the small red diamond).

Example of two classification tests to verify the correlation in fingerprint images' quality from two same-subject index fingers. Samples are from one session of the GUC100 database. Classification test 1 (green curve): same-subject index finger samples are used to calculate genuine NFIQ score distances; classification test 2 (red curve): different-subject index finger samples are used to calculate “genuine” NFIQ score distances.

Fusion efficiency by “efficiency per decision”. (“sample”, “algorithm”, “sensor”, and “instance” are abbreviated as “sam.”, “alg.”, “sen.”, and “ins.”, respectively).

Fusion efficiency by “efficiency per presentation”. (“sample”, “algorithm”, “sensor”, and “instance” are abbreviated as “sam.”, “alg.”, “sen.”, and “ins.”, respectively)

Common settings for all fusion scenarios.

Algorithm | A1 (Anonymized flag) |

Sensor | S (one of S1, S2, and S3) |

Finger instance | Left index finger |

Minutiae extractor | Selected one from one project partner |

Number of genuine scores generated from each algorithm/sensor/finger | 13,068 |

Number of imposter scores generated from each algorithm/sensor/finger | 58,212 |

Special settings for different fusion scenarios.

Multi-sample | A1, A2, A3 | S | left index | 2∼5 |

Multi-algorithm | A1 + A2, A2 + A3, A1 + A3, A1 + A2 + A3 | S | left index | 1 |

Multi-sensor | A1 | S1 + S2, S2 + S3, S1 + S3, S1 + S2 + S3 | left index | 1 |

Multi-instance | A1, A2, A3 | S | left index + right index | 1 |

Multi-sample-algorithm | A1 + A2 + A3 | S | left index | 2∼5 |

Multi-sample-sensor | A1 | S1 + S2 + S3 | left index | 2∼5 |

Multi-sample-instance | A1, A2, A3 | S | left index + right index | 2∼5 |

Multi-sample-instance-algorithm | A1 + A2 + A3 | S | left index + right index | 2∼5 |

Best performance (operation points closest to the performance target FRR ≤ 0.01 @ FAR = 0.001)—multi-sample fusion.

Number of samples | | A1 | A2 | A3
---|---|---|---|---
1 (without fusion) | FAR | 0.0009 | 0.0002 | 0.0009
 | FRR | 0.0647 | 0.1860 | 0.1985
2 | FAR | 0.0012 | 0.0004 | 0.0005
 | FRR | 0.0360 | 0.0952 | 0.1267
3 | FAR | 0.0007 | 0.0007 | 0.0008
 | FRR | 0.0318 | 0.0667 | 0.0882
4 | FAR | 0.0010 | 0.0009 | 0.0011
 | FRR | 0.0256 | 0.0529 | 0.0690
5 | FAR | 0.0012 | 0.0011 | 0.0014
 | FRR | 0.0215 | 0.0439 | 0.0576

Best performance (operation points closest to the performance target FRR ≤ 0.01 @ FAR = 0.001)—multi-algorithm fusion.

FAR | 0.0008 | 0.0012 | 0.0010 | 0.0010 |

FRR | 0.0597 | 0.1078 | 0.0614 | 0.0586 |

Best performance (operation points closest to the performance target FRR ≤ 0.01 @ FAR = 0.001)—multi-sensor fusion.

FAR | 0.0011 | 0.0010 | 0.0011 | 0.0013 |

FRR | 0.0315 | 0.0340 | 0.0273 | 0.0182 |

Best performance (operation points closest to the performance target FRR ≤ 0.01 @ FAR = 0.001)—multi-instance (left and right index fingers) fusion.

 | A1 (Set I) | A1 (Set II) | A2 | A3
---|---|---|---|---
FAR | 0.0012 | 0.0010 | 0.0004 | 0.0013
FRR | 0.0247 | 0.0253 | 0.0740 | 0.0835

Best performance (operation points closest to the performance target FRR ≤ 0.01 @ FAR = 0.001)—multi-sample-algorithm fusion.

Number of samples | | A1 (Set I) | A1 (Set II)
---|---|---|---
2 | FAR | 0.0010 | 0.0011
 | FRR | 0.0312 | 0.0317
3 | FAR | 0.0009 | 0.0011
 | FRR | 0.0265 | 0.0268
4 | FAR | 0.0012 | 0.0010
 | FRR | 0.0208 | 0.0222
5 | FAR | 0.0015 | 0.0008
 | FRR | 0.0175 | 0.0186

Best performance (operation points closest to the performance target FRR ≤ 0.01 @ FAR = 0.001)—multi-sample-sensor fusion.

Number of samples | | A1 (Set I) | A1 (Set II)
---|---|---|---
2 | FAR | 0.0025 | 0.0019
 | FRR | 0.0087 | 0.0094
3 | FAR | 0.0037 | 0.0029
 | FRR | 0.0055 | 0.0060
4 | FAR | 0.0049 | 0.0038
 | FRR | 0.0040 | 0.0044
5 | FAR | 0.0061 | 0.0048
 | FRR | 0.0033 | 0.0034

Best performance (operation points closest to the performance target FRR ≤ 0.01 @ FAR = 0.001)—multi-sample-instance (left and right index fingers) fusion.

Number of samples | | A1 (Set I) | A1 (Set II) | A2 | A3
---|---|---|---|---|---
2 | FAR | 0.0009 | 0.0008 | 0.0007 | 0.0006
 | FRR | 0.0142 | 0.0149 | 0.0295 | 0.0497
3 | FAR | 0.0013 | 0.0011 | 0.0011 | 0.0009
 | FRR | 0.0088 | 0.0094 | 0.0183 | 0.0323
4 | FAR | 0.0017 | 0.0008 | 0.0014 | 0.0012
 | FRR | 0.0064 | 0.0079 | 0.0132 | 0.0239
5 | FAR | 0.0021 | 0.0009 | 0.0018 | 0.0015
 | FRR | 0.0049 | 0.0060 | 0.0109 | 0.0188

Best performance (operation points closest to the performance target FRR ≤ 0.01 @ FAR = 0.001)—multi-sample-instance-algorithm fusion.

Number of samples | | A1 (Set I) | A1 (Set II)
---|---|---|---
2 | FAR | 0.0010 | 0.0009
 | FRR | 0.0117 | 0.0123
3 | FAR | 0.0015 | 0.0008
 | FRR | 0.0070 | 0.0087
4 | FAR | 0.0020 | 0.0010
 | FRR | 0.0049 | 0.0060
5 | FAR | 0.0025 | 0.0013
 | FRR | 0.0036 | 0.0043

Fusion scenario settings for fusion cases by “efficiency per decision”.

Decision amount for fusion | Fusion case | Scenario settings |
---|---|---|

| ||

Fusion from 2 decisions | 2 samples | Multi-sample, |

2 sensors | Multi-sensor, S1 + S3 | |

2 algorithms | Multi-algorithm, A1 + A2 | |

2 instances | Multi-instance, A1 | |

| ||

Fusion from 3 decisions | 3 samples | Multi-sample, |

3 sensors | Multi-sensor, S1 + S2 + S3 | |

3 algorithms | Multi-algorithm, A1 + A2 + A3 | |

| ||

Fusion from 4 decisions | 4 samples | Multi-sample, |

2 samples and 2 instances | Multi-sample-instance, | |

| ||

Fusion from 6 decisions | 3 samples and 2 instances | Multi-sample-instance, |

2 samples and 3 algorithms | Multi-sample-algorithm, | |

2 samples and 3 sensors | Multi-sample-sensor, | |

| ||

Fusion from 9 decisions | 3 samples and 3 algorithms | Multi-sample-algorithm, |

3 samples and 3 sensors | Multi-sample-sensor, | |

| ||

Fusion from 12 decisions | 4 samples and 3 algorithms | Multi-sample-algorithm, |

4 samples and 3 sensors | Multi-sample-sensor, | |

2 samples and 2 instances and 3 algorithms | Multi-sample-instance-algorithm, | |

| ||

Fusion from 15 decisions | 5 samples and 3 algorithms | Multi-sample-algorithm, |

5 samples and 3 sensors | Multi-sample-sensor, |

Fusion scenario settings for fusion cases by “efficiency per presentation”.

Presentation amount for fusion | Fusion case | Scenario settings
---|---|---|

Fusion from single presentation | A1 | Without fusion, A1 |

A2 | Without fusion, A2 | |

A3 | Without fusion, A3 | |

3 algorithms | Multi-algorithm, A1 + A2 + A3 | |

| ||

Fusion from 2 presentations | 2 samples | Multi-sample, |

2 sensors | Multi-sensor, S1 + S3 | |

2 instances | Multi-instance, A1 | |

2 samples and 3 algorithms | Multi-sample-algorithm, | |

| ||

Fusion from 3 presentations | 3 samples | Multi-sample, |

3 sensors | Multi-sensor, S1 + S2 + S3 | |

3 samples and 3 algorithms | Multi-sample-algorithm, | |

| ||

Fusion from 4 presentations | 4 samples | Multi-sample, |

2 samples and 2 instances | Multi-sample-instance, | |

4 samples and 3 algorithms | Multi-sample-algorithm, | |

2 samples and 2 instances and 3 algorithms | Multi-sample-instance-algorithm, | |

| ||

Fusion from 5 presentations | 5 samples | Multi-sample, |

5 samples and 3 algorithms | Multi-sample-algorithm, | |

| ||

Fusion from 6 presentations | 3 samples and 2 instances | Multi-sample-instance,

3 samples and 2 instances and 3 algorithms | Multi-sample-instance-algorithm, | |

| ||

Fusion from 8 presentations | 4 samples and 2 instances | Multi-sample-instance,

4 samples and 2 instances and 3 algorithms | Multi-sample-instance-algorithm, | |

| ||

Fusion from 9 presentations | 3 samples and 3 sensors | Multi-sample-sensor, |

| ||

Fusion from 10 presentations | 5 samples and 2 instances | Multi-sample-instance,

5 samples and 2 instances and 3 algorithms | Multi-sample-instance-algorithm, |