Comparison of Read Mapping and Variant Calling Tools for the Analysis of Plant NGS Data.

High-throughput sequencing technologies have rapidly developed during the past years and have become an essential tool in plant sciences. However, the analysis of genomic data remains challenging and relies mostly on the performance of automatic pipelines. Frequently applied pipelines involve the alignment of sequence reads against a reference sequence and the identification of sequence variants. Since most benchmarking studies of bioinformatics tools for this purpose have been conducted on human datasets, there is a lack of benchmarking studies in plant sciences. In this study, we evaluated the performance of 50 different variant calling pipelines, including five read mappers and ten variant callers, on six real plant datasets of the model organism Arabidopsis thaliana. Sets of variants were evaluated based on various parameters including sensitivity and specificity. We found that all investigated tools are suitable for analysis of NGS data in plant research. When looking at different performance metrics, BWA-MEM and Novoalign were the best mappers and GATK returned the best results in the variant calling step.


Introduction
As the basis of biological properties and heredity, the genome of a species is a valuable resource for numerous studies. However, there are subtle differences between individuals of the same species, which are of academic and economic interest as these determine phenotypic differences. Dropping sequencing costs boosted high-throughput sequencing projects, thus facilitating the analysis of this genetic diversity. Variations within the A. thaliana population were studied in the 1001 genomes project [1]. As the number of high-quality reference genome sequences rises continuously, the number of re-sequencing projects increases as well [2]. There are pangenome projects for various species focusing on the genome evolution [3][4][5] and mapping-by-sequencing projects which focus on agronomically important traits of crops [6][7][8][9].
An accurate and comprehensive identification of sequence variants between a sample and the reference sequence is the major challenge in many re-sequencing projects [10]. The large amount and diverse nature of NGS data types (as reviewed in [11]), the diversity of bioinformatics algorithms, and the quality of the reference genome sequence render the choice of the best approach challenging. Since plant genomes show substantial differences in nucleotide composition compared to the human genome, a dedicated benchmarking on plant genome sequences is advised. A recent study compared the performance of BWA-MEM [22], SOAP2 [25], and Bowtie2 [21] with the two variant callers GATK [30][31][32][33] and SAMtools/BCFtools [28] on simulated and real tomato datasets [48]. To expand the sparse knowledge about the performance of other read mapping and variant calling tools on plant data, we set out to perform a systematic comparison. Due to the availability of excellent genomic resources, we selected the well-established plant model organism Arabidopsis thaliana for our study. While A. thaliana is not a perfect representative of all plants, its genome shows the characteristically high AT content. Despite limitations in heterochromatic and centromeric regions [49], many plant repeats are resolved in the high-quality genome sequence of A. thaliana. Our study evaluated the performance of 50 variant calling pipelines, i.e., combinations of five read mappers (Bowtie2, BWA-MEM, CLC-mapper, GEM3, Novoalign) and eight different variant callers (SAMtools/BCFtools, CLC-caller, FreeBayes, GATK v3.8/v4.0/v4.1, LoFreq, SNVer, VarDict, VarScan) that are frequently applied in re-sequencing studies. Many combinations performed almost equally well on numerous datasets of the plant model organism A. thaliana. Illumina sequence reads were used for the detection of variants and provide the foundation for the comparison of these pipelines.
Independent PacBio long reads were harnessed for the validation of identified variants based on an orthogonal sequencing technology.

General Stats about Mapping of Reads
Six Illumina paired-end sequence read datasets [50,51] from A. thaliana Nd-1 and one control sample of Col-0 [50] were processed using all combinations of five read mappers and eight different variant calling tools (including three different versions of one tool, i.e., ten variant callers in total) to evaluate the mapping percentage as well as the precision, sensitivity, and specificity of each variant calling pipeline. Given these combinations (7 datasets × 5 mappers × 10 callers), 350 variant call sets were generated in this study. Overall, the sequence read quality of the processed datasets was high, with average Phred scores ranging from 35 to 38 (Table S1).
We observed only minor differences between the different sequence read datasets with respect to the percentage of properly aligned read pairs ( Figure S1). In general, a higher proportion of the 2 × 300 nt paired-end reads was mapped, ranging from 94.8% to 99.5%, while the 2 × 250 nt and the 2 × 100 nt paired-end reads resulted in mapping proportions ranging from 92.7% to 99.6% and 90.0% to 99.1%, respectively.

Figure 1.
Ratio of mapped sequence read pairs per mapper. Sequence reads of six A. thaliana Nd-1 datasets were mapped to the Col-0 reference genome sequence TAIR10. The average ratio of aligned read pairs was calculated for Bowtie2, BWA-MEM, the mapping function in CLC Genomics Workbench (CLC), GEM3, and Novoalign based on all six datasets through the flagstats function of SAMtools. The width of the violin plots is proportional to the density of the data points. The boxplots inside the violin plots indicate quantiles and the white dot indicates the median.
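The mapping ratios reported above were taken from the output of SAMtools flagstat. As a minimal, hypothetical sketch (the helper name and the example line are illustrative, not part of the study's pipeline), the "properly paired" percentage could be extracted from the flagstat report like this:

```python
import re

def properly_paired_pct(flagstat_text: str) -> float:
    """Extract the 'properly paired' percentage from samtools flagstat output."""
    # flagstat reports a line such as:
    # "6210992 + 0 properly paired (96.60% : N/A)"
    match = re.search(r"properly paired \(([\d.]+)%", flagstat_text)
    if match is None:
        raise ValueError("no 'properly paired' line found")
    return float(match.group(1))

example = "200000 + 0 properly paired (99.12% : N/A)"
print(properly_paired_pct(example))  # 99.12
```

Collecting this value per mapper and dataset yields the data points underlying the violin plots in Figure 1.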
The quality of a variant call set is determined by the transition/transversion ratio, as a worse variant call set tends to have a lower transition/transversion ratio [52]. While most variant callers showed a similar transition/transversion ratio with a median ranging from 1.256 (LoFreq) to 1.288 (VarDict), SNVer revealed a lower median of 1.2 and especially FreeBayes performed worst, showing a median of 1.15 (Figure 2). In addition, FreeBayes revealed the highest variation ranging from 0.92 to 1.31.
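The ti/tv ratio follows directly from the definition of transitions (purine to purine, A↔G, or pyrimidine to pyrimidine, C↔T) versus transversions (all other single-base substitutions). A minimal sketch, using illustrative ref/alt pairs rather than the actual call sets:

```python
# Transitions: A<->G (purines) and C<->T (pyrimidines);
# every other single-base substitution is a transversion.
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def titv_ratio(snvs):
    """Compute the transition/transversion ratio for (ref, alt) SNV pairs."""
    ti = sum(1 for pair in snvs if pair in TRANSITIONS)
    tv = len(snvs) - ti  # assumes at least one transversion is present
    return ti / tv

calls = [("A", "G"), ("C", "T"), ("A", "C"), ("G", "T")]
print(titv_ratio(calls))  # 1.0
```

A random call set is expected to approach a ti/tv ratio of 0.5 (4 transition pairs vs. 8 transversion pairs), which is why higher ratios indicate better call set quality.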
In order to analyze whether a variant caller identifies relatively more SNVs than InDels, the ratio of SNVs to SNVs and InDels was calculated per variant caller. To infer whether a variant caller is more prone to call small or large InDels, the distribution of InDel lengths was inspected (Figure 4). Especially VarDict called very large insertions (up to 981 bp) and very large deletions (up to 998 bp), which are likely to be artifacts since they are filtered out in the corresponding validated call set (Figure S2). VarScan (134 to −93), SNVer (134 to −95), CLC-caller (156 to −95), LoFreq (168 to −109), and BCFtools (216 to −108) showed a narrower range of InDel lengths.

A gold standard, comprising variants which have been validated through orthogonal data, was used for benchmarking (see methods for details). In order to compare the performance of different variant calling pipelines, we calculated the sensitivity, specificity, accuracy, precision, and F1 score (Table 1, Table S2). GATK revealed the highest accuracy in combination with most mappers. The only exception is the combination of GEM3 and VarScan, which performed better than any GATK version (Figure 5). GATK worked best on alignments produced by BWA-MEM and Novoalign. All three evaluated GATK versions (v3.8, v4.0, and v4.1) showed an almost identical performance. In general, Novoalign reached the best (median) results with respect to accuracy. The only exceptions are CLC-caller and VarScan based on alignments produced by CLC-mapper and GEM3, respectively. While Bowtie2 yielded high specificity with most variant callers, it showed a low accuracy with most variant callers except for FreeBayes and VarDict.
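Classifying VCF records into SNVs, MNVs, and InDels, and deriving the SNV ratio and InDel lengths, can be sketched as follows (the ref/alt pairs are illustrative examples, not records from the actual call sets):

```python
def classify(ref: str, alt: str) -> str:
    """Classify a variant by comparing REF and ALT allele lengths."""
    if len(ref) == len(alt) == 1:
        return "SNV"
    if len(ref) == len(alt):
        return "MNV"   # multi-nucleotide variant, same length but > 1 bp
    return "InDel"

def indel_length(ref: str, alt: str) -> int:
    """Signed InDel length: positive for insertions, negative for deletions."""
    return len(alt) - len(ref)

variants = [("A", "T"), ("AC", "A"), ("G", "GTT"), ("C", "G")]
types = [classify(r, a) for r, a in variants]
snv_ratio = types.count("SNV") / (types.count("SNV") + types.count("InDel"))
print(snv_ratio)  # 0.5
print([indel_length(r, a) for r, a in variants if classify(r, a) == "InDel"])  # [-1, 2]
```

The signed length convention above matches the ranges quoted in the text, e.g. "216 to −108" for BCFtools, where positive values are insertions and negative values deletions.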
In general, the sensitivity of the variant calling pipelines ranged from 0.0219 (Bowtie2/CLC-caller) to 0.6038 (BWA-MEM/VarDict) and the specificity from 0.99450 (CLC-mapper/FreeBayes) to 0.999961 (Bowtie2/CLC-caller) (Figures S3 and S4). Moreover, we observed a negative correlation of −0.8 between specificity and sensitivity, indicating that a pipeline with a high sensitivity showed a low specificity and vice versa. Almost every variant caller, except for VarDict, showed the lowest specificity when used in combination with CLC-mapper, while in parallel these combinations had one of the highest sensitivities. VarDict showed the highest specificity, but lowest sensitivity with Bowtie2 and performed inferior to GEM3 in terms of specificity, while BWA-MEM reached the best results in sensitivity.
All tested GATK versions (v3.8, v4.0, and v4.1) showed a very high sensitivity and were only outperformed by specific VarDict runs, namely on the 2 × 100 nt paired-end dataset independent of the choice of the mapper, which reached up to 0.6038 sensitivity (BWA-MEM/VarDict on SRR2919279). However, the specificity of GATK was inferior to some other variant callers. Only minor differences were observed between the three evaluated GATK versions regarding both sensitivity and specificity. The use of different mappers had a substantially higher impact than the applied GATK version.
Followed by GATK, FreeBayes showed a good performance in terms of sensitivity and robust results across all mappers, whereas the other variant callers showed a low performance in combination with Bowtie2. CLC-caller, VarScan, and LoFreq revealed a great variation with respect to sensitivity across all mappers, while GATK and especially VarDict displayed very low variance in their results. When focusing on median sensitivity, the following combinations showed the best results: CLC-mapper/CLC-caller, GEM3/VarScan, CLC-mapper/SNVer, CLC-mapper/LoFreq, CLC-mapper/GATK v3.8, CLC-mapper/GATK v4.0, CLC-mapper/GATK v4.1, CLC-mapper/BCFtools, GEM3/FreeBayes, and BWA-MEM/VarDict. However, in terms of median specificity, all variant callers revealed the best results in combination with Bowtie2, except for FreeBayes, which worked best with Novoalign. Moreover, FreeBayes showed the lowest performance and largest variation across all mappers ( Figure S3).
Finally, the harmonic mean of precision and sensitivity, namely the F1 score, was analyzed ( Figure S5). Novoalign in combination with GATK revealed the best mean performance with respect to the F1 score. Again, different GATK versions showed almost identical performance (Table 1).

Discussion
The major challenge in many pangenome and re-sequencing projects is the accurate and comprehensive identification of sequence variants. Due to the high diversity and complexity of plant genomes and their differences to animal (e.g., human) genomes, variant calling in plant research differs substantially from variant calling in human and biomedical research. Most human benchmarking studies focus on calling SNVs of certain tumor cells in a highly diverse cell set [20,42,43]. In contrast, plant studies usually use the whole cell set derived from one plant without genomic differences between cells. However, the sequencing of pooled DNA from multiple plants aims at the identification of low-frequency SNVs. The large amount and diverse nature of NGS data types (as reviewed in [11]), the diversity of bioinformatic algorithms, and the quality of the reference genome sequences render the choice of the best approach challenging. Hence, we performed a benchmarking study to provide comparable data showing the performance of combinations of frequently applied mappers and variant callers (variant calling pipelines) on plant datasets. A previous report [48] is extended by providing data about the performance of additional tools for both the mapping and the variant calling step.
To allow for a consistent comparison of baseline performance, we used default parameters for all tools as these parameters are frequently chosen in plant science applications [4,5,9]. Sequence read datasets from different sequencing platforms, with different read lengths, and different sizes were processed to ensure a realistic benchmarking of tools. Since all evaluated tools can process a real plant dataset within a day, we refrained from assessing the computational costs of the analysis. There is usually a trade-off between quality of the results and computational costs. In our experience, many plant scientists select the workflow leading to the best results independent of computational costs [51].
The first step in a variant calling pipeline is the alignment (mapping) of reads to a reference sequence. While the mapping of 2 × 250 nt paired-end reads resulted in a higher mapping percentage, the difference compared to 2 × 100 nt reads is only about 10%. As different sequencing platforms were used for the data generation, per-base quality might contribute to this difference. It is expected that longer reads are aligned with higher specificity and hence improve the subsequent variant calling.
The quality of the variant call sets was assessed via the transition/transversion (ti/tv) ratio, which was previously described as a quality indicator [52]. Overall, the quality of almost all analyzed call sets was similar. A previous benchmarking study with SAMtools and GATK reported similar ti/tv ratios for all pipelines [53]. A filtering step increased the ti/tv ratios, indicating that the filtering increased the quality of the call sets [53]. This observation is in line with our findings, which revealed an increased ti/tv ratio for the filtered call sets, i.e., those reduced to variants present in the gold standard (Figure S6, Table S3). As FreeBayes showed a substantial increase in quality through filtering, we recommend checking the ti/tv ratio when applying FreeBayes. This effect might be dataset specific.
The choice of the variant caller is crucial for the number of called SNVs, MNVs, and InDels. For example, only CLC-caller, VarDict and FreeBayes were able to call MNVs, thus being more suitable for the identification of structural differences. Furthermore, variant caller results differ with respect to the ratio of SNVs to InDels, which should be considered depending on the specific requirements of the respective sequencing project. BCFtools called relatively more SNVs than InDels, while GATK revealed relatively more InDels. A unique property of VarDict was the detection of InDels up to almost 1 kb which exceeds the read length. Since the accurate identification of such large variants, which are longer than the average read length, is still a challenging task [54], many of these variants are likely false positives. Moreover, the reduced amount of large insertions in the validated call sets of VarDict supports this assumption.
Depending on the application, a pipeline with a high sensitivity or a high specificity is desired. In terms of sensitivity, GATK in combination with CLC-mapper, Novoalign, and BWA-MEM yielded the best and most consistent results across all evaluated datasets. These results are in line with a recent study showing that GATK often outperformed SAMtools in terms of sensitivity, precision, and called raw InDels [48]. Similar results had been shown in rice, tomato, and soybean [48], indicating that GATK is also suitable for various crop species with complex genomes. A high sensitivity is essential when a high number of true positive variants increases the power of the analysis, e.g., when looking for a detrimental variant between two samples. In contrast, a high specificity is desired in mapping-by-sequencing (MBS) projects, as a high proportion of true positives among the called variants keeps the signal-to-noise ratio high. In this study, a pipeline comprising Bowtie2 and LoFreq resulted in the highest specificity and can thus be recommended for such applications. Combining both performance metrics by analyzing the accuracy, the best results were obtained with Novoalign and GATK. The same pipeline yielded the best results regarding the harmonic mean of precision and sensitivity (F1 score). Differences observed between the three evaluated GATK versions (v3.8, v4.0, and v4.1) were negligible. However, functionalities and computational performance might differ between these versions.
In summary, this benchmarking study provides insights into the strengths and weaknesses of different variant calling pipelines when applied on plant NGS datasets. Although the performance of all evaluated tools will differ between samples depending on properties of the read datasets and the genome sequence, we hope that our findings serve as a helpful guide.

Sequence Read Datasets
We used paired-end Illumina reads from two different A. thaliana accessions, namely Columbia-0 (Col-0) and Niederzenz-1 (Nd-1) (Table S1). The read quality was checked via FastQC [12] (Table S1). Trimmomatic [16] was applied for light trimming and, if applicable, quality score conversion to Phred+33. These datasets cover different Illumina sequencing platforms including GAIIx, MiSeq, and HiSeq 1500. Two datasets are the paired-end portions of mate pair sequencing libraries (SRR2919288 and SRR3340911); these samples were processed as 2 × 250 nt paired-end libraries.

Sequence Read Mapping
We chose five popular read mappers, namely Bowtie2 v2.3.4.3 [21], BWA-MEM v0.7.17 [22], CLC Genomics Workbench v11 (Qiagen), GEM3 v3.6 [23], and Novoalign v3.09.01 [24] for this analysis. While most of these mappers are freely available for academic use, CLC is a proprietary software suite for genomic analyses. Paired-end reads were mapped against the TAIR10 reference genome sequence [55]. The executed commands for each tool can be found in Table S4. SAMtools v1.8 [28] was deployed for sorting of the BAM files. Reads originating from PCR duplicates were removed via MarkDuplicates in Picard-bf40841 [56]. Read groups or InDel qualities were assigned as these are required by some tools for downstream processing. While the plastome and chondrome sequences were included during the mapping step, variant caller performance was only assessed for the five chromosome sequences of the nucleome.
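As a rough illustration of this preprocessing chain, the following sketch assembles the shell commands for one mapper. All file names and paths are placeholders, and the actual per-tool parameters are those listed in Table S4, not the defaults shown here:

```python
def preprocessing_commands(r1: str, r2: str, ref: str, sample: str) -> list:
    """Assemble illustrative shell commands for mapping, sorting, and
    duplicate removal. Paths and options are hypothetical placeholders;
    the exact parameters per tool are documented in Table S4."""
    bam = f"{sample}.sorted.bam"
    dedup = f"{sample}.dedup.bam"
    return [
        # map paired-end reads with BWA-MEM and sort the alignment with SAMtools
        f"bwa mem {ref} {r1} {r2} | samtools sort -o {bam} -",
        # remove PCR duplicates with Picard MarkDuplicates
        f"picard MarkDuplicates I={bam} O={dedup} "
        f"M={sample}.metrics.txt REMOVE_DUPLICATES=true",
    ]

cmds = preprocessing_commands("nd1_R1.fastq", "nd1_R2.fastq", "TAIR10.fasta", "Nd-1")
print(cmds[0])
```

The resulting deduplicated BAM file would then be passed to each variant caller, restricted afterwards to the five nuclear chromosomes as described above.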

Performance Measure of Variant Calling Pipelines
The overall workflow of our benchmarking study is presented in Figure 6. We applied a previously described pipeline to validate sequence variants against the Nd-1 de novo assembly based on PacBio reads [57], which is crucial in order to assess the performance of each variant calling pipeline. This Nd-1 genome sequence assembly is of high quality due to a high PacBio read coverage of about 112-fold and additional polishing with about 120-fold coverage of accurate short reads [58]. A gold standard was generated from all validated variants by combining them into a single VCF file [59]. Afterwards, the numbers of true positives, true negatives, false positives, and false negatives were calculated based on the gold standard and the initial VCF files for each variant calling pipeline and dataset. Next, performance statistics including sensitivity, specificity, precision, accuracy, and F1 score were calculated per combination of mapper, variant caller, and dataset (Table 1).
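The performance statistics described above follow the standard confusion-matrix definitions. A minimal sketch (the counts in the example are invented for illustration):

```python
def performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard confusion-matrix statistics used to score each pipeline."""
    sensitivity = tp / (tp + fn)                 # true positive rate (recall)
    specificity = tn / (tn + fp)                 # true negative rate
    precision = tp / (tp + fp)                   # positive predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f1": f1}

stats = performance(tp=80, fp=20, fn=20, tn=880)
print(stats["sensitivity"])  # 0.8
print(stats["f1"])  # 0.8
```

Applying this function to the TP/FP/FN/TN counts of each combination of mapper, variant caller, and dataset yields the values summarized in Table 1 and Table S2.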
Figure 6. Workflow for the performance analysis of variant calling pipelines. First, reads within supplied FASTQ files were mapped against the TAIR10 A. thaliana reference genome sequence. Next, variants were called and saved in VCF files. All variants were subjected to a previously described validation process based on the Nd-1 genome sequence [58]. A gold standard was generated based on all validated variants. The initial variants called by each combination of mapper and variant caller were evaluated by analyzing whether they are present or absent in the gold standard. The numbers of SNVs, MNVs, and InDels were retrieved from the validated and from the initial VCF files (Tables S2 and S3). Next, true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) were calculated for SNVs, MNVs, and InDels identified by each combination of mapper and variant caller. Finally, performance statistics, such as F1 score, sensitivity, specificity, precision, and accuracy were calculated.


Supplementary Materials:
The following are available online at http://www.mdpi.com/2223-7747/9/4/439/s1, Table S1: Overview of analyzed datasets with SRR identifiers. Figure S1: Ratio of mapped read pairs per dataset. Figure S2: Distribution of InDel lengths in the validated call sets per variant caller. Figure S3: Specificity of variant calling pipelines. Figure S4: Sensitivity of variant calling pipelines. Figure S5: F1 score of variant calling pipelines.