
Microarrays 2016, 5(2), 12;

Advantages of Array-Based Technologies for Pre-Emptive Pharmacogenomics Testing
School of Biomedical Sciences and Pharmacy, The University of Newcastle, Callaghan 2308, Australia
The Bosch Institute and Discipline of Physiology, University of Sydney, Sydney 2006, Australia
Institute of Genetics and Cytology, School of Life Sciences, Northeast Normal University, Changchun 130024, China
Author to whom correspondence should be addressed.
Academic Editors: Yuriy Alekseyev and Gang Liu
Received: 29 February 2016 / Accepted: 17 May 2016 / Published: 28 May 2016


As recognised by the National Institutes of Health (NIH) Precision Medicine Initiative (PMI), microarray technology currently provides a rapid, inexpensive means of identifying large numbers of known genomic variants or gene transcripts in experimental and clinical settings. However, next-generation sequencing techniques are now being introduced in many clinical genetic contexts, particularly where novel mutations are involved. While these methods can be valuable for screening a restricted set of genes for known or novel mutations, implementation of whole genome sequencing in clinical practice continues to present challenges. Even very accurate high-throughput methods with small error rates can generate large numbers of false-negative or false-positive errors because of the sheer number of simultaneous readings. Additional validation is likely to be required for safe use of any such methods in clinical settings. Custom-designed arrays can offer advantages for screening for common, known mutations and, in this context, may currently be better suited to accredited, quality-controlled clinical genetic screening services, as illustrated by their successful application in several large-scale pre-emptive pharmacogenomics programs now underway. Excessive, inappropriate use of next-generation sequencing may waste scarce research funds and other resources. Microarrays presently remain the technology of choice in applications that require fast, cost-effective genome-wide screening of variants of known importance, particularly for large sample sizes. This commentary considers some of the applications where microarrays continue to offer advantages over next-generation sequencing technologies.
microarray; next-generation sequencing; pharmacogenomics; personalized healthcare

1. Introduction

Many reviews cover the advantages of emerging genome-scale sequencing technologies in diverse contexts and these will not be revisited here. Yet, in the rush to adopt these promising new technologies, researchers and funding bodies sometimes fail to recognize that, in some contexts, the use of these platforms is unjustifiable and that high-density genotyping arrays continue to be a far more appropriate choice. Fortunately, since these sequencing technologies are frequently still very costly, there is increasing awareness that the newer approaches are not always better.
For example, the practical advantages of high-density genotyping arrays over genome-scale sequencing in studies involving large numbers of samples were acknowledged in the lead-up to implementation of President Obama’s PMI, as summarized in the September 2015 report of the PMI Working Group to the Advisory Committee to the Director of the US NIH [1]. The Working Group noted that, in most circumstances, cost, imperfect results, and the expectation of technology obsolescence made genome-scale sequencing approaches presently inappropriate for large numbers of individuals. The Working Group recommended ongoing monitoring to assess when the balance of scientific value over the costs and capabilities of such methods reaches a “tipping point”. Meanwhile, as recognized by the Committee, reasonable utility can be achieved at an affordable cost with high-density genome-wide arrays testing common and rare gene variants.
As addressed elsewhere in the Special Issue on “Microarrays in the Era of Next Generation Sequencing”, microarrays continue to be used in diverse applications. Examples include detection of chromosomal abnormalities in cytogenetics using array-based comparative genomic hybridization, provision of rapid turnaround for prenatal investigations using limited amounts of DNA, and replacement of other technologies such as fluorescence in situ hybridization (FISH) in some oncology applications. In the present article we will focus primarily on the area of pharmacogenomics, where arrays are now being widely used both in basic research and in research into the effective translation of pharmacogenomics into clinical practice.
While microarrays do not enable new gene variants to be discovered and are, therefore, generally not well-suited to clinical genetics applications seeking to identify novel disease-associated variants, high throughput genome-wide array technology can still provide the capacity to simultaneously assess essentially all single nucleotide polymorphisms (SNPs) of known functional importance in the human genome [2,3,4,5,6]. This level of genomic coverage is sufficient for many current applications in medicine and research, including most pharmacogenomics applications, as will be discussed in more detail below. The rapid output, affordability, and availability of microarray technology, along with its high accuracy and established and validated pipelines for data analysis and variant calling, make it the logical choice for such applications [7].
This is particularly true for large sample sizes, for example in large genome-wide association studies (GWAS), where microarrays have been, and in most instances continue to be, the only economically viable option. Using microarrays, scientists from around the globe can contribute data of various kinds (including genomic, epigenetic, and transcriptomic data) to massive consortium projects, even when they can only afford to study a small number of samples themselves. Although sample size restricts the capacity to detect association in small studies, analyses of collectively pooled sample sets can be extremely powerful. Hence, while next-generation sequencing (NGS) technologies are essential for discovery-driven research focused on the identification of novel sequences, using such methods to profile common variants (e.g., SNPs) across large numbers of patients, or in other applications where detecting novel sequences is not the primary goal, may often be unnecessary and even a fiscally irresponsible misuse of research funding [7].
Although less relevant in the context of this article, array technology is not only still useful for genomic studies but also continues to offer many advantages for various other kinds of high-throughput studies, including transcriptomics, where it remains the platform of choice for many studies. For example, in 2014, RNA-seq data was uploaded into the Gene Expression Omnibus (GEO) database for around 9000 samples whereas microarray data was uploaded for over 54,000 samples [8]. Microarray-based clinical tests provide a powerful tool for simultaneous measurement of the relative expression levels of a large number of well-established clinically relevant genes in the context of disease or drug responses. There is a wide range of applications for gene expression microarrays in providing RNA profiles associated with different disease states for various purposes, including monitoring pharmacological responses in clinical trial participants and identifying suitable drug treatments for individual patients, as reviewed elsewhere [9,10,11].
In view of such considerations, the relatively high costs of sequencing often appear hard to justify in a climate where increasing numbers of researchers are losing funding. Even ignoring the often higher cost of consumables and equipment for NGS compared to microarrays, the greatest cost often lies in the labour. The cost of next-generation whole-genome and transcriptome sequencing is dropping rapidly, and may one day match the cost of microarray-based methods. However, frequent claims of the $1000 genome, or even of costs comparable to those of arrays, usually do not adequately account for the time and human resources spent on sample preparation, sequence alignment, and filtering through huge volumes of data to catalogue SNPs or other information of interest, let alone the infrastructure required for sequencing, data processing, and storage [7]. Microarrays, therefore, continue to provide a highly cost-effective choice in contexts involving samples from relatively large groups of individuals, such as pharmacogenomics. This review will primarily consider the enduring value of microarrays in pharmacogenomics. We will first very briefly review the current status of pharmacogenomics in clinical practice, then consider the criteria that a genotyping platform will need to meet to be relevant to clinical pharmacogenomics in the future. We will consider how microarrays measure up to these criteria and briefly discuss some examples of successful applications of microarrays in research into the effective translation of pharmacogenomics into clinical practice.

2. Pharmacogenomics in Practice

As defined by the Food and Drug Administration (FDA), pharmacogenomics studies variations of DNA (genomic) and RNA (transcriptomic) characteristics as related to drug response, providing information which can be used to inform appropriate drug selection or dosage regimens for individual patients [12]. This relies on the identification of SNPs and other variants in genes known to be important in pharmacokinetics or pharmacodynamics. Considerable ongoing research focuses on identifying and profiling these variants; however, currently only a few gene variants are considered to have a firm evidence base for clinical actionability.
For most drugs, information on clinically-actionable gene variants (for which either a change of medication or a change of dose is recommended) can be obtained by screening only a small portion of the genome. The guidelines of the Clinical Pharmacogenetics Implementation Consortium (CPIC), supported by the US NIH and available through the Pharmacogenomics Knowledge Base (PharmGKB) [13], list only 17 genes with “high” evidence (Level 1A or 1B) of a drug-modifying effect (see Table 1), with “moderate” evidence (Level 2A or 2B) for an additional 40 genes. While further research is likely to reveal a number of other variants that modify the pharmacokinetic or pharmacodynamic profiles of new or existing drugs, the degree of screening required is, therefore, unlikely to extend beyond the capabilities of microarray technology for some time into the future.
Several microarray-based tests that simultaneously examine variations in multiple genes are approved by the FDA and have entered practice. These include AmpliChip CYP450 from Roche and MammaPrint from Agendia. Although whole genome and whole exome sequencing of potential pharmacogenomic gene variants have been reported previously [33,34], as far as we are aware the first and, to date, only FDA-cleared NGS-based in vitro diagnostic is a single-gene test, specifically a cystic fibrosis mutation detection test utilizing the Illumina MiSeqDx System [35]. For these and other reasons described elsewhere in this article, microarrays are likely to remain relevant and beneficial for clinical practice for some time to come.

3. Minimum Criteria for a Clinically Useful Pharmacogenomics Platform

In practical terms, irrespective of the technology used, the ideal pharmacogenomics platform should meet the following minimum criteria [36,37].

3.1. Analytical Validity

Ideally, a pharmacogenomics test should have high analytical specificity and sensitivity, with appropriate laboratory quality assurance and assay robustness. The data generated should be highly accurate with minimal errors in calling of gene variants. However, accuracy issues continue to restrict the usefulness of NGS. Even the most advanced sequencing platforms still have a base call error rate that, although usually proportionately small compared to many other technologies, is amplified by the large number of reads performed in an NGS experiment. This can make it difficult to distinguish polymorphisms from sequencing errors [38,39,40].
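The scale of this amplification is easy to make concrete. The following back-of-the-envelope sketch (illustrative per-base error rates only, not figures from any single study) shows how even sub-percent error rates translate into millions of erroneous calls over a single pass of a ~3 Gb human genome:

```python
# Back-of-the-envelope only: a small per-base error rate is amplified by the
# sheer number of bases called in a genome-scale run. Rates are illustrative.

def expected_error_calls(error_rate: float, bases_called: int) -> float:
    """Expected number of erroneous base calls for a given per-base error rate."""
    return error_rate * bases_called

GENOME_BASES = 3_000_000_000  # ~3 Gb, one pass over a human genome

for rate in (0.001, 0.006):  # 0.1% and 0.6%, chosen for illustration
    errors = expected_error_calls(rate, GENOME_BASES)
    print(f"{rate:.1%} per-base error rate -> ~{errors / 1e6:.0f} million erroneous calls")
```

Even at 0.1%, millions of positions are miscalled in expectation, which is why distinguishing true polymorphisms from sequencing errors becomes difficult at genome scale.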
Various kinds of bias affect the analytical validity of NGS data [38,41]. Systematic bias involves non-random errors arising from inaccuracies inherent in the platform and its associated protocols, including errors deriving from the methods used to generate the original sequencing library [38]. Systematic errors can also reflect coverage bias, which may occur in regions where the genome sequence, chemistry, or conformation affects data output. This form of bias can be partly platform-dependent but can also occur across platforms, and the error involved can be substantial; for example, a 2012 study by Quail and colleagues [42] of three platforms, the Ion Torrent Personal Genome Machine, Pacific Biosciences PacBio RS, and Illumina MiSeq, found that output from sequencing extremely AT-rich genomes contained high levels of bias and errors, with no coverage of almost 30% of the genomes investigated. Another important component of systematic bias, pertinent to laboratory quality assurance and assay robustness, is batch effects relating to external factors such as reagent variability [38,41].
Sequencing accuracy for leading, longer-established technologies such as Illumina is often over 99% [43,44]. For single nucleotide variants differing from the reference genotype, the error rates for whole-genome and whole-exome sequencing on Illumina HiSeq or Complete Genomics platforms have been estimated at up to 0.1% or 0.6%, respectively, using replicate high-coverage sequencing of human blood and saliva DNA samples [39]; advances such as the HiSeq X Ten model and the Complete Genomics Long Fragment Read technology [45] are achieving considerably better rates.
These are relatively well-established technologies which have been in use and evolving for some time, facilitating development of expertise and optimisation of protocols. However, these technologies tend to be relatively costly compared to some other platforms, which in turn are sometimes less accurate, with higher error rates [38,43,46]. Sequencing accuracy of the PacBio platform has been reported to be in the range of 80%–90% [43,47]. In the comparison by Quail and colleagues noted above, which ran the Ion Torrent Personal Genome Machine, Pacific Biosciences PacBio RS, and Illumina MiSeq on a set of four microbial genomes, error rates were below 0.4% for the Illumina platform, 1.78% for Ion Torrent, and 13% for PacBio sequencing [42]. The number of error-free reads, without a single mismatch or insertion/deletion (indel), was 76.45%, 15.92%, and 0% for MiSeq, Ion Torrent, and PacBio, respectively. The PacBio errors were evenly distributed, whereas MiSeq produced more errors after long (>20-base) homopolymer tracts or in GC-rich motifs. The affordable and widely used Ion Torrent platform produced erroneous base numbers for homopolymers >8 bases long and failed entirely to generate reads for long (>14-base) homopolymer tracts, along with strand-specific errors that were not associated with any obvious motif.
The long reads and low error rates of early NGS platforms such as Roche 454 sequencers made error correction relatively unimportant [38]. Most error-correction programs have primarily addressed substitution errors, since these have been an issue for the widely used Illumina machines; however, short read platforms, such as Ion Torrent, are more prone to other sorts of errors, such as indels [48,49,50]. As approaches to error correction are refined for each emerging technology, the accuracy of the output is likely to improve considerably; one of the potentially most exciting new developments, the MinION, has a raw sequencing error rate of about 12% which can be improved to 0%–3% with hybrid or de novo error correction [51]. Such errors are often relatively unimportant for discovery-based applications in research settings or clinical investigations to identify disease-related mutations in families, where candidate variants can be validated using a range of other approaches. However, such errors are more of a problem in clinical pharmacogenomic contexts requiring fast and reliable decisions about medications, where microarrays and, in particular, validated custom-designed arrays for pharmacogenomics and other applications can offer more reliable options [10,34,52].

3.2. Clinical Validity and Utility

While, as discussed above, a test must be able to evaluate the measure of interest accurately (analytical validity), a test only has clinical validity if what is being measured correlates closely with some clinical outcome of interest. In the context of pharmacogenomics, clinical validity translates to the ability of a genomic test to detect or predict the response to a drug correctly and with high specificity and sensitivity.
Even though a test may have analytical and clinical validity, it still need not necessarily be clinically useful if, for example, the information provided by the test does not serve any useful purpose for the physician, the patient or other relevant stakeholders. The clinical utility of a genomic test, in the broadest sense, can be considered to refer to the usefulness of the information it provides in enabling clinicians, patients or other stakeholders to make appropriate health-related decisions; a comprehensive list of factors influencing clinical utility is provided by the Centers for Disease Control and Prevention (CDC) [53]. Relevant considerations include whether appropriate equipment, expertise and validated educational materials are available to allow effective use of test results in healthcare decision making. In the present context the concept of clinical utility is used primarily with regard to the requirement that a pharmacogenomics test should provide information of value to decision making by health professionals or patients. For example, the test may help decide on applicable interventional approaches.
The availability and accessibility of a test also affect the extent of its usefulness. Ideally, pharmacogenomic test results would be accessible by professionals working at point-of-care (e.g., doctors or pharmacists), using a simple database query that would link the prescribed drug with a patient’s genotype to identify any recommended modifications to the treatment or dose. In our experience, and as evidenced by large-scale pre-emptive clinical pharmacogenomics testing programs described in more detail below [54], pharmacogenomics test results can be relatively easily and rapidly extracted from DNA array data using potentially automatable procedures, and can be interpreted by personnel with relatively little training and experience.
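The kind of database query envisaged above can be sketched very simply. In the following illustration, all gene-drug pairings, phenotype labels, recommendation texts, and patient data are simplified placeholders invented for this sketch and are not clinical guidance; the gene-drug pairs merely echo well-known CPIC examples:

```python
# Hypothetical sketch of a point-of-care lookup: link a prescribed drug to a
# patient's pre-emptively stored, array-derived phenotype and return any
# recommended action. All data below are invented placeholders.

DRUG_GENE = {
    "clopidogrel": "CYP2C19",
    "simvastatin": "SLCO1B1",
    "azathioprine": "TPMT",
}

# Recommended action per (gene, phenotype); placeholder wording only.
RECOMMENDATIONS = {
    ("CYP2C19", "poor metabolizer"): "Consider alternative antiplatelet agent.",
    ("TPMT", "poor metabolizer"): "Reduce dose substantially or select alternative.",
}

# Pre-emptively stored phenotypes for one (invented) patient.
patient_phenotypes = {"CYP2C19": "poor metabolizer", "SLCO1B1": "normal function"}

def check_prescription(drug: str, phenotypes: dict) -> str:
    """Return the stored recommendation, if any, for this drug and patient."""
    gene = DRUG_GENE.get(drug)
    if gene is None or gene not in phenotypes:
        return "No pharmacogenomic data on file; prescribe per standard guidance."
    action = RECOMMENDATIONS.get((gene, phenotypes[gene]))
    return action or "No actionable variant; standard dosing applies."

print(check_prescription("clopidogrel", patient_phenotypes))
```

Because the genotype is determined once and stored, each prescribing event reduces to a constant-time lookup of this kind, which is what makes interpretation feasible for staff with relatively little specialist training.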
In contrast, one of the main practical barriers to implementation of NGS in clinical practice is the relative scarcity of personnel capable of handling and interpreting NGS data. Expertise in distinguishing errors from true calls (which also reflects analytical validity) is critical for ensuring that patients receive the correct prescription and for avoiding possible claims of negligence. The interpretation of NGS data presently involves extensive and time-consuming analyses that require expert human judgement. While similar considerations also apply to microarray data, at present there are relatively well-established automated algorithms and pipelines for array data processing [55], whereas NGS data analysis still more commonly requires considerable human input and judgement [46,56].

4. Additional Technical Issues

Speed is another concern that is often raised. The entire turnaround time, from DNA collection to reporting, should be no longer than, and if possible shorter than, that of standard pathology tests. However, while the speed of the new sequencing technologies continues to increase, error correction and other analytical requirements have not kept pace and continue to cause bottlenecks. For example, Yang and colleagues (2013) note the need to improve the run-time of error correction algorithms and of validation procedures involving hybrid datasets generated by multiple platforms [38]. Yet, while fast turnaround will be important for future applications of epigenomics, transcriptomics, or proteomics in diagnostic contexts, where profiles are dynamic and a person’s current status must be assessed rapidly, the genome remains effectively unchanged over time and can therefore be determined in advance (“pre-emptively”) before health problems arise, making turnaround time largely irrelevant. Thus, in the context of pharmacogenomics, speed may be a relatively unimportant criterion for pre-emptive testing, although it remains an issue when rapid clinical decisions are required for a patient who has not been previously genotyped.
Infrastructure also requires consideration. Ideally, it should ultimately be possible to perform genotyping and data interrogation in close proximity to the point-of-care, e.g., within a hospital laboratory or in community settings such as a pharmacy or GP office. Equipment for running the assay should, therefore, be user-friendly and self-contained, while data analysis and reporting should be compatible with the computing power of a standard desktop or laptop computer. Technologies such as the Oxford Nanopore MinION, while still evolving, hold considerable promise in this context [57].
Data storage and linkage are among the most important limiting factors in many clinical contexts. For example, the relevant data generated by a platform should be sufficiently compact to link with electronic health records. Given that current microarray platforms can screen approximately 500,000–1 million or more SNPs, or 1 million probes for detecting copy number variation, for a few hundred dollars, and that the resulting SNP data can be compressed to a few megabytes, this technology appears the most appropriate fit based on the above criteria [58]. The need for infrastructure that can handle “big data” is another potential limitation of NGS that makes it currently less feasible for clinical applications. In terms of data storage, the raw compressed fastq files from a single whole genome sequencing run at 30× sequencing depth amount to ~100 gigabytes; on top of this, the aligned, processed files require storage of around 1–1.5 terabytes per patient [59]. While advances in cloud storage and data transmission technologies will help overcome problems associated with storing these massive volumes of data, at least with regard to pharmacogenomics, it is debatable whether this is an economically sound use of resources given the small number of known variants that modify drug responses.
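The storage arithmetic can be made concrete using the figures quoted above (a few megabytes of compressed SNP calls per patient versus ~100 GB of raw fastq and ~1–1.5 TB of processed sequence data per 30× genome). The per-patient values below are rounded assumptions chosen within those ranges:

```python
# Rough per-cohort storage comparison using the figures cited in the text.
# Per-patient values are illustrative assumptions within the quoted ranges.

ARRAY_MB_PER_PATIENT = 5     # compressed SNP calls: "a few megabytes"
WGS_GB_RAW = 100             # compressed fastq at 30x depth
WGS_TB_PROCESSED = 1.25      # aligned/processed files, midpoint of 1-1.5 TB

patients = 10_000
array_tb = ARRAY_MB_PER_PATIENT * patients / 1_000_000
wgs_tb = (WGS_GB_RAW / 1_000 + WGS_TB_PROCESSED) * patients

print(f"Array data for {patients:,} patients: ~{array_tb:.2f} TB")
print(f"WGS data for {patients:,} patients:   ~{wgs_tb:,.0f} TB")
```

On these assumptions, a 10,000-patient array-based program fits in well under a terabyte, whereas the equivalent whole-genome data would run to thousands of terabytes, a difference of several orders of magnitude.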

5. Ethical, Legal, and Social Issues (ELSI)

Several other issues also need to be taken into account in addition to the above considerations, most notably ELSI. In this regard there are a range of potential concerns, including privacy and unanticipated results. Irrespective of the technology, some of these concerns can be addressed by appropriate consent procedures such as “traffic light” systems similar to those being used in related contexts, such as consent for the release of an individual’s genomic information for research purposes [60]. Such systems can be used to enable people to give different levels of consent for different categories of genomic information. For example, with respect to privacy, a person might elect not to provide certain kinds of genomic information for commercial purposes but may be happy to provide this information for other research purposes. With respect to unanticipated results, a person might elect not to be informed about a mutation predisposing to an untreatable condition. However, it is worth noting that, irrespective of the technology used (microarray or NGS), these issues are far less likely to present concerns for pharmacogenomic tests than for genomic diagnostic testing. The presence of a pharmacogenomically actionable mutation is generally innocuous unless a person takes a medicine affected by the mutation, in which case the information becomes potentially advantageous, reducing the likelihood of an adverse reaction or therapeutic failure.
Another important ethical concern surrounding pharmacogenomics relates to justice and equity. As pharmacogenomics and precision medicine begin to assume pivotal roles in healthcare, there will be increasing need to ensure fair, even distribution of benefits to prevent further widening of the gaps that exist between individuals of different socioeconomic status and, in particular, people from developing or resource-limited countries, who may already be disadvantaged with respect to healthcare. Such countries are also least able to sustain any inefficiencies in healthcare systems arising as a result of drugs that were originally developed for use in other populations not working appropriately because of genomic differences.
Countries from Europe, Africa, Asia, and the Pacific have now initiated the implementation of genomic approaches in health care. In view of the foregoing discussion and taking into consideration the issues of inadequate funds, lack of access to technology, and scarcity of well-trained health experts in developing or resource-limited countries, currently the only viable approach to ensure that these countries are given equitable opportunities to benefit from these new initiatives would appear to be the utilization of microarray techniques, rather than next-generation approaches. Initiatives implementing large-scale pharmacogenomics are now starting to appear worldwide. For example, the European Ubiquitous Pharmacogenomics network [61] project provides information on the prevalence and effects of pharmacogenomically relevant gene variants in Europe, with particular focus on developing countries, in order to generate locally-relevant drug dose recommendations [62]. Pharmacogenomics networks are also being set up in Asia, for example the Asian Network for Pharmacogenomics [63]. The Human Heredity and Health in Africa (H3Africa) initiative, backed by the US NIH, the UK Wellcome Trust, and the African Society of Human Genetics, aims to develop the capacity of African scientists to apply genomic and epidemiological approaches in locally-relevant clinical contexts [62,64].

6. Using Microarrays for Pre-Emptive Pharmacogenomics Testing

One example of successful array-based pharmacogenomics in practice is the Pharmacogenomic Resource for Enhanced Decisions in Care and Treatment (PREDICT) program at Vanderbilt University Medical Centre [54]. The PREDICT program uses panel-based genotyping to identify specific SNPs that are known to have drug-response associations, allowing tailored clinical decision support to be provided for each participant. In an initial study of almost 10,000 participants, which focused on only five well-established drug-gene interactions (clopidogrel—CYP2C19; simvastatin—SLCO1B1; warfarin—CYP2C9 and VKORC1; thiopurines—TPMT; tacrolimus—CYP3A5), one or more actionable variants were identified in 91% of genotyped participants [54]. The clinical utility of this approach is, therefore, likely to be considerably greater than that of single gene tests. In addition, the pre-emptive, panel-based genotyping approach used in this study enabled substantial reduction in the testing burden compared to single gene assays and facilitated the provision of results at the point of care. This study demonstrates that custom-designed arrays are appropriate for accurate identification of common SNPs in accredited, quality-controlled pharmacogenomic screening services.
As was also noted in the introduction, a recent review has highlighted the success of array-based pre-emptive pharmacogenomics testing in several other US medical centers, namely St. Jude Children’s Research Hospital, University of Florida and Shands Hospital, the Mayo Clinic, and Mount Sinai Medical Centre [65]. While the genotyping platform varied among the centres, as did the number of genes assayed (ranging from 34 to 230), each program identified a high prevalence of actionable variants. In fact, when considering only 12 pharmacogenes, it is estimated that over 97% of the US population have at least one actionable high-risk diplotype [65]. The experience of these trailblazing centres in instituting pre-emptive pharmacogenomics testing has already highlighted challenges and solutions to implementation, paving the way for smooth deployment in other locations.

7. Conclusions

Advances in sequencing technologies have revolutionised genomic discovery in the lab, while concomitant reductions in cost will increase the feasibility of employing such technologies in routine clinical or pharmacy practice. Yet the availability of new technologies should not dictate that their predecessors be discarded for every application. In the context of widespread pharmacogenomics profiling of large numbers of individuals, existing microarray technology offers considerable advantages over sequencing with respect to cost of infrastructure, ease of analysis and interpretation, and logistics of data storage and interrogation. In contrast, NGS offers no obvious advantages over array-based methods for screening large numbers of common variants. While the urge to embrace an exciting new technology as a panacea can sometimes seem irresistible, we hope that common sense will prevail in judging the utility of genomics technologies in a context-dependent manner.

Author Contributions

All authors contributed to literature searching and to writing the review.

Conflicts of Interest

The authors declare no conflict of interest.


  1. Precision Medicine Initiative (PMI) Working Group Report to the Advisory Committee to the Director, NIH. The Precision Medicine Initiative Cohort Program-Building a Research Foundation for 21st Century Medicine; NIH: Bethesda, MD, USA, 2016. [Google Scholar]
  2. Wang, D.G.; Fan, J.B.; Siao, C.J.; Berno, A.; Young, P.; Sapolsky, R.; Ghandour, G.; Perkins, N.; Winchester, E.; Spencer, J.; et al. Large-scale identification, mapping, and genotyping of single-nucleotide polymorphisms in the human genome. Science 1998, 280, 1077–1082. [Google Scholar] [CrossRef] [PubMed]
  3. Cutler, D.J.; Zwick, M.E.; Carrasquillo, M.M.; Yohn, C.T.; Tobin, K.P.; Kashuk, C.; Mathews, D.J.; Shah, N.A.; Eichler, E.E.; Warrington, J.A.; et al. High-throughput variation detection and genotyping using microarrays. Genome Res. 2001, 11, 1913–1925. [Google Scholar] [PubMed]
  4. Sund, K.L.; Zimmerman, S.L.; Thomas, C.; Mitchell, A.L.; Prada, C.E.; Grote, L.; Bao, L.; Martin, L.J.; Smolarek, T.A. Regions of homozygosity identified by SNP microarray analysis aid in the diagnosis of autosomal recessive disease and incidentally detect parental blood relationships. Genet. Med. 2013, 15, 70–78. [Google Scholar] [CrossRef] [PubMed]
  5. Kumar, P.; Al-Shafai, M.; Al Muftah, W.A.; Chalhoub, N.; Elsaid, M.F.; Aleem, A.A.; Suhre, K. Evaluation of SNP calling using single and multiple-sample calling algorithms by validation against array base genotyping and mendelian inheritance. BMC Res. Notes 2014, 7, 747. [Google Scholar] [CrossRef] [PubMed]
  6. Perez-Enciso, M.; Rincon, J.C.; Legarra, A. Sequence- vs. Chip-assisted genomic selection: Accurate biological information is advised. Genet. Sel. Evol. 2015, 47, 43. [Google Scholar] [CrossRef] [PubMed]
  7. Sboner, A.; Mu, X.J.; Greenbaum, D.; Auerbach, R.K.; Gerstein, M.B. The real cost of sequencing: Higher than you think! Genome Biol. 2011, 12, 125. [Google Scholar] [CrossRef] [PubMed]
  8. Su, Z.; Fang, H.; Hong, H.; Shi, L.; Zhang, W.; Zhang, W.; Zhang, Y.; Dong, Z.; Lancashire, L.J.; Bessarabova, M.; et al. An investigation of biomarkers derived from legacy microarray data for their utility in the RNA-seq era. Genome Biol. 2014, 15, 523. [Google Scholar] [CrossRef] [PubMed]
  9. Bates, S. The role of gene expression profiling in drug discovery. Curr. Opin. Pharmacol. 2011, 11, 549–556. [Google Scholar] [CrossRef] [PubMed]
  10. Milward, E.A.; Daneshi, N.; Johnstone, D.M. Emerging real-time technologies in molecular medicine and the evolution of integrated “pharmacomics” approaches to personalized medicine and drug discovery. Pharmacol. Ther. 2012, 136, 295–304. [Google Scholar] [CrossRef] [PubMed]
  11. Anderson, D.C.; Kodukula, K. Biomarkers in pharmacology and drug discovery. Biochem. Pharmacol. 2014, 87, 172–188. [Google Scholar] [CrossRef] [PubMed]
  12. Savers, S. Guidance for industry E15 definitions for genomic biomarkers, pharmacogenomics, pharmacogenetics, genomic data, and sample coding categories. Biotechnol. Law Rep. 2008, 27, 359–363. [Google Scholar]
  13. PharmGKB. Pharmacogenomics Knowledge Implementation. Available online: (accessed on 23 May 2016).
  14. Martin, M.A.; Klein, T.E.; Dong, B.J.; Pirmohamed, M.; Haas, D.W.; Kroetz, D.L. Clinical pharmacogenetics implementation consortium guidelines for HLA-B genotype and abacavir dosing. Clin. Pharmacol. Ther. 2012, 91, 734–738. [Google Scholar] [CrossRef] [PubMed]
  15. Hershfield, M.S.; Callaghan, J.T.; Tassaneeyakul, W.; Mushiroda, T.; Thorn, C.F.; Klein, T.E.; Lee, M.T. Clinical pharmacogenetics implementation consortium guidelines for human leukocyte antigen-B genotype and allopurinol dosing. Clin. Pharmacol. Ther. 2013, 93, 153–158. [Google Scholar] [CrossRef] [PubMed]
  16. Caudle, K.E.; Rettie, A.E.; Whirl-Carrillo, M.; Smith, L.H.; Mintzer, S.; Lee, M.T.; Klein, T.E.; Callaghan, J.T. Clinical pharmacogenetics implementation consortium guidelines for CYP2C9 and HLA-B genotypes and phenytoin dosing. Clin. Pharmacol. Ther. 2014, 96, 542–548. [Google Scholar] [CrossRef] [PubMed][Green Version]
  17. Leckband, S.G.; Kelsoe, J.R.; Dunnenberger, H.M.; George, A.L., Jr.; Tran, E.; Berger, R.; Muller, D.J.; Whirl-Carrillo, M.; Caudle, K.E.; Pirmohamed, M. Clinical pharmacogenetics implementation consortium guidelines for HLA-B genotype and carbamazepine dosing. Clin. Pharmacol. Ther. 2013, 94, 324–328. [Google Scholar] [CrossRef] [PubMed]
  18. Hicks, J.K.; Swen, J.J.; Thorn, C.F.; Sangkuhl, K.; Kharasch, E.D.; Ellingrod, V.L.; Skaar, T.C.; Muller, D.J.; Gaedigk, A.; Stingl, J.C. Clinical pharmacogenetics implementation consortium guideline for CYP2D6 and CYP2C19 genotypes and dosing of tricyclic antidepressants. Clin. Pharmacol. Ther. 2013, 93, 402–408. [Google Scholar] [CrossRef] [PubMed]
  19. Scott, S.A.; Sangkuhl, K.; Gardner, E.E.; Stein, C.M.; Hulot, J.S.; Johnson, J.A.; Roden, D.M.; Klein, T.E.; Shuldiner, A.R. Clinical pharmacogenetics implementation consortium guidelines for cytochrome P450-2C19 (CYP2C19) genotype and clopidogrel therapy. Clin. Pharmacol. Ther. 2011, 90, 328–332. [Google Scholar] [CrossRef] [PubMed]
  20. Hicks, J.K.; Bishop, J.R.; Sangkuhl, K.; Muller, D.J.; Ji, Y.; Leckband, S.G.; Leeder, J.S.; Graham, R.L.; Chiulli, D.L.; A, L.L.; et al. Clinical pharmacogenetics implementation consortium (CPIC) guideline for CYP2D6 and CYP2C19 genotypes and dosing of selective serotonin reuptake inhibitors. Clin. Pharmacol. Ther. 2015, 98, 127–134. [Google Scholar] [CrossRef] [PubMed][Green Version]
  21. Crews, K.R.; Gaedigk, A.; Dunnenberger, H.M.; Klein, T.E.; Shen, D.D.; Callaghan, J.T.; Kharasch, E.D.; Skaar, T.C. Clinical pharmacogenetics implementation consortium (CPIC) guidelines for codeine therapy in the context of cytochrome P450 2D6 (CYP2D6) genotype. Clin. Pharmacol. Ther. 2012, 91, 321–326. [Google Scholar] [CrossRef] [PubMed]
  22. Gammal, R.S.; Court, M.H.; Haidar, C.E.; Iwuchukwu, O.F.; Gaur, A.H.; Alvarellos, M.; Guillemette, C.; Lennox, J.L.; Whirl-Carrillo, M.; Brummel, S.; et al. Clinical pharmacogenetics implementation consortium (CPIC) guideline for UGT1A1 and Atazanavir prescribing. Clin. Pharmacol. Ther. 2016, 99, 363–369. [Google Scholar] [CrossRef] [PubMed]
  23. Relling, M.V.; Gardner, E.E.; Sandborn, W.J.; Schmiegelow, K.; Pui, C.H.; Yee, S.W.; Stein, C.M.; Carrillo, M.; Evans, W.E.; Klein, T.E. Clinical pharmacogenetics implementation consortium guidelines for thiopurine methyltransferase genotype and thiopurine dosing. Clin. Pharmacol. Ther. 2011, 89, 387–391. [Google Scholar] [CrossRef] [PubMed]
  24. Relling, M.V.; Gardner, E.E.; Sandborn, W.J.; Schmiegelow, K.; Pui, C.H.; Yee, S.W.; Stein, C.M.; Carrillo, M.; Evans, W.E.; Hicks, J.K.; et al. Clinical pharmacogenetics implementation consortium guidelines for thiopurine methyltransferase genotype and thiopurine dosing: 2013 update. Clin. Pharmacol. Ther. 2013, 93, 324–325. [Google Scholar] [CrossRef] [PubMed]
  25. Caudle, K.E.; Thorn, C.F.; Klein, T.E.; Swen, J.J.; McLeod, H.L.; Diasio, R.B.; Schwab, M. Clinical pharmacogenetics implementation consortium guidelines for dihydropyrimidine dehydrogenase genotype and fluoropyrimidine dosing. Clin. Pharmacol. Ther. 2013, 94, 640–645. [Google Scholar] [CrossRef] [PubMed]
  26. Clancy, J.P.; Johnson, S.G.; Yee, S.W.; McDonagh, E.M.; Caudle, K.E.; Klein, T.E.; Cannavo, M.; Giacomini, K.M. Clinical pharmacogenetics implementation consortium (CPIC) guidelines for ivacaftor therapy in the context of CFTR genotype. Clin. Pharmacol. Ther. 2014, 95, 592–597. [Google Scholar] [CrossRef] [PubMed]
  27. Johnson, J.A.; Gong, L.; Whirl-Carrillo, M.; Gage, B.F.; Scott, S.A.; Stein, C.M.; Anderson, J.L.; Kimmel, S.E.; Lee, M.T.; Pirmohamed, M.; et al. Clinical pharmacogenetics implementation consortium guidelines for CYP2C9 and VKORC1 genotypes and warfarin dosing. Clin. Pharmacol. Ther. 2011, 90, 625–629. [Google Scholar] [CrossRef] [PubMed]
  28. Relling, M.V.; McDonagh, E.M.; Chang, T.; Caudle, K.E.; McLeod, H.L.; Haidar, C.E.; Klein, T.; Luzzatto, L. Clinical pharmacogenetics implementation consortium (CPIC) guidelines for rasburicase therapy in the context of G6PD deficiency genotype. Clin. Pharmacol. Ther. 2014, 96, 169–174. [Google Scholar] [CrossRef] [PubMed]
  29. Wilke, R.A.; Ramsey, L.B.; Johnson, S.G.; Maxwell, W.D.; McLeod, H.L.; Voora, D.; Krauss, R.M.; Roden, D.M.; Feng, Q.; Cooper-Dehoff, R.M.; et al. The clinical pharmacogenomics implementation consortium: CPIC guideline for SLCO1B1 and simvastatin-induced myopathy. Clin. Pharmacol. Ther. 2012, 92, 112–117. [Google Scholar] [CrossRef] [PubMed]
  30. Ramsey, L.B.; Johnson, S.G.; Caudle, K.E.; Haidar, C.E.; Voora, D.; Wilke, R.A.; Maxwell, W.D.; McLeod, H.L.; Krauss, R.M.; Roden, D.M.; et al. The clinical pharmacogenetics implementation consortium guideline for SLCO1B1 and simvastatin-induced myopathy: 2014 Update. Clin. Pharmacol. Ther. 2014, 96, 423–428. [Google Scholar] [CrossRef] [PubMed]
  31. Birdwell, K.A.; Decker, B.; Barbarino, J.M.; Peterson, J.F.; Stein, C.M.; Sadee, W.; Wang, D.; Vinks, A.A.; He, Y.; Swen, J.J.; et al. Clinical pharmacogenetics implementation consortium (CPIC) guidelines for CYP3A5 genotype and tacrolimus dosing. Clin. Pharmacol. Ther. 2015, 98, 19–24. [Google Scholar] [CrossRef] [PubMed][Green Version]
  32. Muir, A.J.; Gong, L.; Johnson, S.G.; Lee, M.T.; Williams, M.S.; Klein, T.E.; Caudle, K.E.; Nelson, D.R. Clinical pharmacogenetics implementation consortium (CPIC) guidelines for IFNL3 (IL28B) genotype and PEG interferon-α-based regimens. Clin. Pharmacol. Ther. 2014, 95, 141–146. [Google Scholar] [CrossRef] [PubMed]
  33. Mizzi, C.; Peters, B.; Mitropoulou, C.; Mitropoulos, K.; Katsila, T.; Agarwal, M.R.; van Schaik, R.H.; Drmanac, R.; Borg, J.; Patrinos, G.P. Personalized pharmacogenomics profiling using whole-genome sequencing. Pharmacogenomics 2014, 15, 1223–1234. [Google Scholar] [CrossRef] [PubMed]
  34. Chua, E.W.; Cree, S.L.; Ton, K.N.; Lehnert, K.; Shepherd, P.; Helsby, N.; Kennedy, M.A. Cross-comparison of exome analysis, next-generation sequencing of amplicons, and the iPLEX® ADME PGx panel for pharmacogenomic profiling. Front. Pharmacol. 2016, 7, 1. [Google Scholar] [CrossRef] [PubMed]
  35. Sheridan, C. Milestone approval lifts Illumina’s NGS from research into clinic. Nat. Biotechnol. 2014, 32, 111–112. [Google Scholar] [CrossRef] [PubMed]
  36. Grosse, S.D.; Khoury, M.J. What is the clinical utility of genetic testing? Genet. Med. 2006, 8, 448–450. [Google Scholar] [CrossRef] [PubMed]
  37. Haddow, J.E.; Palomaki, G.E. ACCE: A model process for evaluating data on emerging genetic tests. Hum. Genome Epidemiol. 2004, 217–233. [Google Scholar]
  38. Yang, X.; Chockalingam, S.P.; Aluru, S. A survey of error-correction methods for next-generation sequencing. Brief. Bioinform. 2013, 14, 56–66. [Google Scholar] [CrossRef] [PubMed]
  39. Wall, J.D.; Tang, L.F.; Zerbe, B.; Kvale, M.N.; Kwok, P.Y.; Schaefer, C.; Risch, N. Estimating genotype error rates from high-coverage next-generation sequence data. Genome Res. 2014, 24, 1734–1739. [Google Scholar] [CrossRef] [PubMed]
  40. Daber, R.; Sukhadia, S.; Morrissette, J.J. Understanding the limitations of next generation sequencing informatics, an approach to clinical pipeline validation using artificial data sets. Cancer Genet. 2013, 206, 441–448. [Google Scholar] [CrossRef] [PubMed]
  41. Taub, M.A.; Corrada Bravo, H.; Irizarry, R.A. Overcoming bias and systematic errors in next generation sequencing data. Genome Med. 2010, 2, 87. [Google Scholar] [CrossRef] [PubMed]
  42. Quail, M.A.; Smith, M.; Coupland, P.; Otto, T.D.; Harris, S.R.; Connor, T.R.; Bertoni, A.; Swerdlow, H.P.; Gu, Y. A tale of three next generation sequencing platforms: Comparison of Ion Torrent, Pacific Biosciences and Illumina MiSeq sequencers. BMC Genomics 2012, 13, 341. [Google Scholar] [CrossRef] [PubMed]
  43. Hackl, T.; Hedrich, R.; Schultz, J.; Forster, F. Proovread: Large-scale high-accuracy PacBio correction through iterative short read consensus. Bioinformatics 2014, 30, 3004–3011. [Google Scholar] [CrossRef] [PubMed]
  44. Dohm, J.C.; Lottaz, C.; Borodina, T.; Himmelbauer, H. Substantial biases in ultra-short read data sets from high-throughput DNA sequencing. Nucleic Acids Res. 2008, 36, e105. [Google Scholar] [CrossRef] [PubMed]
  45. Peters, B.A.; Kermani, B.G.; Sparks, A.B.; Alferov, O.; Hong, P.; Alexeev, A.; Jiang, Y.; Dahl, F.; Tang, Y.T.; Haas, J.; et al. Accurate whole-genome sequencing and haplotyping from 10 to 20 human cells. Nature 2012, 487, 190–195. [Google Scholar] [CrossRef] [PubMed]
  46. O’Rawe, J.; Jiang, T.; Sun, G.; Wu, Y.; Wang, W.; Hu, J.; Bodily, P.; Tian, L.; Hakonarson, H.; Johnson, W.E.; et al. Low concordance of multiple variant-calling pipelines: Practical implications for exome and genome sequencing. Genome Med. 2013, 5, 28. [Google Scholar] [CrossRef] [PubMed]
  47. Ono, Y.; Asai, K.; Hamada, M. PBSIM: PacBio reads simulator—Toward accurate genome assembly. Bioinformatics 2013, 29, 119–121. [Google Scholar] [CrossRef] [PubMed]
  48. Alic, A.S.; Tomas, A.; Medina, I.; Blanquer, I. Muffinec: Error correction for de novo assembly via greedy partitioning and sequence alignment. Inf. Sci. 2016, 329, 206–219. [Google Scholar] [CrossRef]
  49. Nakamura, K.; Oshima, T.; Morimoto, T.; Ikeda, S.; Yoshikawa, H.; Shiwa, Y.; Ishikawa, S.; Linak, M.C.; Hirai, A.; Takahashi, H.; et al. Sequence-specific error profile of Illumina sequencers. Nucleic Acids Res. 2011, 39, e90. [Google Scholar] [CrossRef] [PubMed]
  50. Hoffmann, S.; Otto, C.; Kurtz, S.; Sharma, C.M.; Khaitovich, P.; Vogel, J.; Stadler, P.F.; Hackermuller, J. Fast mapping of short sequences with mismatches, insertions and deletions using index structures. PLoS Comput. Biol. 2009, 5, e1000502. [Google Scholar] [CrossRef] [PubMed]
  51. Hargreaves, A.D.; Mulley, J.F. Assessing the utility of the Oxford Nanopore MinION for snake venom gland cDNA sequencing. PeerJ. 2015, 3, e1441. [Google Scholar] [CrossRef] [PubMed]
  52. Johnson, J.A.; Burkley, B.M.; Langaee, T.Y.; Clare-Salzler, M.J.; Klein, T.E.; Altman, R.B. Implementing personalized medicine: Development of a cost-effective customized pharmacogenetics genotyping array. Clin. Pharmacol. Ther. 2012, 92, 437–439. [Google Scholar] [CrossRef] [PubMed]
  53. Centers for Disease Control and Prevention. Genomic Testing. Available online: (accessed on 23 May 2016).
  54. Van Driest, S.L.; Shi, Y.; Bowton, E.A.; Schildcrout, J.S.; Peterson, J.F.; Pulley, J.; Denny, J.C.; Roden, D.M. Clinically actionable genotypes among 10,000 patients with preemptive pharmacogenomic testing. Clin. Pharmacol. Ther. 2014, 95, 423–431. [Google Scholar] [CrossRef] [PubMed]
  55. Johnstone, D.M.; Riveros, C.; Heidari, M.; Graham, R.M.; Trinder, D.; Berretta, R.; Olynyk, J.K.; Scott, R.J.; Moscato, P.; Milward, E.A. Evaluation of different normalization and analysis procedures for Illumina gene expression microarray data involving small changes. Microarrays 2013, 2, 131–152. [Google Scholar] [CrossRef]
  56. Baker, M. De novo genome assembly: What every biologist should know. Nat. Methods 2012, 9, 333. [Google Scholar] [CrossRef]
  57. Oxford Nanopore Technologies. Learn About Minion. Available online: (accessed on 23 May 2016).
  58. LaFramboise, T. Single nucleotide polymorphism arrays: A decade of biological, computational and technological advances. Nucleic Acids Res. 2009, 37, 4181–4193. [Google Scholar] [CrossRef] [PubMed]
  59. Eisenstein, M. Big data: The power of petabytes. Nature 2015, 527, S2–S4. [Google Scholar] [CrossRef] [PubMed]
  60. Ethics and Genetics Committee. Ethics and Genetics Report 2013—A Shift in Privacy Law and the Attendant Risks; Ethics and Genetics: Glasgow, Scotland, 2013. [Google Scholar]
  61. The U-PGx Consortium. Ubiquitous Pharmacogenomics. Available online: (accessed on 23 May 2016).
  62. Mitropoulos, K.; Al Jaibeji, H.; Forero, D.A.; Laissue, P.; Wonkam, A.; Lopez-Correa, C.; Mohamed, Z.; Chantratita, W.; Lee, M.T.; Llerena, A.; et al. Success stories in genomic medicine from resource-limited countries. Hum. Genomics 2015, 9, 11. [Google Scholar] [CrossRef] [PubMed][Green Version]
  63. Asian Network For Pharmacogenomics Research. Overview. Available online: (accessed on 23 May 2016).
  64. Rotimi, C.; Abayomi, A.; Abimiku, A.; Adabayeri, V.M.; Adebamowo, C.; Adebiyi, E.; Ademola, A.D.; Adeyemo, A.; Adu, D.; Affolabi, D.; et al. Research capacity. Enabling the genomic revolution in Africa. Science 2014, 344, 1346–1348. [Google Scholar] [PubMed]
  65. Dunnenberger, H.M.; Crews, K.R.; Hoffman, J.M.; Caudle, K.E.; Broeckel, U.; Howard, S.C.; Hunkler, R.J.; Klein, T.E.; Evans, W.E.; Relling, M.V. Preemptive clinical pharmacogenetics implementation: Current programs in five US medical centers. Annu. Rev. Pharmacol. Toxicol. 2015, 55, 89–106. [Google Scholar] [CrossRef] [PubMed]
Table 1. Genes considered to have high levels of evidence for effects on drug responses according to PharmGKB and CPIC Gene and Drug Guidelines.
| Genes | Drugs | CPIC | PharmGKB | CPIC Publications |
| --- | --- | --- | --- | --- |
| HLA-B | Abacavir; allopurinol; phenytoin; carbamazepine | A | 1A | [14,15,16,17] |
| CYP2C19 | Amitriptyline; clopidogrel; imipramine *; trimipramine *; citalopram; escitalopram | A | 1A | [18,19,20] |
| CYP2D6 | Amitriptyline; codeine; desipramine; doxepin; fluvoxamine; imipramine; nortriptyline; paroxetine; trimipramine | A | 1A | [18,20,21] |
| TPMT | Azathioprine; mercaptopurine; thioguanine | A | 1A | [23,24] |
| DPYD | Capecitabine; fluorouracil; tegafur | A | 1A | [25] |
| CYP2C9 | Warfarin; phenytoin ** | A | 1A | [16,27] |
| IFNL3 | Peginterferon alfa-2a; peginterferon alfa-2b; ribavirin; telaprevir | A | 1A | [32] |
* PharmGKB level of evidence 2A; ** PharmGKB level of evidence 1B.
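To illustrate how a curated gene-drug table such as Table 1 might be used in a pre-emptive pharmacogenomics workflow, the following sketch represents the CPIC level-A associations above as a simple lookup and queries which genotyped genes carry actionable guidelines for a given drug. The gene-drug pairs are taken directly from Table 1; the data structure and function names are purely illustrative assumptions, not part of any real decision-support system or of the programs described in this commentary.

```python
# Illustrative sketch only: Table 1's CPIC level-A / PharmGKB 1A gene-drug
# associations as a lookup. Names and structure are hypothetical.
CPIC_LEVEL_A = {
    "HLA-B":   ["abacavir", "allopurinol", "phenytoin", "carbamazepine"],
    "CYP2C19": ["amitriptyline", "clopidogrel", "imipramine", "trimipramine",
                "citalopram", "escitalopram"],
    "CYP2D6":  ["amitriptyline", "codeine", "desipramine", "doxepin",
                "fluvoxamine", "imipramine", "nortriptyline", "paroxetine",
                "trimipramine"],
    "TPMT":    ["azathioprine", "mercaptopurine", "thioguanine"],
    "DPYD":    ["capecitabine", "fluorouracil", "tegafur"],
    "CYP2C9":  ["warfarin", "phenytoin"],
    "IFNL3":   ["peginterferon alfa-2a", "peginterferon alfa-2b",
                "ribavirin", "telaprevir"],
}

def genes_flagging_drug(drug: str) -> list[str]:
    """Return genes whose CPIC level-A guidelines cover the given drug."""
    return sorted(g for g, drugs in CPIC_LEVEL_A.items() if drug in drugs)
```

For example, `genes_flagging_drug("phenytoin")` returns both `CYP2C9` and `HLA-B`, reflecting the combined CYP2C9/HLA-B phenytoin guideline [16]. A pre-emptive program would store array-derived genotypes for all such genes in advance, so this kind of lookup can fire at the point of prescribing.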