Key Points

The increased accessibility of genomics technologies has significantly advanced our understanding of the pharmacogenetics of adverse drug reactions (ADRs). Next-generation sequencing methodologies will continue this advance.

Many examples now exist of pharmacogenetic markers of ADRs. However, only a very small number have made the translation from discovery to clinical practice.

Widespread uptake of pharmacogenetic typing into healthcare practice still faces significant hurdles, including determining the nature of evidence that will be required, the availability and cost effectiveness of pre-prescription genotyping, and better education of the workforce.

1 Introduction

Pharmacogenomics is the term used to describe the role of the human genome in drug response. The relevance of an individual's genetic make-up to their response to drugs was recognized as long ago as 1959, just 6 years after the structure of DNA was discovered [1], when the term 'pharmacogenetics' was first used by Vogel [2]. Even before this, the condition favism (later shown to be due to G6PD deficiency) had been recognized since ancient Greek times. However, it is only really since the completion of the first full human genome sequence in 2001 [3] that research in this area has expanded (Fig. 1). Pharmacogenomics represents one component of the overall field of personalized medicine, an area of medicine whose ambition is to tailor procedures and therapeutics to an individual's make-up (disease sub-type, genetic, environmental, and clinical), maximizing the chance of therapeutic success and minimizing potential adverse effects. Translating the discovery of pharmacogenetic biomarkers into interventions at the bedside has been less successful than at first hoped, with a few notable exceptions, and this review will explore why this is and what could be done to improve the process [4, 5].

Fig. 1 Number of publications, in PubMed, in pharmacogenetics or pharmacogenomics over the years. For 2015 (open square), data to 29 September 2015 have been extrapolated pro rata to forecast the number of publications for the year

2 Variability in Drug Response

The efficacy of drugs varies widely, with only a proportion (which varies according to disease and drug) of patients responding. Adverse drug reactions (ADRs) account for between 6.5 % (hospital admissions) and 25 % (primary care) of attendances seeking medical help, and therefore are a significant burden on healthcare services [6]. ADRs cost in excess of £1 billion every year in the UK, with equivalent figures in other countries [7], so there is a clear benefit to understanding their causes (including genetic) and preventing them. Furthermore, between 1990 and 2013, 43 drugs were withdrawn from the market due to severe ADRs [8].

ADRs mimic naturally occurring disease, can affect any bodily system, can be of any severity, and, in some cases, can lead to the death of a patient. Some reactions can affect multiple systems, for example hypersensitivity reactions, which are immunologically mediated. Table 1 lists some of the ADRs by bodily system.

Table 1 Recognized adverse drug reactions by system, with exemplar causal drugs and indications

According to the classification proposed by Rawlins and Thompson [9], ADRs themselves can be dose dependent and predictable (Type A), or apparently dose independent, unpredictable, and unrelated to the known pharmacology of the drug (Type B). Other classifications have also been proposed, including the DoTS (Dose, Time and Susceptibility) classification, where the adverse reaction is classified by dose relatedness, timing and individual susceptibility [6, 10].

3 Identifying Genomic Biomarkers

The International HapMap Project [11] was the first platform through which genome-wide data on common single nucleotide polymorphisms (SNPs) across different populations (variants that might underlie differences in drug response) could be explored. The project started with the aim of determining common patterns of DNA sequence variation and making this information freely available, so that common sequence variants predisposing to disease could be identified, along with new drug targets and markers of variability in drug response. Our genomes differ from one another at approximately 0.1 % of positions, which still equates to around 3 million bases. These variants can be used for focused candidate gene, linkage-based, and genome-wide association studies (GWAS) [11]. The latter are now commonplace, but the vast majority of genome-wide studies have focused on complex diseases, with only a small percentage examining drug response [12]. A particular issue with GWAS is the need for large sample sizes when the effect size is relatively small, for example in complex diseases such as type 2 diabetes. However, with drug-response phenotypes, much smaller sample sizes have been sufficient to produce significant results because of larger effect sizes [13]. This is fortuitous given that large sample sizes are rarely available for pharmacogenomic phenotypes, especially for ADRs. The large effect sizes have also allowed these genetic variants to be deployed in clinical practice for the prediction of ADRs.

While the HapMap Project provided data on common variants, the 1000 Genomes Project [14] has expanded and enriched this further by supplying data on rare genetic variants. Next-generation sequencing, explored further later, will only add to this bank of data. This is already being applied to drug-metabolizing enzyme genes; for example, sequence-based analysis of coding variants in 12 cytochrome P450 (CYP) genes that are responsible for the metabolism of 75 % of drugs [13] showed that between 7.6 and 11.7 % of individuals carry at least one potentially deleterious variant. Unfortunately, one of the recognized problems with current sequencing technologies for genotyping is that all platforms have systematic weaknesses and, as such, no gold standard for low- or high-depth sequencing exists [15].

3.1 Candidate Genes

Identifying candidate genes for selected disease processes by understanding the pathophysiology of that disease process, or by understanding the pharmacology of a drug, was possible even before the human genome was fully mapped [15]. The presumed basis of variation is, for example, a known pathway of metabolism or a known action at a receptor. The gene encoding the enzyme or receptor can then be examined for polymorphisms that may produce variability in drug response. In most cases, a case–control study design has been used; for example, patients with and without ADRs have often been compared in this way. These studies have focused on variants with known functional effects, but it is important to note that even apparently non-functional SNPs can be useful, as they may be in linkage disequilibrium with causal variants [16].
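To make the case–control design concrete, the following minimal sketch (in Python) tests a hypothetical candidate-gene carrier association with an ADR using a 2×2 table and Fisher's exact test. All counts are invented for illustration; a real study would adjust for covariates such as ancestry and correct for multiple testing.

```python
# A minimal sketch of a candidate-gene case-control analysis, assuming
# hypothetical genotype counts (not from any real study).
from scipy.stats import fisher_exact

# Carriers of the candidate variant among patients with and without the ADR
carriers_cases, noncarriers_cases = 24, 76          # 100 ADR cases
carriers_controls, noncarriers_controls = 30, 370   # 400 tolerant controls

table = [[carriers_cases, noncarriers_cases],
         [carriers_controls, noncarriers_controls]]
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3g}")
```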

A number of examples of polymorphic genes exist, identified using candidate gene approaches, that have proven useful in clinical practice. For example, individual variability in dose requirements for warfarin has been shown to be dependent on polymorphisms in CYP2C9 and VKORC1, which has now led to genotype-guided dosing strategies [17, 18]. Furthermore, thiopurine S-methyltransferase (TPMT) levels are now tested prior to the initiation of azathioprine; a relative or absolute deficiency of TPMT leads to the accumulation of thioguanines, increasing the risk of severe bone marrow suppression [19]. The most widely studied genes using candidate strategies have been the CYP genes, where polymorphisms (either loss of function or gain of function) have been associated with ADRs. CYP2D6 was the first polymorphic phase I gene to be identified; it metabolizes 25 % of drugs [20], with 4.72–8.69 % of Caucasians being genotype-defined as poor metabolizers [21]. The reduced or absent enzyme activity leads to reduced clearance of the drug, with increased exposure and an increased risk of ADRs. An example is the β-blocker metoprolol, which can cause bradycardia in poor metabolizers [22]. Conversely, some individuals, up to 5.5 % of Northern Europeans, carry more than two copies of the CYP2D6 gene and are classified as ultra-rapid metabolizers [20]. This can also lead to ADRs with pro-drugs such as codeine. Recent work has shown that ultra-rapid metabolizers, particularly infants and children, may be at increased risk of respiratory depression [18]. This has resulted in a change in the drug label by many regulatory agencies worldwide.

Although candidate gene strategies have led to some important findings, the overall success in clinical implementation has been low. There are many reasons for this: investigators often used small sample sizes, genotyping and phenotyping strategies were sometimes poor, and, in many cases, it has not been possible to replicate the associations in subsequent studies. Another important limitation of a candidate gene approach is that it pre-supposes that we fully understand the mechanism by which a drug exerts its beneficial and/or adverse effects, which is often not true, leading to incomplete assessment of the pathways involved in drug response.

3.2 Genome-Wide Association Studies

GWAS have been used to identify predisposing loci for differences in drug response in an unbiased fashion, including inter-individual variability in the occurrence of ADRs. The advantage of GWAS over candidate gene studies is the ability to identify novel mechanisms of variability in drug response between individuals, beyond the mechanisms already known. GWAS exploit linkage disequilibrium to compare alleles at hundreds of thousands of loci across a range of phenotypes. Each SNP is then tested by regression, against the null hypothesis of no association, for association with disease status or a trait. This 'discovery phase' is then taken forward to a 'replication phase', whereby the SNPs with the most significant associations are retested in a new sample [23]. GWAS can identify specific gene loci, the effects of which can then be assessed in specific clinical trials [24]. One such example is the use of GWAS in assessing the variation in adenosine diphosphate (ADP)-associated platelet aggregation with clopidogrel [25].
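The per-SNP test at the heart of a GWAS can be illustrated with a short sketch. The example below, using simulated genotypes and case–control status, fits a logistic regression under an additive genetic model for each SNP and applies the conventional genome-wide significance threshold of p < 5e-8. This is a simplified illustration only; real pipelines (e.g., PLINK) additionally adjust for covariates such as principal components of ancestry.

```python
# A minimal sketch of the per-SNP association test in a GWAS, assuming an
# additive genetic model (genotype coded as 0/1/2 copies of the minor allele).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_samples, n_snps = 500, 1000
genotypes = rng.integers(0, 3, size=(n_samples, n_snps))  # simulated genotypes
phenotype = rng.integers(0, 2, size=n_samples)            # case/control status

p_values = []
for j in range(n_snps):
    X = sm.add_constant(genotypes[:, j].astype(float))
    model = sm.Logit(phenotype, X).fit(disp=0)
    p_values.append(model.pvalues[1])  # p-value for the genotype term

hits = [j for j, p in enumerate(p_values) if p < 5e-8]
print(f"SNPs reaching genome-wide significance: {hits}")
```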

GWAS were designed to detect common variants, although 'chips' that can also detect rarer variants are now becoming available. Furthermore, imputation techniques allow the presence of rare variants to be predicted, but variants with a minor allele frequency of <1 % will require next-generation sequencing [24, 26]. Imputation techniques have also been used to infer which human leukocyte antigen (HLA) alleles are responsible for adverse reactions, but it is important that the predisposing HLA locus is then confirmed using conventional typing techniques [27–29].

The main issue with GWAS is the sample sizes currently available to researchers, particularly for rare ADRs or adverse reactions to drugs that are not widely used. International collaboration across research groups, such as that by the SEARCH (Study of the Effectiveness of Additional Reductions in Cholesterol and Homocysteine) collaborative in statin-induced myopathy [30] or the iSAEC (International Serious Adverse Events Consortium), is the best way to achieve sample sizes that produce robust, replicable findings with real clinical impact [24]. When several GWAS for the same phenotype have been undertaken by different groups, meta-analysis can be performed using imputed genotypic data across different platforms, exploiting the linkage disequilibrium between SNPs. The drawback of imputing data lies in the weaknesses of the various imputation methods and the bias these introduce into any derived data [31–35]. However, like GWAS platforms and next-generation sequencing, statistical methods for interpreting genomic data are also advancing [31, 36].

3.3 Next-Generation Sequencing

Next-generation sequencing is an approach that enables examination of the whole genome as well as targeted genes. It is a high-throughput parallel-sequencing approach that generates billions of short sequence reads, which are then aligned to a reference genome, enabling the investigator to identify and genotype sites of variation [37]. Like all of the technologies described here, the cost has fallen and will continue to do so. This will enable individuals with rare or novel variants in a pharmacogene to be identified, where such variants may previously have gone undetected using common-variant analysis methodologies [38]. For instance, conventional screening assays identified 250 variants in 231 ADME (absorption, distribution, metabolism, and excretion)-related genes in an individual, versus 17,733 variants using next-generation sequencing; 861 of these variants were thought to be functionally significant [38, 39]. One of the challenges of next-generation sequencing is the analysis of the data, which requires computing and bioinformatics infrastructure and expertise [38]. As with GWAS, although imputation can be performed, linkage disequilibrium between low-frequency alleles is weak, so imputed rare variants should be used with caution in association studies [40, 41].
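As a small illustration of this bioinformatics step, the sketch below triages variant calls from a simplified VCF-like input, flagging potentially deleterious variants in a hand-picked set of ADME genes. The gene list, the INFO-field annotations (GENE, CSQ), and the input record are all assumptions made for illustration; production workflows use dedicated tools such as bcftools and the Ensembl VEP with curated gene panels.

```python
# A minimal sketch of post-alignment variant triage over a simplified
# VCF-like input; gene names and annotation keys are illustrative.
ADME_GENES = {"CYP2D6", "CYP2C9", "CYP2C19", "SLCO1B1", "UGT1A1"}
DELETERIOUS = {"stop_gained", "frameshift_variant", "missense_variant"}

def flag_pharmacogene_variants(vcf_lines):
    """Yield variants in ADME genes with potentially deleterious consequences."""
    for line in vcf_lines:
        if line.startswith("#"):
            continue  # skip header lines
        chrom, pos, _, ref, alt, *rest = line.rstrip("\n").split("\t")
        info = rest[2]  # INFO column of a standard VCF
        fields = dict(kv.split("=", 1) for kv in info.split(";") if "=" in kv)
        if fields.get("GENE") in ADME_GENES and fields.get("CSQ") in DELETERIOUS:
            yield (fields["GENE"], chrom, pos, ref, alt, fields["CSQ"])

example = [
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "22\t42130692\t.\tG\tA\t99\tPASS\tGENE=CYP2D6;CSQ=stop_gained",
]
for hit in flag_pharmacogene_variants(example):
    print(hit)
```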

Since most clinicians will not be experts in next-generation sequencing technologies, it is important to have readily accessible sources of reliable information they can use to interpret genetic variants in their patients. However, more than 30 information websites on pharmacogenetics and pharmacogenomics already exist [38]. As the data from next-generation sequencing go 'public', there is a clear need to condense these data into fewer open-source repositories, so they are as easy as possible to find for those wishing to apply them; of course, information needs to be kept as open and public as possible to maximize the benefit [38]. In addition, it is important to have quality control over the data that are generated and stored on these information portals and used for clinical decision making; lack of quality control will lead to incorrect therapeutic decisions, to the detriment of the patient.

3.4 Rare Variant Analysis

Whole-genome sequencing for rare variants is clinically important, as susceptibility to many common diseases, as well as to many Mendelian disorders, is conferred by highly penetrant rare variants [37]. Of 7000 known Mendelian disorders, fewer than half have had their causal genetic variants identified [15]. Genome-wide sequencing, target-region sequencing, exome sequencing, and rare-variant genotyping arrays can all be used in the design of rare variant studies and have been accompanied by the development of novel statistical methods [37]. An open question for rare variant analysis is whether it explains a larger proportion of the variation in predisposition to a disease or a drug reaction than is explained by common genetic variants [42]. This may lead to a genetic profile in which a person's predisposition is due to both common and rare variants; this hypothesis has not been tested to any great extent in pharmacogenomics and will require well-designed, adequately powered studies.
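One simple design used in such studies is a gene-level burden test, sketched below: rare variants within a gene are collapsed into a single carrier indicator per individual, and carrier frequency is compared between ADR cases and controls. The data here are invented for illustration, and more sophisticated methods (e.g., SKAT) weight variants rather than simply collapsing them.

```python
# A minimal sketch of a gene-level rare-variant burden test, assuming
# per-sample carrier status has already been derived from sequencing data.
from scipy.stats import fisher_exact

def burden_test(carrier, case):
    """Collapse rare variants per gene; compare carrier counts in cases vs controls."""
    cc = sum(1 for k, s in zip(carrier, case) if k and s)          # carrier cases
    nc = sum(1 for k, s in zip(carrier, case) if not k and s)      # non-carrier cases
    ct = sum(1 for k, s in zip(carrier, case) if k and not s)      # carrier controls
    nt = sum(1 for k, s in zip(carrier, case) if not k and not s)  # non-carrier controls
    return fisher_exact([[cc, nc], [ct, nt]])

# Illustrative data: 1 = carries >=1 rare variant in the gene, 1 = ADR case
carrier = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
case    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
odds_ratio, p = burden_test(carrier, case)
print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")
```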

3.5 Micro-RNAs

The assumption that large parts of our DNA are non-functional is now known to be incorrect. In particular, certain micro-RNA (miRNA) molecules have been found to be elevated in certain ADRs, such as drug-induced liver injury and skin reactions. In these situations, the miRNA species may both act as early biomarkers for the disease process and provide important insights into the pathogenesis of these reactions [13]. Two recent examples are worth a mention.

Researchers in Japan have independently identified elevation of skin and serum miR-18a-5p [43] and serum miR-124 [44], respectively, in cases of toxic epidermal necrolysis, linking the miRNAs to keratinocyte apoptosis. The clinical implications are unclear, although these miRNAs could have potential use as biomarkers; this would need to be investigated in further clinical studies. Furthermore, investigating the regulatory mechanisms of keratinocyte apoptosis induced by the miRNA may lead to novel interventional strategies [43].

In paracetamol poisoning, an early rise in serum miR-122 is an indicator of liver injury, which may be applicable to liver damage caused by other drugs. Further work to evaluate the role of the miRNA, and other novel protein biomarkers, in comparison with conventional liver function tests is required to assess the prognostic value provided by these novel biomarkers in clinical settings [45, 46].

As well as circulating levels, genetic variation in both miRNAs and miRNA regulatory binding sequences may provide a further source of inter-individual variability. Specific HLA-C allelotypes in HIV patients have been demonstrated to contain polymorphisms within the binding site of miR-148a, which mediates HLA-C expression [47]. This represents an example of a genetic polymorphism that affects miRNA regulation of gene expression, where the gene itself has previously been associated with a serious ADR, namely nevirapine-induced Stevens–Johnson syndrome [48]. Although no functional association between miRNA levels and immune-mediated ADRs has been described so far, it is entirely plausible and requires further investigation.

Polymorphisms have recently been described in the MIR133 genes, which code for miR-133, a miRNA constitutively co-expressed in hepatocytes with VKORC1 (the pharmacological target of warfarin). Polymorphisms in the MIR133A2 gene have been associated with warfarin dosing [49], although this finding needs to be independently replicated.

4 Genetic Basis of Adverse Drug Reactions

Many ADRs have a genetic basis, although the overall genetic contribution varies and remains poorly defined. In general, variation in predisposition can be due to pharmacokinetic or pharmacodynamic factors, or a combination of both. An example of a pharmacokinetically mediated ADR is with codeine, where CYP2D6 ultra-rapid metabolizers can form higher quantities of morphine, predisposing them to respiratory depression [50]. Variation in HLA genes predisposes to many immune-mediated ADRs (covered in a recent review article [51]) and represents an example of pharmacodynamic variation. This is an area where significant advances have been made: HLA-B*57:01 genotyping is now used in most countries prior to the prescription of abacavir, an antiretroviral associated with hypersensitivity. A recent systematic review showed that the odds ratio of developing abacavir hypersensitivity in HLA-B*57:01-positive patients was 32, 177, or 859, depending on whether broad clinical criteria, strict clinical criteria, or patch testing, respectively, were used to categorize patients. Importantly, the effect of HLA-B*57:01 is seen across Whites, Blacks, and Hispanics [52]. This contrasts with the association of HLA-B*15:02 with carbamazepine-induced Stevens–Johnson syndrome (SJS)/toxic epidermal necrolysis (TEN), which is only seen in some south-east Asian populations where the background prevalence of this HLA allele is high [8]. Non-HLA genomic biomarkers have also been identified as pharmacodynamic determinants of ADRs. For example, rare mutations in potassium channel genes have been implicated in torsade de pointes with some drugs [53]. More recently, new pharmacodynamic genetic variants have been identified for cisplatin-induced deafness [54] and vincristine-induced peripheral neuropathy [54].

An example of a drug for which both pharmacokinetic and pharmacodynamic variation increases the risk of ADRs is warfarin, where variability in VKORC1 (a pharmacodynamic gene, the protein product of which is inhibited by warfarin) and CYP2C9 (which is responsible for the metabolism of warfarin) not only determines the variation in daily dose requirement but also increases the risk of bleeding [55]. A multitude of studies have consistently shown an association between the response to warfarin (including daily dose) and polymorphisms in these two genes, making this one of the most highly replicated phenotype–genotype associations [55]. However, two recent large randomized controlled trials undertaken in Europe (EU-PACT) [17] and in the USA (COAG) [56] came to different conclusions regarding the utility of genotype-guided dosing. While the EU-PACT trial showed a 7 % improvement in the time in therapeutic range using genotype-guided dosing, the COAG trial showed no difference between a genetic and a clinical algorithm. There are many reasons for the differences in the outcomes of these two trials, including the types of algorithms used, ethnic heterogeneity in COAG, and the comparator algorithm used; these have been extensively discussed in a recent review [55]. The importance of genotype in determining the risk of bleeding has been shown more recently in an analysis of the warfarin arm of the ENGAGE AF-TIMI 48 trial [57].
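To illustrate how such dosing algorithms combine clinical and genetic factors, the sketch below estimates a weekly warfarin dose in the style of the IWPC pharmacogenetic algorithm [18]. The coefficients are illustrative approximations of the published model, included only to show how VKORC1 and CYP2C9 genotypes shift the predicted dose; the primary publication, not this sketch, should be consulted for the definitive algorithm.

```python
# A minimal sketch of genotype-guided warfarin dose estimation in the style
# of the IWPC pharmacogenetic algorithm; coefficients are illustrative
# approximations of the published model and are not for clinical use.

VKORC1_TERM = {"GG": 0.0, "AG": -0.8677, "AA": -1.6974}  # -1639G>A genotype
CYP2C9_TERM = {"*1/*1": 0.0, "*1/*2": -0.5211, "*1/*3": -0.9357,
               "*2/*2": -1.0616, "*2/*3": -1.9206, "*3/*3": -2.3312}

def weekly_dose_mg(age_years, height_cm, weight_kg, vkorc1, cyp2c9,
                   amiodarone=False, enzyme_inducer=False):
    """Estimate weekly warfarin dose (mg) from clinical and genetic factors."""
    sqrt_dose = (5.6044
                 - 0.2614 * (age_years // 10)  # age in decades
                 + 0.0087 * height_cm
                 + 0.0128 * weight_kg
                 + VKORC1_TERM[vkorc1]
                 + CYP2C9_TERM[cyp2c9]
                 - 0.5503 * amiodarone
                 + 1.1816 * enzyme_inducer)
    return sqrt_dose ** 2

# A CYP2C9 *3/*3 homozygote with the VKORC1 AA genotype needs a far lower dose
print(weekly_dose_mg(65, 170, 75, "GG", "*1/*1"))  # roughly standard requirement
print(weekly_dose_mg(65, 170, 75, "AA", "*3/*3"))  # markedly reduced requirement
```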

This review does not examine in depth the established modes of pharmacogenetic variability underlying known associations, since a significant volume of literature already exists (see Table 2; the action column lists the suggested action for individuals found to carry the variant identified, where such advice has been issued). Instead, it aims to examine what progress has been made in recent years, and what might be hindering progress in this area, particularly from an application perspective.

Table 2 Some examples of known genetic associations with adverse drug reactionsa

5 Challenges for Clinical Implementation

As Table 2 demonstrates, robust genetic tests are available for a growing number of ADRs, though clearly much work remains to be done in identifying genetic associations for many others. Even fewer have been implemented into clinical practice: two of the most prominent examples are abacavir and HLA-B*57:01, and carbamazepine and HLA-B*15:02, which have demonstrated clinical utility [58, 59] and cost effectiveness of pre-prescription testing [60, 61]. Genetic variants in pharmacokinetic genes have fared even worse in terms of clinical implementation, although many associations that are convincing in terms of both biological plausibility and replicability have been described. Specific examples include TPMT testing for prevention of bone-marrow suppression from azathioprine/mercaptopurine therapy [19, 62], CYP2C19 for clopidogrel/prasugrel anti-platelet treatment stratification [63], and UGT1A1*28 for prevention of irinotecan-induced neutropenia [64]. The lack of translation of pharmacogenomic associations into clinical practice thus represents a significant challenge, for which there are many reasons [16, 65].

Given the rarity of many ADRs and the need for large sample sizes for discovery and independent cohorts for replication, it is often not possible for individual research groups to recruit the relevant numbers of cases and controls. Collaboration is therefore crucial, and this is beginning to happen through the formation of international consortia such as iSAEC [66], DILIGEN [67], EUDRAGENE [68], and RegiSCAR [69]. Findings such as the association between HLA-B*57:01 and flucloxacillin hepatotoxicity (an ADR with an incidence of 8.5/100,000 users) [67] demonstrate the power and potential of 'combining forces'. No doubt further robust ADR pharmacogenomic associations will emerge from this approach in the future.

Some of the lack of reproducibility of genetic associations between independent research groups can be related to inconsistencies in ADR phenotype definition. Initiatives have been established to standardize the phenotypes of key ADRs for application in pharmacogenetic studies, with the aim of enhancing the reproducibility of findings. Standardized phenotypes have been proposed for a range of ADRs, including drug-induced liver injury [70], skin injury [71], torsade de pointes [72], statin-induced myopathy [73], and angiotensin-converting enzyme inhibitor/angiotensin receptor blocker-induced angioedema [74]. It is hoped that adoption of these phenotypes by the wider pharmacogenetics field may lead to more reproducible findings and provide the greater weight of evidence required for clinical translation of ADR genetic associations.

A list of pharmacogenomic associations that have been identified and validated is regularly updated by the US FDA, with label warnings to help prescribers [75]. Unfortunately, current prescribers have little awareness of this guidance, and there is therefore little evidence that prescribing practice has yet changed [76, 77]. Furthermore, the pharmacogenetic information contained within drug labels can only be of limited scope. For example, the drug label for codeine suggests that it should be avoided in CYP2D6 ultra-rapid metabolizers due to the risk of morphine toxicity, and expert consensus supports this [78]. However, what is not clear in the drug label is exactly how the 'ultra-rapid' metabolizer phenotype should be derived from genotype data. Many laboratory tests exist that can determine this, but currently no standardized set of genetic markers is used. As such, there is a need to develop guidance that sits alongside the label and advises clinicians on the tests that can be undertaken, their interpretation, and any action that needs to be carried out [79]. This is now being realized through the publication of guidelines. For example, the Clinical Pharmacogenetics Implementation Consortium (CPIC) has published a significant number of guidelines for genotype-guided prescribing aimed at reducing toxicity, including for CYP2D6 and codeine toxicity [78], TPMT and azathioprine-induced bone marrow toxicity [80], and SLCO1B1 and simvastatin-induced myopathy [81]. Similarly, the Royal Dutch Association for the Advancement of Pharmacy has set up a pharmacogenomics working group that has produced similar guidelines [82]. This should of course be accompanied by further education and training for prescribers and for the healthcare managers who make budgetary decisions [83]. We know there is considerable wastage of medicines in real-world clinical practice, and there is a need for further research to determine, in a holistic manner, whether better drug choices and doses through the use of pharmacogenomics would be cost effective. Indeed, a recent study suggested that one-time pharmacogenetic testing for preventing ADRs may be cost effective over a patient's lifetime [84].
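As an illustration of the genotype-to-phenotype translation step that such guidelines standardize, the sketch below maps a CYP2D6 diplotype to a metabolizer phenotype using an activity-score approach of the kind adopted by CPIC [78]. The allele scores and cut-offs shown are illustrative simplifications; the current CPIC tables should be consulted, as they are revised as evidence accrues.

```python
# A minimal sketch of deriving a CYP2D6 metabolizer phenotype from a
# diplotype via an activity score; values are illustrative, not clinical.

ALLELE_ACTIVITY = {"*1": 1.0, "*2": 1.0,             # normal function
                   "*9": 0.5, "*41": 0.5,            # decreased function
                   "*3": 0.0, "*4": 0.0, "*5": 0.0}  # no function

def cyp2d6_phenotype(allele1, allele2, copy_number=2):
    """Map a CYP2D6 diplotype (plus gene copy number) to a phenotype."""
    score = ALLELE_ACTIVITY[allele1] + ALLELE_ACTIVITY[allele2]
    if copy_number > 2:  # gene duplication multiplies the allele activity
        score *= copy_number / 2
    if score == 0:
        return "poor metabolizer"
    if score < 1.25:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal metabolizer"
    return "ultra-rapid metabolizer"  # e.g., avoid codeine per the label

print(cyp2d6_phenotype("*4", "*4"))                 # poor metabolizer
print(cyp2d6_phenotype("*1", "*1", copy_number=3))  # ultra-rapid metabolizer
```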

6 Pre-Emptive Genetic Testing

A concept currently gaining momentum is that of pre-emptively genotyping patients for key pharmacogenetic polymorphisms and providing this information within e-health records to prescribing physicians (across multiple medical specialties). A warning can then be flagged to the prescriber, indicating that the genotype of the patient may have implications for the efficacy or toxicity of the drug being prescribed. However, this concept relies on the provision of a strong, easily interpretable informatics component able to provide specific decision tools for drug/dose adjustment based on genotype. At present, within most healthcare settings, where patients are not routinely genotyped, it is difficult to apply any such guidance, however benevolently intended. A number of feasibility studies for implementing pre-emptive testing are currently ongoing [80].
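A minimal sketch of such a decision-support rule is shown below: when a drug is prescribed, the genotype results stored in the record are checked against a small rule table, and a warning is returned if a risk genotype is found. The rule table, result encodings, and record structure are illustrative assumptions, not a description of any real system.

```python
# A minimal sketch of a pre-emptive pharmacogenetic alert, assuming genotype
# results are already stored in the e-health record; rules are illustrative.

PGX_RULES = {
    # drug -> (gene, risk result, warning shown to the prescriber)
    "abacavir":      ("HLA-B*57:01", "positive", "Risk of hypersensitivity: avoid abacavir."),
    "carbamazepine": ("HLA-B*15:02", "positive", "Risk of SJS/TEN: consider an alternative."),
    "azathioprine":  ("TPMT", "deficient", "Risk of myelosuppression: reduce dose or avoid."),
}

def check_prescription(drug, patient_genotypes):
    """Return a warning if the stored genotype contraindicates the drug."""
    rule = PGX_RULES.get(drug)
    if rule is None:
        return None  # no pharmacogenetic rule for this drug
    gene, risk_result, warning = rule
    if patient_genotypes.get(gene) == risk_result:
        return warning
    return None

record = {"HLA-B*57:01": "positive", "TPMT": "normal"}
print(check_prescription("abacavir", record))      # alert fires
print(check_prescription("azathioprine", record))  # None: no alert
```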

Despite the apparent promise of pre-emptive testing in averting ADRs, it is not without its critics. Hard data on the clinical benefits of pre-emptive testing that would justify the cost are lacking, and those making financial decisions need data on the comparative effectiveness of genotype testing to justify funding it [85, 86]. In the EU, the clinical effectiveness of pre-emptive genetic testing will be tested over the coming years through the Ubiquitous Pharmacogenomics (U-PGx) Consortium (http://upgx.eu/).

7 Developing Areas

7.1 Point of Care (POC) Testing

One of the hurdles to genotype testing has been the cost and the time required to obtain results. Conventional laboratory genetic testing can take time, and time is often a precious commodity when drug administration is required. Point-of-care (POC) testing, in principle, removes the time issue and could potentially facilitate a much higher uptake of pre-emptive testing before prescribing. To date, implementation of POC testing for pharmacogenetics has largely been used to optimize dosing of the cardiovascular drugs warfarin and clopidogrel. Two recent trials, EU-PACT for warfarin [17] and RAPID-GENE for clopidogrel [87, 88], used tests that produce a result in less than 120 minutes. Neither was strictly POC, as laboratories were used to analyze the samples, but both demonstrated the benefit of utilizing rapid portable genetic-testing devices prior to prescribing a drug. A number of FDA-approved kits are already available for POC testing for CYP2C9, VKORC1, and CYP2C19, including those used in the trials [88]. The future application of microfluidic technology [89] is likely to improve the portability of testing kits, and ease of use should improve rapidly, making true POC testing a real possibility. However, there are still some definite barriers to immediate expansion into ADR pharmacogenetics. Within the USA, clinics wishing to use POC testing will either require a CLIA (Clinical Laboratory Improvement Amendments) laboratory in which to do the testing or a waiver to test outside of this setting. Even with such a waiver, FDA approval of the testing kit used is needed. In addition, current POC technologies are unable to type HLA alleles, implicated in many ADRs, because these loci are highly polymorphic, so some improvement in the basic technology is required before POC testing for HLA polymorphisms is possible [88].

7.2 Companion Diagnostics

Drug–companion diagnostic combinations have traditionally been the domain of oncology, with excellent examples such as trastuzumab in breast cancer [90] and imatinib in gastrointestinal stromal tumors [91] demonstrating the increased efficacy of therapies in patients stratified by tumor molecular biology. Indeed, oncology drugs still comprise >40 % of all marketed drug–diagnostic combination products [92]. However, some notable non-cancer drug–diagnostic combinations are in development for the optimization of treatment with antiplatelet drugs (clopidogrel or ticagrelor) for coronary artery disease/acute coronary syndrome [87]. There are currently no examples of stratifying companion diagnostics for the purpose of pre-emptive safety screening, but it is conceivable that future drugs may be accompanied by such a test kit in order to manage the drug's safety profile.

8 Conclusions

Genetic factors predispose to ADRs to a variable extent. Many convincing and biologically plausible genetic associations with ADRs have been described, but very few have been implemented clinically. There are many reasons for this, some of which have been covered in this review. Certainly, implementation of genetic testing will be helped by continuing advances in genotyping technology, which is becoming increasingly accessible, faster, cheaper, and easier to use. However, this needs to be accompanied by the generation of a relevant and robust evidence base showing that genotyping has a clinically meaningful impact on patient outcomes. This is needed irrespective of whether the genotyping is reactive or pre-emptive, as it is the only way the clinical community will embrace this approach. Another major challenge is that knowledge of the reach and potential of personalized medicine is still limited in all healthcare settings. Thus, educating those who deliver healthcare about the relevance of pharmacogenomics and the benefits for their patients is an important step. This should be accompanied by careful consideration of the cost effectiveness of testing, and acceptance by payers of the relevance of genetic testing. It is interesting to note that the increasing use of genetic testing prior to the prescription of clopidogrel in the USA has resulted in the test being funded via Medicare [93]. Without these developments, the introduction of routine testing with rapidly available results will be challenging [85].