An essential feature of therapeutics is to get the right dose of the right drug to the right patient. Clinical pharmacology, as the study of drug action in humans, underpins this objective. Arguably, every drug given in sufficient quantity is a poison. Therefore, the dose of a drug is crucial not only to achieving its therapeutic effects, but also to preserving patient safety. The activities involved in the selection and establishment of a ‘safe dose’ run throughout the clinical development of a drug. In this article we consider the role of the various disciplines of clinical pharmacology (table I), and discuss how these are executed during the birth and life of a drug to optimise drug safety.

Table I

Some important clinical pharmacology terms and concepts

The quantification of drug action in humans is fundamental to clinical pharmacology, and the relative ease with which cardiovascular function could be measured steered the field in that direction in its early days.[1] Advances in other scientific fields permitted the development of further indicators of clinical function, and their direct or indirect measurement. Today, pharmacological action can be studied in unprecedented detail, from the use of positron emission tomography in neuropharmacological studies to profiling gene transcripts in response to drug administration. Further, progress in drug assay techniques such as high performance liquid chromatography, mass spectrometry, radioimmunoassays and enzyme-linked immunoassays allows quantification of minute drug levels in biological fluids. Thus, the modern clinical pharmacologist has a potent arsenal of techniques with which to study the pharmacokinetic and pharmacodynamic properties of a drug.

The dose of the drug administered is a primary determinant of drug safety, and a major aim of clinical drug development is to establish a safe and efficacious dose range. While the classical method of clinical development has a proven track record, there are several important drawbacks. The dose range established by clinical trials is not always representative of the whole population. By design, pre-marketing clinical trial participants are selected by strict inclusion/exclusion criteria and, consequently, a number of subpopulations are not represented by the clinical trial group. These include children, the elderly and women, particularly those of childbearing age. The selection criteria also exclude (for good reason) individuals with co-morbidities (other illnesses or poor nutritional status); individuals taking concomitant medication are also not usually represented (figure 1). The finite duration of the clinical trial makes it difficult to ascertain the long-term effects of the drug. Another major limitation is the size of the study group. At the time of launch, most drugs will have been given to only a select group of approximately 1500–3000 people. Although this allows detection of the common adverse events, there is not adequate power to detect all adverse reactions, especially those that are uncommon.

Fig. 1

Applicability of dose ranges established during clinical trials compared with those established by postmarketing studies. The use of pharmacogenetics may allow a more individualised approach to dose selection in the future.

Superimposed on these limitations is the genetically determined variation in drug response. Progress in pharmacogenetics has revealed that variation in nucleotide sequence is common, drug response is a complex phenomenon (multigenic and multifactorial) and that there is ethnic variation in drug response. This is an increasingly important consideration in the current era of pharmaceutical globalisation.

Therefore, a ‘one size fits all’ approach to dose selection during drug development is beset by important pitfalls. All phases of drug development should lead to safer therapeutics but, importantly, lessons learnt from the pre-marketing stages should inform the development and use of postmarketing surveillance (PMS) activities in order to preserve the safety of the public and further promote the safe use of drugs.

1. Brief Outline of Pre-Marketing Drug Development

1.1 Preclinical Phase

The development of a drug begins when a compound that has a favourable effect on a biological system is identified. The process of identifying a drug lies in the realms of drug design and discovery, the technological aspects of which are beyond the remit of this review. Suffice it to say, there are processes designed to screen out, or identify, molecules that may have potential adverse effects at this early stage,[2–5] for example, by screening for compounds that have functional groups likely to form electrophiles. Thus, specific characteristics or chemical groups with known tendencies for toxicity or adverse effects may be eliminated at an early stage. Preclinical pharmacological tests are then performed to establish the pharmacokinetics, pharmacodynamics and toxicological profile of the drug using a range of in vitro and in vivo techniques.

1.2 Allometric Scaling in Phase I Dose Selection

Allometric scaling is a method used to predict human pharmacokinetic data from in vivo preclinical pharmacokinetic data.[6–8] Pharmacokinetic parameters such as total body clearance and volume of distribution (Vd) of unbound drug can be established in vivo preclinically and extrapolated to obtain the human values,[7] allowing investigators to initiate studies with an appropriate dose range in phase I studies.[6] Combining allometric scaling with in vitro metabolism data has improved this method for prediction of compounds prone to species-specific metabolism.[9,10] Using these methods, clinical pharmacologists can better predict dose ranges in humans with reduced risk of toxicity to human trial candidates.
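The log-log regression underlying allometric extrapolation can be sketched in a few lines of code. The species clearance values below are purely hypothetical, chosen only to illustrate the method; a real analysis would use measured preclinical data and, often, correction factors such as brain weight or in vitro metabolic rates.

```python
import math

# Hypothetical preclinical data: (body weight in kg, total clearance in mL/min).
# These numbers are illustrative only, not measured values for any drug.
species_data = [
    (0.02, 1.2),    # mouse
    (0.25, 8.0),    # rat
    (2.5, 45.0),    # monkey
    (10.0, 120.0),  # dog
]

def fit_allometry(data):
    """Fit log(CL) = log(a) + b*log(W) by ordinary least squares."""
    xs = [math.log(w) for w, _ in data]
    ys = [math.log(cl) for _, cl in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return math.exp(my - b * mx), b

def predict_clearance(a, b, weight_kg):
    """Extrapolate clearance via the fitted power law CL = a * W^b."""
    return a * weight_kg ** b

a, b = fit_allometry(species_data)
cl_human = predict_clearance(a, b, 70.0)  # extrapolate to a 70 kg human
print(f"exponent b = {b:.2f}, predicted human CL = {cl_human:.0f} mL/min")
```

With these illustrative data the fitted exponent falls near the 0.75 commonly reported for clearance, and the human value is simply read off the fitted power law.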

1.3 Clinical Trials

Clinical trials have traditionally been divided into phases I to IV, the last phase occurring after the marketing of the drug (postmarketing surveillance [PMS] studies).

In phase I clinical trials (or ‘first in man’ studies), human tolerability and pharmacokinetics are established, along with dose-finding experiments to determine therapeutic and toxic doses. This is also the first opportunity to investigate the clinical pharmacology of the drug. Phase I trials are normally carried out in healthy young male volunteers. Phase II studies are the ‘first in patient’ trials. The potential benefit of the drug to patients, or first proof of clinical efficacy, is evaluated in a small group of patients. The pharmacokinetic and pharmacodynamic properties of the drug may also be established in a disease state. Phase III studies rigorously test the therapeutic potential and safety of the drug in a larger patient group.[11,12] Safety evaluation at this stage is still limited by the relatively small size of the study group compared with the expected postmarketing exposure. The regulatory authorities scrutinise all discovery and developmental data to determine the safety, efficacy and quality of the drug. If the application is successful, the company is issued a product licence and the authority to market the drug. Postmarketing activities provide a mechanism for monitoring drug use and effectiveness in the full spectrum of patients, including those not normally included in pre-marketing clinical trials.

In practice, the delineations placed between the clinical trial stages are academic, as these phases are not necessarily mutually exclusive and may not even progress in the defined order.[13] For example, ‘first in man’ studies of a cytotoxic drug cannot be ethically justified in healthy volunteers, and the maximum tolerated dose is instead established in a small number of patients. Unexpected findings in a phase II trial may require a further phase I trial to obtain additional clinical pharmacokinetic data.

2. Pharmacokinetic-Pharmacodynamic (PK-PD) Models in Dose Selection

2.1 Overview of PK-PD Models

Pharmacokinetic-pharmacodynamic (PK-PD) analysis links pharmacokinetics and pharmacodynamics by mathematical modelling to enable the time course of drug activity to be characterised.[14] Mechanism-based models can be used to capture the non-linearity of the dose-response relationship and are well suited to characterise processes such as the contribution of active metabolites, drug-drug or drug-disease interactions and the development of tolerance.[15] Pharmacokinetic models are most often based on compartmental analysis whereby drug concentration (e.g. in plasma) is a function of the dose, Vd, bioavailable fraction (if administered orally), time and rate constants that describe the processes of absorption, distribution and elimination. In instances where the pharmacological effect is directly related to measured plasma concentrations, with no evidence of hysteresis, a pharmacodynamic model with effect being a function of concentration is often used. Common pharmacodynamic models include the linear model, the log-linear model, the hyperbolic Emax (or Imax) model and the sigmoidal (Hill) model.[14] Variations in these models have been successfully employed to describe the actions of irreversibly acting drugs, characterise the formation of active metabolites, account for synergistic or antagonistic drug-drug interactions, and to describe the development of tolerance.
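As an illustration of how these pieces fit together, the sketch below links a one-compartment pharmacokinetic model with first-order absorption and elimination (the Bateman equation) to a sigmoidal Emax (Hill) pharmacodynamic model. All parameter values are hypothetical, chosen only to show the shape of a concentration-effect time course.

```python
import math

def concentration_oral_1cpt(dose, f, vd, ka, ke, t):
    """Plasma concentration for a one-compartment model with first-order
    absorption (ka) and elimination (ke) after an oral dose: the Bateman
    equation. f is the bioavailable fraction, vd the volume of distribution."""
    return (f * dose * ka) / (vd * (ka - ke)) * \
           (math.exp(-ke * t) - math.exp(-ka * t))

def sigmoid_emax(conc, emax, ec50, hill):
    """Sigmoidal (Hill) pharmacodynamic model:
    E = Emax * C^n / (EC50^n + C^n)."""
    return emax * conc ** hill / (ec50 ** hill + conc ** hill)

# Hypothetical drug: 100 mg oral dose, F = 0.8, Vd = 40 L,
# ka = 1.0 /h, ke = 0.1 /h, EC50 = 1 mg/L, Hill coefficient n = 2.
for t in (1, 4, 12, 24):
    c = concentration_oral_1cpt(100, 0.8, 40, 1.0, 0.1, t)
    e = sigmoid_emax(c, 100.0, 1.0, 2.0)
    print(f"t={t:>2} h  C={c:5.2f} mg/L  effect={e:5.1f}% of Emax")
```

Note the non-linearity the Hill model captures: at concentrations well above the EC50 the effect plateaus, so doubling the dose does not double the response.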

2.2 Population PK-PD Modelling

Data obtained in phases I and II of clinical drug development are often used to parameterise PK-PD models. Parameter estimates may be obtained using the population approach, which normally requires the use of non-linear mixed effect regression models.[16,17] In such a model, between- and within-subject variability in parameter estimates is taken into account, and variability among patients is tested for association with explanatory covariates such as age, gender and pharmacogenetic factors. This allows for better prediction of dose and/or response in individuals receiving the drug in later phases of development. Population PK-PD models allow simulations to be conducted that may be used to forecast drug response under different scenarios, such as dose adjustments, non-compliance or regimen changes. This has important applications in the design of subsequent clinical trials; hence, the need for clinical trial simulation.[18,19]
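A minimal sketch of such a simulation, assuming log-normal between-subject variability in clearance and a single binary covariate, might look as follows. All parameters (population clearance, 30% variability, a 20% prevalence of renal impairment that halves clearance) are invented for illustration; real population analyses use dedicated non-linear mixed effect software rather than this simple Monte Carlo.

```python
import random
import statistics

random.seed(0)  # reproducible draws for this illustration

def simulate_steady_state_conc(n_subjects, dose_rate=10.0, cl_pop=5.0):
    """Simulate steady-state concentrations Css = dosing rate / CL,
    with log-normal between-subject variability in clearance and a
    hypothetical renal-impairment covariate that halves clearance."""
    concs = []
    for _ in range(n_subjects):
        cl = cl_pop * random.lognormvariate(0, 0.3)  # between-subject variability
        if random.random() < 0.2:                    # assumed 20% renally impaired
            cl *= 0.5
        concs.append(dose_rate / cl)
    return concs

concs = simulate_steady_state_conc(5000)
median_css = statistics.median(concs)
p95_css = sorted(concs)[int(0.95 * len(concs))]
print(f"median Css = {median_css:.2f} mg/L, 95th percentile = {p95_css:.2f} mg/L")
```

Even this toy model shows why simulation matters for trial design: the upper tail of exposure, driven by the covariate subgroup, is far from the typical subject, and it is that tail that determines toxicity risk.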

Besides the primary focus on modelling pharmacodynamic (efficacy) responses, PK-PD modelling is also used to characterise and predict dose-related adverse effects. If data are available, a possible application of modelling is to define the therapeutic index and devise dose administration regimens that maximise efficacy whilst minimising toxicity. For instance, the clinical and adverse effects of an oral anticancer agent in development were simulated by considering the drug action on tumour growth using a pharmacodynamic model, and the drug action on healthy skin tissue (an unwanted side effect) via a standard indirect PK-PD model. Different dose administration regimens (intermittent versus continuous administration) were simulated in order to assess the benefit-risk ratios.[20] Having also made allowance for dose adaptation, that is, algorithms to decrease dosage in the event of adverse effects, the model predicted that, whilst comparable efficacy would be expected, continuous oral anticancer treatment is likely to cause greater adverse effects in some individuals as compared with intermittent drug administration. Dose reduction in affected patients was predicted to reduce toxicity in the majority of individuals.[20]

A modelling approach has also been used to determine the therapeutic index[21] and optimal dose administration regimen[22] of oxybutynin, with the aim of maximising efficacy (reduction in weekly episodes of urge urinary incontinence) whilst minimising toxicity (dry mouth). Differences in therapeutic indices were evident for the two formulations of oxybutynin, although the optimal daily dose, determined by simulation, was the same.

An improved understanding of pharmacokinetic and pharmacodynamic processes and application of appropriate statistical techniques has shown that an important role exists for population PK-PD modelling in calculating optimal dose administration regimens and the design of clinical trials. Increased confidence in in silico analyses may only be gained, however, when more trials are conducted to confirm their results, or at least when the outcomes of decisions based on modelling are compared with those based on current practices.

3. Drug Safety Considerations in Special Groups

Certain populations require special consideration during clinical development. These are populations that have either been excluded or under-represented in pre-marketing clinical trials because of their different states of physiology or differences in organ functions.

3.1 The Elderly

The elderly comprise about 18% of the population, but receive 45% of all prescriptions.[23] Additionally, 40% of all drugs used by the elderly are over-the-counter medicines, that is, not prescribed.[24] It has been projected that by 2020, the elderly will make up more than 16% and 26.2% of the US and Japanese populations, respectively.[25] Thus, although the elderly population and its drug usage are substantial, this is not reflected in clinical trials. Differences in physiology in the elderly, largely due to reduced organ function, result in their exclusion from trials during drug development.[26]

As a general principle, drugs should be studied in all age groups for which they will have significant utility.[27] This should obviously include the elderly. Guidelines for studies in the elderly have been drawn up and reiterate that the participants of clinical trials should be reasonably representative of the population that will be treated by the drug.[28]

The use of drugs in the elderly requires special consideration due to the higher frequency of underlying disease and concomitant drug therapy and, therefore, the increased risk of drug interactions. Renal and hepatic impairment are common in the elderly, and the resulting pharmacokinetic changes should be considered during drug development, especially for diseases associated with advancing age. The International Conference on Harmonisation (ICH) ‘Studies in Support of Special Populations’ document recommends one of two approaches: a formal pharmacokinetic study or a pharmacokinetic screen.[28] In a formal pharmacokinetic study, a small group of geriatric volunteers and younger patients is selected, and the pharmacokinetic values of the drug are established; any differences between the pharmacokinetic profiles attributable to age can then be investigated further. In the screening approach, elderly patients included in a phase III study are screened for plasma levels of the drug at steady state. Such pharmacokinetic studies will also reveal any effects of renal or hepatic impairment, which clearly can also occur in younger age groups.

The preponderance of polypharmacy in the elderly increases their susceptibility to drug-drug interactions. Thus, it is important to perform studies on drugs that have narrow therapeutic indices and are likely to be used in the elderly (e.g. digoxin). Likewise, drugs that are extensively metabolised by cytochrome P450 (CYP) isoforms, and those likely to be used with the trial drug, should all be investigated for pharmacokinetic and pharmacodynamic interactions.

3.2 Children

The paediatric population represents a spectrum of different physiologies and should not be regarded merely as smaller versions of adults. Drug action in the different paediatric age classifications[29] may display important differences in metabolism and disposition, which may lead to marked differences in clinical response.

A newborn infant has a neutral stomach pH, which falls to 2–3 within the first few hours after birth and then rises to 6–7 after 24–48 hours. Adult pH levels are established between the ages of 2 and 3 years.[30–32] These variations are important for the absorption and activity of pH-sensitive drugs.

The differences between neonate and adult hepatic enzymes are an important metabolic consideration. For example, in the fetal human liver, the microsomal total CYP is approximately one-third of the adult value,[33] although certain isoforms may be more abundant in the fetal liver. For instance, CYP3A7 is very active in the fetal liver, its activity being maximal during the first week after birth, and then steadily decreasing to reach the low levels found in adult liver. Conversely, the activity of CYP3A4 is very low in the fetus, reaching 30–40% of the adult activity after 1 month.[34] Drugs metabolised by CYP3A4 include cisapride. Indeed, an in vitro study showed that liver microsomes from fetuses and neonates aged less than 1 week lacked CYP3A4, and showed no detectable metabolism of cisapride.[35] This observation was later corroborated by the results of a clinical investigation in neonates and young infants. A more rapid decline in cisapride concentrations was noted in the oldest, most mature subjects.[36] Thus, the developmental status of metabolising enzymes and the use of their substrates is an important safety consideration in paediatric therapeutics. Renal clearance also varies in children; for example, it has been shown that renal clearance of imipenem in children (2–11 years old) was 1.95-fold greater than the estimated creatinine clearance, in contrast with adults.[37] Another study reported that the pharmacokinetics of imipenem-cilastatin in neonates resembled those observed in adults with moderate to severe renal insufficiency.[38] Hence, the selection of doses in children and neonates requires a consideration of pharmacokinetic parameters such as metabolic status and renal clearance.[39] These differences warrant specific studies in children to establish benefits and risks during drug development.

Currently, approximately 50–75% of drugs used in paediatric medicine have not been adequately studied to provide appropriate labelling information.[40] A recent prospective study of drugs administered to children in five European hospitals (including the UK)[41] showed that 46% of drugs administered were either unlicensed or used off-label. A recent review of the literature documenting the extent of drug use in the paediatric field outside the recommendations of the licence revealed that between 10% and 70% of paediatric prescriptions were unlicensed or off-label.[42] This has been shown to increase the risk of adverse drug reactions (ADRs).[43–45] For example, a prospective survey reported that off-label drug use was associated with ADRs, with a relative risk of 3.44.[43] An earlier study showed that ADRs were associated with 6% of unlicensed or off-label drugs, compared with 3.9% of licensed drugs.[44] Hence, there is a clear need for more paediatric clinical studies. However, poor economic incentives, difficulties in recruiting sufficient numbers and ethical constraints are some of the problems that hinder paediatric clinical trials.[46] In an article documenting the development of a drug for attention deficit hyperactivity disorder (ADHD), the authors describe useful ways of surmounting some of these barriers, such as conducting phase I studies in adult poor metaboliser volunteers to establish safe dose administration guidelines before commencing phase III paediatric clinical trials.[47]

That drug studies can be safely and ethically performed in children, regardless of age and disease state, has been shown by the work of the Paediatric Pharmacology Research Unit Network in the US.[48] Several strategies have been proposed or implemented to improve the availability of paediatric drug dosage and safety information,[46] including the US FDA Modernization Act of 1997, which gives the FDA authority to identify drugs that need paediatric testing.[49] The FDA issued the ‘Pediatric Rule’ in 1998 in order to ensure that drugs used for treating children are actually tested for paediatric use. The ‘Pediatric Rule’ requires drug manufacturers to study the efficacy and safety of their products in children and to devise paediatric formulations, or risk denial of FDA approval. The regulations apply to all drugs that may be used in children, even if an indication for use in children is not requested.[48,50] Companies that follow this procedure receive 6 months of additional patent exclusivity. However, this rule was recently challenged in the US Courts, which ruled that the FDA did not have the authority to issue the ‘Pediatric Rule’ and barred the FDA from enforcing it.[51] In its place, the Pediatric Research Equity Act of 2003 was passed by the US Congress.[52] Furthermore, the regulatory bodies of the EU, Japan and US have adopted an ICH document that outlines agreed guidelines for paediatric drug development, and approaches to the safe, efficient and ethical study of drugs in the paediatric population.[29] Alternative methods of obtaining safety information have been suggested[46] and include prospective cohort studies as well as data mining with automated databases. These pharmacoepidemiological methods have an important role to play, and may complement the pharmacokinetic information obtained during the pre-marketing development of the drug.

3.3 The Female Population

Women constitute about 51% of the population in most countries, with 54% being of childbearing age (15–44 years). However, this population is frequently excluded from clinical trials during drug development.[53,54] The foremost reason for this is the paucity of data concerning the long-term effects of the drug on reproduction at the early phase; preclinical pharmacology may not be extensive enough at this stage to exclude the possibility of teratogenic or mutagenic effects. This is a wise precaution, given the potentially toxic doses that may be administered in the early stages of drug development.

The exclusion of women from clinical trials does have practical implications since there are gender differences in drug responses.[53,55–58] For example, oral bioavailability of midazolam and verapamil has been shown to be significantly higher in women.[58–60]

Generally, men possess greater muscle mass, intravascular volumes and body water, whereas women have a higher proportion of body fat. Consequently, the Vd of lipophilic drugs is greater in women. The clinical significance of these differences, however, is unclear and may vary from drug to drug.

The CYP isoform CYP3A4 is involved in the metabolism of approximately 50% of drugs. Faster hepatic clearance of a number of drugs in females, including ciclosporin (cyclosporin), diazepam and midazolam, has been attributed to sex differences in CYP3A4 activity.[58,61–63] It may, therefore, be prudent to perform studies investigating the effects of these differences in the context of polypharmacy or comorbidity, in anticipation of potential risk factors that might turn these observations into clinically significant problems. For example, the toxic effects for which mibefradil, a CYP3A4 substrate, was withdrawn showed a higher prevalence in women.[58] Some ADRs have been shown to be more prevalent in the female population, leading to calls for reduced doses of some drugs[64] in order to improve the benefit-risk profile.

4. Clinical Pharmacology in Drug Interactions

Elucidation of a safe and efficacious dose range through rigorous clinical study is a necessary objective of clinical development. However, the role of the clinical pharmacologist does not end when this is achieved. A drug interaction can adversely skew the benefit-risk profile of a drug even when it is administered at established safe doses. Consideration of drug interactions is, thus, a major part of ensuring patient safety during drug development.

A drug interaction is said to have occurred when the effects of one drug are changed by the coadministration of another.[65] This may be beneficial (as in the increased antihypertensive effect achieved with ACE inhibitors and diuretics) or harmful (such as increased bleeding with warfarin and an NSAID). In relation to drug safety, interactions with adverse clinical outcomes are the subject of this discussion. Drug interactions may be pharmaceutical (e.g. nitroglycerin binding to PVC intravenous tubing, or gastric complexation of tetracycline with calcium), pharmacokinetic (affecting the absorption, distribution, metabolism and/or excretion of the drug) and/or pharmacodynamic (e.g. propranolol and salbutamol [albuterol] competing for adrenergic receptors).

Drug interactions are rationally assessed by preclinical in vitro and in vivo PK-PD experiments.[66] Properties such as drug affinity for CYP isoforms, P-glycoprotein and other drug transporters, and even the biochemical pathways of the active principle, can be investigated by in vitro and in vivo techniques. Furthermore, the likelihood of interactions may also be predicted from the structure and physicochemical characteristics of the drug molecule and the pharmacological action of the drug itself or its class. Such an approach allows the risks of clinically important drug interactions to be estimated and considered before clinical trials are designed and conducted. Another important consideration is the potential interaction with other drugs likely to be coadministered, given the established therapy for the target disease and/or the lifestyle and demographics of the target population. Therefore, clinical pharmacology has an important role to play in identifying pharmacokinetic and pharmacodynamic interactions during the clinical phase of drug development. However, clearly not all potential drug-drug interactions can be studied during drug development, and the use of in silico techniques will, in the future, be extremely important in identifying potentially clinically important interactions and, thereby, providing warning of possible adverse consequences once the drug has been approved.

4.1 Pharmacokinetic Interactions

4.1.1 Absorption

Most drugs are administered via the oral route and are absorbed through the gastrointestinal mucosa. Changes to gastric pH or gut motility are the most common sources of interactions. For example, low gastric pH is optimal for ketoconazole dissolution. An increase in gastric pH by an antacid reduces the dissolution rate and subsequent absorption of the antifungal agent.[67] Direct physicochemical interactions such as binding of warfarin to cholestyramine may result in reduced warfarin bioavailability.[68]

4.1.2 Distribution

Displacement of drugs from serum proteins represents interactions that affect drug distribution and serum concentrations. However, the clinical consequences of these interactions are often not significant[69] because of compensatory mechanisms in the body, for example, increased renal excretion of the unbound drug. Increasing the unbound fraction of the drug in plasma by displacement from plasma proteins has a significant clinical effect only if the drug possesses a set of pharmacokinetic and pharmacodynamic properties that are seldom encountered in routine medical practice. These properties are extensive protein binding, non-restrictive clearance and non-oral administration.[69,70]

4.1.3 Metabolism

The biotransformation of drug molecules by hepatic enzymes is the main route of drug metabolism, and is also a major source of clinically important drug interactions. The inductive or inhibitory effect of a drug on the CYP enzymes may adversely elevate or reduce the serum levels of a coadministered drug for which it is a substrate. In terms of drug metabolism, the most important CYP isoforms are CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP2E1 and CYP3A4.[65] More than 90% of drug oxidation can be attributed to these isoforms. A classic metabolic interaction is that observed with ciclosporin, a CYP3A4 substrate, and clarithromycin, a potent CYP3A4 inhibitor. Coadministration of the two drugs doubles the bioavailability of ciclosporin, with a corresponding 50% reduction in its oral clearance.[71] Conversely, the inductive effect of carbamazepine on CYP3A4 can reduce the levels of ciclosporin, with a subsequent risk of organ rejection.[72]
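The ciclosporin figures quoted above are internally consistent because apparent oral clearance is defined as CL/F = Dose/AUC: doubling exposure at a fixed dose necessarily halves CL/F. The sketch below illustrates the arithmetic with hypothetical numbers (not measured ciclosporin values).

```python
def oral_clearance(dose_mg, auc_mg_h_per_l):
    """Apparent oral clearance (L/h): CL/F = Dose / AUC."""
    return dose_mg / auc_mg_h_per_l

# Illustrative, invented numbers: a CYP3A4 inhibitor doubles the AUC
# of a fixed oral dose, so apparent oral clearance falls by 50%.
dose = 200.0              # mg, fixed oral dose
auc_alone = 4.0           # mg*h/L without the inhibitor (hypothetical)
auc_with_inhibitor = 8.0  # mg*h/L with the inhibitor (doubled AUC)

cl_alone = oral_clearance(dose, auc_alone)              # 50 L/h
cl_inhibited = oral_clearance(dose, auc_with_inhibitor)  # 25 L/h
print(cl_inhibited / cl_alone)  # 0.5
```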

Genetically determined activity of metabolising enzymes is also an important consideration in metabolic drug interactions. For example, the metabolising activity of one of the CYP isoforms, CYP2D6, may be put into one of three categories: poor, extensive (normal) and ultra-rapid metabolisers. Approximately 7–10% of Caucasians, and <2% of Asian Americans and African Americans, are poor metabolisers of drugs that are substrates of this enzyme.[73] Coadministration of a CYP2D6 inhibitor with a CYP2D6 substrate may lead to differential effects on poor, extensive and ultra-rapid metabolisers.

4.1.4 Excretion

Most drugs and their metabolites are excreted via the kidneys. Generally, drugs are eliminated by glomerular filtration (free, unbound drug molecules <5 kDa) or by active secretion into the tubular filtrate. Active and passive mechanisms also allow reabsorption of drug molecules. Drugs that alter any of these mechanisms have the potential to adversely increase or reduce plasma concentrations. Passive reabsorption depends on the ionisation status of the drug in the tubular filtrate: only lipophilic, non-ionised molecules are reabsorbed across the lipid tubular membrane. At alkaline urine pH, a larger percentage of weakly basic drug molecules are non-ionised and will be reabsorbed; the converse is true for weakly acidic drugs. Therefore, drugs that change the pH of the tubular filtrate can affect the excretion rate of other drugs. Indeed, sodium bicarbonate has been used to increase phenobarbital (phenobarbitone) or aspirin (acetylsalicylic acid) excretion in cases of overdose.[65]
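The pH dependence of passive reabsorption follows directly from the Henderson-Hasselbalch equation. The sketch below computes the non-ionised (reabsorbable) fraction of a drug at different urine pH values; the pKa used for phenobarbital is approximate.

```python
def nonionised_fraction(pka, ph, is_acid):
    """Henderson-Hasselbalch: fraction of drug in the non-ionised
    (lipophilic, reabsorbable) form at a given pH.
    Weak acid (HA <-> A-): ionised fraction rises with pH.
    Weak base (BH+ <-> B): non-ionised fraction rises with pH."""
    if is_acid:
        return 1.0 / (1.0 + 10 ** (ph - pka))
    return 1.0 / (1.0 + 10 ** (pka - ph))

# Phenobarbital is a weak acid (pKa approximately 7.4): alkalinising the
# urine traps the drug in the ionised form and promotes its excretion.
for ph in (5.5, 7.4, 8.0):
    f = nonionised_fraction(7.4, ph, is_acid=True)
    print(f"urine pH {ph}: {f:.0%} non-ionised (reabsorbable)")
```

Raising urine pH from 5.5 to 8.0 cuts the reabsorbable fraction of a weak acid of this pKa several-fold, which is the rationale for bicarbonate alkalinisation in overdose.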

Tubular secretion of drugs for excretion may be altered by the presence of a drug that competes for the same active transporter.[74] This interaction is often put to therapeutic use to increase serum concentrations of penicillins by coadministration with probenecid. The same mechanism of interaction accounts for methotrexate-induced pancytopenia[75,76] in patients concomitantly administered probenecid.

The pharmacological action of coadministered drugs may also lead to an indirect interaction with an adverse consequence. One of the functions of renal prostaglandins is to maintain renal perfusion through vasodilatation, especially when the effective arterial blood volume is compromised, for example, during therapeutic diuresis.[77] NSAIDs act by arresting prostaglandin production through cyclo-oxygenase (COX) enzyme inhibition. Consequently, coadministration of NSAIDs and ACE inhibitors or diuretics may lead to acute renal failure,[78,79] sometimes even with the more selective COX-2 inhibitors in certain high-risk groups.[77] The NSAID-induced reduction of renal perfusion can precipitate toxic levels of renally excreted drugs.

4.2 P-Glycoprotein

P-glycoprotein is an adenosine triphosphate (ATP)-binding cassette transporter that can limit cellular uptake of drugs by actively pumping them out of the cell. It is found (although not exclusively) on the apical surfaces of intestinal epithelial and renal tubular cells, and on the luminal surface of the capillary endothelial cells of the brain.[80] The role of P-glycoprotein in drug interactions has been reviewed recently.[81] The transporter can limit the uptake of drugs from the blood into the brain, and from the intestinal lumen into the enterocytes. Evidence of its role in reducing the oral absorption of drugs has been provided by in vitro[82] and in vivo,[83] as well as clinical, studies, examples of which are as follows.

  • Oral bioavailability of paclitaxel increased 10-fold when given with ciclosporin, a P-glycoprotein inhibitor.[84] Similar results were found with docetaxel.[85]

  • A correlation between low P-glycoprotein activity and high clinical levels of digoxin (a P-glycoprotein substrate) has also been reported.[86]

  • A daily dose of verapamil 240mg caused a 60–80% increase in plasma digoxin concentrations.[87]

P-glycoprotein is also inducible, which may limit the bioavailability of coadministered drugs that are substrates for P-glycoprotein. For example, coadministration of digoxin and rifampicin (rifampin) has been shown to reduce digoxin levels.[88] Similar problems were encountered with the herbal preparation St John’s Wort (hypericum), which acts as an inducer of P-glycoprotein (and CYP),[89] and led to serious interactions with a number of drugs, including immunosuppressants[90] and anti-HIV drugs.[91,92]

Given the localisation of P-glycoprotein in important tissues such as the blood-brain barrier, the liver and kidneys, its effect on the pharmacokinetics of new and established drugs remains an important safety consideration, and one that is being incorporated into drug development programmes. However, it is also important to note that P-glycoprotein is only one transporter, and over the last 5 years there has been an explosion in our knowledge of the many different influx and efflux transporters that exist in the human body. Many of these are now being shown to transport drugs and, even when they do not transport drugs, they may be prone to interference (inhibition or induction) by drugs and potentially lead to interactions. Hence, this aspect will have to be incorporated into drug development, and it poses an important challenge to clinical pharmacology.

5. Pharmacogenetics and Drug Safety

The clinical effects of a drug are highly influenced by its pharmacokinetics, its concentration at the target site and its specific interactions with the target molecule (i.e. pharmacodynamics). Differences in the genetic profile of individuals, or even groups of people, can lead to differences in pharmacokinetics and pharmacodynamics, resulting in variation in drug response. Pharmacogenetics is concerned with the elucidation of the genetic origins of these variations, and their effect on drug therapy.[93,94] The role of pharmacogenetics in drug safety has been the subject of many reviews.[95–98] Variations in gene sequences, or polymorphisms, are an important consideration in drug therapy. Single nucleotide polymorphisms (SNPs), the simplest form, are single-base differences in the DNA sequence. They occur throughout the human genome at a frequency of about 1 per 1000 DNA base pairs.[99] Various technologies may eventually allow the presence of SNPs to be detected in a large number of genes, creating unique SNP profiles for each individual. In addition to this static aspect, more dynamic effects, such as the effect of drugs on gene expression profiles, can now also be determined using DNA microarray technology.[100]

5.1 Polymorphism and Drug Metabolism

Drug metabolism is one of the most important considerations in pharmacokinetic variability and drug safety.[101] CYP isoforms are important for phase I metabolism of drugs, which is mostly, but not exclusively, performed in the liver. Genetic variation in these enzymes is the most intensively studied aspect of pharmacogenetics. A review of the role of CYP polymorphisms in predisposing individuals to ADRs showed that, of the 27 drugs most frequently cited in ADR studies, 59% were metabolised by at least one enzyme with a variant allele associated with reduced activity compared with 7–22% of randomly selected drugs.[102] This suggests that prior knowledge and consideration of an individual’s genetically determined variability in metabolism may promote safer drug use. However, this needs to be proven in practice, as argued in a recent review.[97]

CYP2D6 (P450 2D6, also called debrisoquine/sparteine hydroxylase) metabolises about 25% of hepatically cleared drugs,[73] including antipsychotics and antidepressants. Polymorphisms in this isoform result either in non-functional proteins[103] or in complete deletion of the entire CYP2D6 coding region. Such individuals are termed poor metabolisers and comprise 6–8% of the Caucasian population.[104,105] Poor metabolisers of antipsychotics such as zuclopenthixol, thioridazine and risperidone may be at an elevated risk of adverse effects.[106] Conversely, the high activity seen in 1–2% of Caucasians is ascribed to gene duplication (individuals can have between 3 and 13 copies of the gene). Clinically, the ultra-rapid metaboliser phenotype has been implicated as a cause of non-response to antidepressant drug therapy.[106,107] Although genotyping for CYP polymorphisms has not been translated into clinical practice, the identification that a drug is metabolised by CYP2D6 either leads to discontinuation of further drug development, or can be used to warn prescribers by including appropriate warnings in the product information. Indeed, a novel agent is unlikely to be developed further if its clearance is judged to be more than 40% dependent on CYP2D6.[97]

CYP2C9 has been well characterised,[108] and has variants which affect the oxidative metabolism of a range of drugs, including phenytoin and warfarin. CYP2C9 is a major metaboliser of phenytoin, accounting for 80–90% of its 4′-hydroxylation.[109] It has been estimated that approximately 4–16% of Caucasians are heterozygous and less than 1% homozygous for the CYP2C9*3 allele. The homozygous genotype confers a poor metaboliser phenotype, and phenytoin toxicity (CNS intoxication) has been reported in homozygous CYP2C9*3 individuals, despite administration of a modest daily dose.[110]

An example of a genetic polymorphism that has had a clinical impact is that observed with the gene encoding thiopurine S-methyltransferase (TPMT). This enzyme catalyses the methylation of azathioprine and mercaptopurine, both associated with haematotoxicity in about 15% of patients.[111] Approximately 1 in 300 individuals are TPMT-deficient, while 6–11% have an intermediate phenotype and 89–94% show a high methylator phenotype.[112] It has been demonstrated that TPMT deficiency is associated with severe myelosuppression in patients given standard doses of thiopurines, while those with intermediate activity are more susceptible to adverse effects, which can be prevented by dose reduction.[113,114]
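The TPMT phenotype frequencies quoted above are roughly what a single-gene, two-allele model predicts under Hardy-Weinberg equilibrium. A minimal sketch, assuming the deficient phenotype corresponds to homozygosity for a low-activity allele (the allele frequency here is back-calculated from the quoted 1-in-300 deficiency rate, not taken from the cited studies):

```python
import math

# Assume one low-activity TPMT allele at frequency q, with deficiency
# arising only in homozygotes (frequency q^2 under Hardy-Weinberg).
deficient_rate = 1 / 300             # ~1 in 300 individuals TPMT-deficient
q = math.sqrt(deficient_rate)        # low-activity allele frequency
p = 1 - q                            # normal-activity allele frequency

heterozygous = 2 * p * q             # intermediate methylator phenotype
homozygous_wt = p * p                # high methylator phenotype

print(f"allele frequency q    ~ {q:.3f}")
print(f"intermediate (2pq)    ~ {heterozygous:.1%}")   # cf. 6-11% quoted
print(f"high methylator (p^2) ~ {homozygous_wt:.1%}")  # cf. 89-94% quoted
```

The predicted intermediate (~11%) and high-methylator (~89%) frequencies sit at the edges of the ranges reported in the text, consistent with a largely monogenic trait.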

5.2 Polymorphism and Pharmacodynamics

Polymorphisms can also affect the receptor or drug target, leading to altered therapeutic responses or ADRs. For example, stimulation of β2-adrenergic receptors on airway smooth muscle cells leads to G-protein-linked muscle relaxation. One of the known polymorphisms in the receptor gene sequence causes enhanced downregulation of the receptor, reducing the therapeutic benefit to the patient.[115,116] Drug-induced long QT syndrome (LQTS) may be more likely in patients with a genetic predisposition: patients in whom it is observed have been identified as having underlying mutations in LQTS genes.[117] The gene variants are located in the coding regions of sodium and potassium ion channels. However, this has been studied in very small numbers of patients and much larger studies will be required before this has any clinical impact in terms of improvement of drug safety.

Knowledge of pharmacogenetics is likely to improve the safety of drugs and benefit public health. However, pharmacogenetic knowledge must be applied on a large scale in order to show clinical effectiveness. One of the most important barriers to the routine application of this technology is its cost effectiveness. Large, prospective studies which provide robust evidence of the cost effectiveness of pharmacogenetics are required.[97] Perhaps such evidence might stimulate the development of relatively simple genotyping technology for large-scale application.

6. Postmarketing Drug Safety

Given the limitations of pre-marketing clinical trials, it follows that the registration of a new drug and its subsequent introduction to the general public ought to mark the beginning of a new phase, or a continuation, of its clinical development. Postmarketing studies (or phase IV clinical trials) involve a range of activities including spontaneous reporting of suspected ADRs (e.g. the UK Yellow Card Scheme), pharmacoepidemiological studies (e.g. case-control and cohort studies) and event monitoring (e.g. the UK prescription event monitoring [PEM] scheme). The importance of these postmarketing activities is exemplified by changes in dose administration regimens after introduction of the drug onto the market. Two recent studies examined the extent to which post-licensing dose administration levels deviated from the original at launch.[118,119] One study reported 115 instances of changes to the defined daily dose (DDD) between 1982 and 2000. Of these, about 60% were reductions relative to the initially designated DDD. It was noted that drugs registered in the last decade were more likely to have a decrease in the DDD compared with the preceding years, and cardiovascular drugs had the most DDD changes. Antibacterials were most likely to undergo dosage increases, and these varied widely across Europe; a trend attributable to variations in national policy and development of resistant strains.[118,120] In a similar study of changes to labelling instructions after licensing by the FDA, 79% of drugs underwent a reduction in dosage. Additionally, compared with the first 5-year period studied, drugs approved in the last 5-year period (i.e. the most recently approved) were about three times more likely to incur a dosage change, despite their shorter time on the market.[119] This study also reported a 69% decrease in the length of time after marketing for the dosage change to occur.
Taken together, these results suggest poor dosage selection during pre-marketing clinical trials. However, this is unlikely to be the only reason. It is possible that more stringent postmarketing surveillance (PMS) of drug safety contributed to the increase in the frequency of dose changes. Indeed, the majority of changes were dose reductions, indicating a safety consideration. The high doses selected during the early clinical trial phases are often carried forward into subsequent studies in order to prove the efficacy of the drug, and clinical trials are not capable of fully assessing the drug safety profile, given the limitations discussed. Therefore, the dosage of the drug at launch may well be too high, and hence suboptimal in terms of benefit-risk ratio when applied to the general population.

The introduction of captopril in the early 1980s, and the subsequent modification of its dose soon after, is a good example. Captopril was introduced as the first orally active ACE inhibitor for the treatment of severe hypertension, or hypertension resistant to the then current therapy.[121,122] Early dosage recommendations started at 75 mg/day, increased weekly to a maximum of 450 mg/day (all in three divided doses).[123] Some study protocols used doses of up to 1000 mg/day in four divided doses.[124] This led to a high incidence of adverse effects associated with captopril, including maculopapular rashes, taste disturbances and proteinuria.[121,124] Postmarketing studies of captopril revealed several therapeutically significant characteristics: captopril 37.5 mg/day was just as effective as 150 mg/day.[125] When given in daily doses of ≥450mg, the incidence of rash and taste disturbances was 10% and 7%, respectively; however, in a large postmarketing study in which 66% of patients received daily doses of ≤150mg, the frequency of rash and taste alteration was lower, at 5% and 4%, respectively.[123] Furthermore, the addition of a diuretic had a synergistic hypotensive effect; this combination could avoid the use of higher doses (of both captopril and diuretic), reducing the dose-related adverse effects.[123,126,127] Most recently, the results of a PMS study involving more than 30 000 patients receiving captopril showed that only 4.9% of patients reporting an adverse event required discontinuation of therapy.[128] Postmarketing studies were thus instrumental in improving the safe use of captopril.

Of course, the identification and selection of an optimal dose for a given drug does not eliminate the occurrence of adverse effects of the drug, and the identification and detection of these is a key function of pharmacovigilance activities.

6.1 Pharmacovigilance

The WHO defines pharmacovigilance as “…the activities involved in the detection, assessment, understanding and prevention of adverse effects or any other drug related problems…”.[129] These activities actually span the whole clinical phase of drug development; they can enhance the prediction of adverse drug effects and even inform the design of appropriate trials at the outset of clinical development.[130] However, pharmacovigilance is generally recognised as a postmarketing drug safety surveillance activity.[131]

6.2 Adverse Drug Reactions (ADRs)

The most common classification of ADRs is that which identifies reactions that are dose related (type A) or non-dose related (type B).[132] Type A reactions are usually augmented effects of the drug action. They are usually predictable from the pharmacology of the drug, for example, constipation with an opioid or sedation with the first-generation antihistamines. This type of reaction is common and not normally life-threatening. Type B reactions are bizarre, and not readily predicted from the pharmacological action of the drug. Examples include agranulocytosis with clozapine and toxic epidermal necrolysis with sulphonamides. Type B reactions are rare and more likely to be life-threatening.[133] There are other groups in this system of classification, but these may be considered as subclasses or hybrids of type A and B ADRs. These include type C (chronic reaction, time- and dose-related) and type D (delayed reaction, related to time).[134] An alternative classification system, which considers the dose-relatedness, timing and patient susceptibility, has recently been proposed.[135]

6.2.1 Type A Reactions

Type A reactions account for about 80% of all ADRs. This group of ADRs is readily characterised during the clinical development of a drug, given their predictability from the pharmacology of the drug. Selection of the right dose range may reduce their occurrence or severity, and predisposing factors such as renal or hepatic insufficiency can be identified.[136]

6.2.2 Type B Reactions: Metabolism, the Immune System and ADRs

Type B reactions are more difficult to identify during clinical trials, and the mechanisms by which they occur have been the subject of numerous reviews.[137–141] It is accepted that many type B reactions are immunologically mediated, although metabolic activation to chemically reactive (toxic) metabolites, and direct toxicity, also play an important role.

Toxic Metabolites and ADRs

Drugs are metabolised by a process involving initial oxidation or reduction of the molecules into hydrophilic intermediates (phase I reactions). These are subsequently conjugated with a large polar group such as glutathione (phase II reactions) and excreted. The products of the phase I reactions may be chemically reactive and have the potential to bind to cellular structures, causing cell damage and toxicity. The formation of these chemically reactive metabolites, or drug bioactivation, is mediated mainly by CYP enzymes.[140] Phase II reactions are, therefore, important in stabilising phase I reactive intermediates (bioinactivation). This bioinactivation step is the mechanism by which cellular damage due to metabolism is checked; thus, the balance between the drug bioactivation and bioinactivation is important in preventing toxicity by metabolic reactive intermediates. This balance may be adversely affected by disease, immune dysregulation or, indeed, concomitant drug therapy. An illustrative example is the increase in liver toxicity caused by paracetamol (acetaminophen) through chronic alcohol consumption. During its metabolism, a small fraction of paracetamol is converted to a chemically reactive quinoneimine by CYP2E1.[142] This is normally inactivated by conjugation with glutathione. However, alcohol depletes cellular levels of glutathione, and induces CYP2E1, leading to hepatocellular damage in alcoholics at doses of paracetamol that do not normally lead to hepatic dysfunction.[142]

The importance of clinical pharmacological studies in defining and subsequently preventing toxicity mediated by toxic metabolites can be illustrated with respect to dapsone. Dapsone is used to treat a wide variety of diseases, including leprosy, malaria, inflammatory disorders involving polymorphonuclear leucocyte infiltration, rheumatoid arthritis, dermatitis herpetiformis as well as infections caused by Pneumocystis jiroveci (previously P. carinii) and Toxoplasma gondii.[143,144] Administration of dapsone is associated with both dose-dependent toxicity towards red cells, usually in the form of methaemoglobinaemia, and idiosyncratic white cell toxicity, such as agranulocytosis.[145]

The toxic effects of dapsone are thought to be mediated by its hydroxylamine metabolite, formed either by CYP- or myeloperoxidase-catalysed oxidation of the drug.[146,147] Using a two-compartment model in which human target cells, either white or red blood cells, were separated from a drug metabolising system by a semi-permeable membrane,[148,149] it has been demonstrated that human liver microsomes were capable of generating dapsone hydroxylamine in one compartment, which was stable enough to diffuse into the second compartment and cause toxicity. Furthermore, it was shown that the N-hydroxylation of dapsone could be reduced by two known inhibitors of drug oxidation, ketoconazole[149] and cimetidine,[150] with a resultant decrease in the toxicity observed. The in vitro observation that cimetidine reduced the haemotoxicity associated with dapsone was then applied in vivo. Coadministration of cimetidine (400mg three times daily) with dapsone reduced methaemoglobinaemia, whilst increasing both peak concentrations and plasma area under the concentration-time curve, after a single dose (100mg) of dapsone in volunteers[151] and in patients on long-term dapsone therapy (50–350 mg/day).[152]

The Immune System and ADRs

The immune system is considered an important mediator of type B ADRs. The central thesis of this mechanism is the hapten hypothesis.[153] Haptens are low molecular weight molecules capable of eliciting an immune response only when coupled to a carrier. Drug molecules of low molecular weight, or their metabolites, are not normally antigenic; however, bioactivation by phase I reactions as described above can lead to the formation of stable, covalent linkages with endogenous proteins.[154] Some drugs, such as the β-lactams, can readily form covalent bonds with proteins under physiological conditions. Antigen-presenting cells such as macrophages and dendritic cells take up and process the hapten-protein conjugate, migrate to regional lymph nodes and trigger naive T cells expressing the relevant antigen receptors. Clonal expansion leads to the production of long-lived antigen-specific memory T cells.[137] Subsequent exposure to the antigen or metabolite-protein conjugates triggers an immune response. Other studies have shown metabolism-independent T-cell stimulation,[155] where the parent drug can directly stimulate T-cell proliferation without the need for antigen processing.

Another theory used to explain immune-mediated ADRs is the danger hypothesis.[141] The premise of this theory is that the presence of ‘danger signals’ determines the response of the immune system to the presentation of an antigen.[156] The first of these signals is the presentation of a recognised antigen, or hapten-protein conjugate in this case, and its subsequent interaction with T cells. The second is the expression of co-stimulatory molecules on antigen-presenting cells such as macrophages and dendritic cells, together with the production of proinflammatory cytokines. The last signal required is the action of polarising cytokines that act directly on T cells and mediate a humoral or cell-mediated response.[141] As outlined above, the haptenation of proteins by reactive metabolic intermediates is the source of the first signal. The other signals may be provided by cellular damage mediated by reactive intermediates.[140] Although the precise mechanism by which immune reactions occur is under considerable investigation, it is clear that ADRs can occur as a result of the interplay between the immune system and drugs.

Thus, unpredictable ADRs with complex aetiologies, significant morbidity and mortality, as well as adverse economic implications, are prevalent in the postmarketing life of a drug. Despite the best efforts of investigators to identify the optimal drug dose in the right patient during pre-marketing development, systems are required postmarketing to manage risk and minimise harm.

6.3 Spontaneous ADR Reporting System

The most convenient system of postmarketing ADR detection is the spontaneous reporting system. Iatrogenic catastrophes, most notably the thalidomide tragedy of the early 1960s, stimulated the development of these systems. In the UK, the Yellow Card Scheme was created in 1964. This permits any suspected ADR to be reported to the UK Medicines and Healthcare products Regulatory Agency (MHRA), originally by all doctors, dentists and coroners. More recently, pharmacists[157] and nurses[158] have also been authorised as reporters. The main strength of the Yellow Card Scheme, as with any spontaneous reporting scheme, is the ability to detect very rare or unexpected ADRs. This stems from the fact that a large population is monitored for the life of the drug. For example, the cardiac valvular disease caused by fenfluramine was discovered after 24 years of marketing, mainly as a result of a sudden increase in its use as an anorectic agent.[159] The Yellow Card Scheme has identified important ADRs such as uveitis caused by metipranolol, visual field defects caused by vigabatrin and QT interval prolongation caused by cisapride.[160,161] The major drawback of spontaneous reporting systems is (paradoxically) their reliance on healthcare professionals to report ADRs; under-reporting remains a major problem.[162] Reasons for this include time constraints, poor knowledge of the system and, most notably, difficulty in establishing a causal link between the reaction and a drug.[163] The Yellow Card Scheme has broadened its reporter base and made the reporting cards more widely available, most recently introducing an electronic reporting system.[161]
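Signals in spontaneous-report databases of this kind are commonly screened with disproportionality measures such as the proportional reporting ratio (PRR), which compares how often a reaction is reported for the drug of interest with how often it is reported for all other drugs. A minimal sketch with made-up counts (the figures and the flagging threshold mentioned in the comment are illustrative, not taken from the Yellow Card Scheme):

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio from a 2x2 report table.

    a: reports of the reaction of interest for the drug of interest
    b: reports of all other reactions for the drug of interest
    c: reports of the reaction of interest for all other drugs
    d: reports of all other reactions for all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

# Illustrative counts only: 30 reports of the reaction among 1030 reports
# for the drug, versus 500 among 100 500 for the rest of the database.
signal = prr(a=30, b=1000, c=500, d=100000)
print(f"PRR ~ {signal:.1f}")  # ratios well above 1 suggest disproportionate reporting
```

A PRR near 1 means the reaction is reported no more often for the drug than for the database as a whole; screening criteria in practice also consider the absolute number of reports, since small counts make the ratio unstable.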

6.3.1 Establishing ADR Causality

The formal establishment of a causal link between a suspected drug and a reaction is a key domain of the seasoned clinical pharmacologist. The Yellow Card Scheme is designed to collect suspected ADRs — it is not the responsibility of the reporter to decide whether or not the suspected drug caused the ADR. However, the symptoms of an ADR may be similar to a disease, so the clinical training of the reporter is important in raising suspicion.

Clearly, a detailed knowledge of the pharmacology of the suspected drugs, and of the possible aetiologies of the clinical event, is important in establishing the likelihood of causation. The inherently subjective nature of this procedure has led to the development of standardised causality assessment algorithms.[164–167] The lack of a ‘gold standard’ makes it difficult to fully validate the results of these processes;[168] however, a high level of agreement between different assessors has been observed with some methods.[164,165] The need for further development of causality assessment methods is also recognised.[169,170]

6.4 Event Monitoring

Another drawback of the spontaneous reporting system is that the total number of people exposed to the drug is not known. The UK PEM programme is a postmarketing observational cohort study.[171] This scheme, run by the Drug Safety Research Unit (DSRU), aims to identify all patients prescribed the selected drug by general practitioners (GPs) in England. Prescriptions for the drug are received in confidence by the DSRU, which then requests details of any event experienced by the patient after about 6 months. The detailed method has been described elsewhere.[171,172] The advantage of this method is that the reporter does not even have to suspect a drug of causing the clinical event. Additionally, the GPs are prompted for the reports, leading to higher response rates (about 58%[172]). Furthermore, an estimate of the total number of individuals exposed to the drug is obtained, allowing incidence rates to be calculated. The main limitation is the lack of data on compliance (whether the prescribed drug was actually taken).
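Because PEM captures the size of the exposed cohort and the observation window, a crude incidence rate can be computed directly, which spontaneous reporting cannot provide. A minimal sketch with hypothetical numbers (the cohort size, event count and follow-up period are invented for illustration):

```python
def incidence_per_1000_patient_months(events: int, patients: int,
                                      months_observed: float) -> float:
    """Crude incidence rate per 1000 patient-months of observation.

    Assumes every patient is observed for the same period; a real
    analysis would sum each patient's individual follow-up time.
    """
    patient_months = patients * months_observed
    return 1000 * events / patient_months

# Hypothetical PEM-style cohort: 12 events among 8000 patients, each
# observed for ~6 months after the first prescription.
rate = incidence_per_1000_patient_months(events=12, patients=8000,
                                         months_observed=6)
print(f"{rate:.2f} events per 1000 patient-months")
```

The same event count without a denominator (as in a spontaneous reporting scheme) would permit no such rate, only a count of reports.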

It must be recognised that both the Yellow Card and PEM schemes are essentially hypothesis-generating tools, in that they are methods that initially highlight significant associations between drugs and adverse effects of drugs. Formal epidemiological studies such as case-control or cohort trials then need to be performed to confirm or refute the hypothesis.

7. Conclusion

From the moment a decision is made to develop a pharmacologically active agent into a drug, a prime objective is the establishment of a dose that achieves maximum efficacy and minimal adverse effects. The design of classical pre-marketing clinical trials limits their capacity to establish a dose that is appropriate for all types of patients.

Several clinical pharmacology disciplines help gear dose selection towards a more individualised approach: PK-PD modelling and population pharmacokinetics allow the factors that influence drug response and adverse effects to be investigated, and pharmacogenetics may bring about genotype-targeted therapeutics. Meanwhile, methods to monitor ADRs are immensely important in protecting patients from the adverse effects of drugs. These and other PMS activities require the input of clinical pharmacologists so that drug safety is considered not only during the development of the drug but throughout its postmarketing lifetime.