
1 The Science of Pharmacometrics

1.1 Introduction

Pharmacometrics is the scientific discipline that deals with the quantitative description of disease processes, drug effects, and the variability in drug exposure and response. Mathematical and statistical principles, along with trial information, are used to interpret pharmacological observations obtained from the preclinical through clinical stages of drug development. Moreover, the pharmacometric approach integrates information across the various stages of drug development to ultimately influence therapeutic and regulatory decisions. In essence, the science of pharmacometrics is aimed at improving the efficiency and success of drug development.

The interdisciplinary science of pharmacometrics draws on basic pharmacology, clinical pharmacology (pharmacokinetics/pharmacodynamics, PK/PD), pathophysiology, statistics, and computational techniques. The incorporation of mathematical and statistical models provides a bridge across these disciplines to explain pharmacological behavior and the inherent variability in drug response, for both desired and undesired effects. A compilation of techniques is used in pharmacometric analyses that primarily involve the modeling and simulation of data. These techniques include population pharmacokinetic analysis, exposure–response evaluation for drug efficacy and safety, clinical trial simulations, and disease progression modeling.

Several researchers have discussed the increasing importance of modeling and simulation for enhancing drug development [1–5]. In oncology, PK/PD and physiological modeling and simulation are increasingly used to improve the understanding of the intricate relations of biological and physiological parameters that affect drug behavior at a molecular level. Moreover, information obtained from these modeling and simulation exercises has been incorporated into clinical trial simulations that ultimately yielded plausible trial outcomes. A comprehensive text on pharmacometrics has recently been published, detailing the theory and the different types of analyses performed with modeling and simulation [6].

The following chapter summarizes the theoretical concepts and methodologies employed in pharmacometric analyses during drug development and regulatory review. Specific examples are presented that successfully incorporate these pharmacometric principles in various aspects of drug development.

1.2 General Applications

The value of pharmacometric principles can be exemplified at all stages of drug development and during eventual regulatory review. The techniques used for data analysis create the ability to translate information across the various stages. A major tenet of pharmacometric application to the drug development process has been eloquently described by Sheiner, who coined the “learn–confirm” approach [7]. He asserts that the process of drug development should be science-driven, learning from experience and confirming what has been learned. This approach depends on the application of pharmacometric modeling and simulation to progress through the learn–confirm cycles.

The subsequent steps in the drug development process are devised by incorporating the knowledge obtained from already acquired data and an explicitly defined model. Data are collected, and pharmacometric models are built to describe the data and confirm prior knowledge about the drug candidate. Modeling and simulation is then applied to acquire knowledge from new data and to predict future outcomes for safety and efficacy. This process allows informed decisions to be made about future experiments and trial designs.

Specifically, the potential applications of pharmacometric analyses range from candidate molecule selection, identification of biomarkers and surrogates, dosage/regimen selection and optimization, prognostic factor evaluation, benefit/risk evaluation, to clinical trial forecasting. A schematic demonstrating the various applications throughout the drug development stages is illustrated in Fig. 1. Pharmacometric methods provide a coherent, scientifically based framework to maximize the use of information and efficiency of decision making during the drug development and approval process.

Fig. 1

Applications of the “learn–confirm” approach in drug development. Adapted from Meibohm et al. (2002)

1.2.1 Optimizing Antineoplastic Dosage Regimens

In oncology, the main purpose of designing an optimized dosing regimen is to destroy tumor cells and, at the same time, minimize the adverse effects of chemotherapy. Ideally, the fine balance of risk versus benefit for chemotherapy is explored via the administration of different dosing regimens. However, the exploration of several dosing strategies in clinical trials may be costly, unfeasible, and, in some cases, unethical. Simulation of chemotherapy exposures can be used to investigate different dosing schemes to ultimately select the optimal dosing regimen.

For several drugs, a single dosing scheme may not be able to achieve target exposures in the majority of patients. This may necessitate dose individualization and therapeutic drug monitoring (TDM). Once the concentration–effect relationship is defined, the use of TDM can improve the clinical use of antineoplastic drugs, most of which have very narrow therapeutic indices and especially variable pharmacokinetics. Pharmacometric modeling can help identify this need and can also provide recommendations for the TDM strategy [8].

One of the most important uses of modeling and simulation is the development of a well-defined exposure–response relationship to support the approval of a dosing regimen not directly investigated in clinical trials. For the majority of oncology therapies, the proposed labeling includes the dosing regimens studied in registration trials. An exposure–response model can be used to explore intermediate doses that were not studied in clinical trials. This type of analysis, in conjunction with risk–benefit evaluation, may yield a regimen that offers similar effectiveness with minimized toxicity. In optimizing the dosage regimen, it is important to note that further extrapolation to dosing regimens outside the studied dose range may not be appropriate. Nonetheless, a defined exposure–response model may help guide the design of additional clinical trials involving the antineoplastic therapy.
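As a minimal illustration, the sketch below projects response at an intermediate dose using a fitted Emax exposure–response model; all parameter values are hypothetical, and dose-proportional exposure is assumed purely for illustration.

```python
# Minimal sketch with hypothetical parameters: projecting response at an intermediate
# dose (not studied directly) from an assumed Emax exposure-response model.
EMAX, EC50 = 0.80, 40.0   # hypothetical maximal response and potency (in AUC units)
CL = 5.0                  # hypothetical typical clearance (L/h)

def predicted_response(dose_mg: float) -> float:
    """Project response for a given dose, assuming linear (dose-proportional) PK."""
    auc = dose_mg / CL                 # exposure under linear PK
    return EMAX * auc / (EC50 + auc)   # Emax exposure-response model

for dose in (100, 150, 200):           # 150 mg plays the role of the intermediate dose
    print(f"{dose} mg -> predicted response {predicted_response(dose):.2f}")
```

The same projection would, of course, only be trusted within the dose range supported by the underlying data, as noted above.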

1.2.2 Future Trial Design

The two most common causes of failure in late-stage drug development are lack of efficacy and unwarranted toxicity of the oncology agent investigated. Unsuitable trial design and a lack of integration of prior knowledge are often the reasons for the unsuccessful outcomes of these trials. This knowledge gap restricts the information needed to inform the coherent design of clinical trials in human patients. Modeling and simulation provides a path to incorporate prior knowledge and offers a promising way forward to rationally design hypothesis-testing clinical trials.

Quantitative analyses using trial models and clinical trial simulations are useful for strategically designing oncology registration trials. This tailored, knowledge-driven approach may provide decisive insight into aspects such as dosing (e.g., the number and separation of dose levels), trial design (e.g., adaptive vs. fixed, crossover vs. parallel design), determination of sample size and power (e.g., type I and II error), and evaluation of drug interactions and disease effects [9, 10]. Keeping these factors in mind, clinical trial simulation can aid in the realization of a rational clinical drug development program.

For example, prior information from the early clinical stages allowed the development of an exposure–response model for degarelix, a gonadotropin-releasing hormone antagonist for the management of prostate cancer [11]. Modeling and clinical trial simulation led to the suggestion of a new, optimal dosing regimen for use in the registration trial. This integration of knowledge led to the eventual approval of the drug for use in prostate cancer patients [12]. In hindsight, the effectual pharmacometric analyses aided in the rational dosing and design of the trial, ultimately improving the potential for success. The degarelix example is detailed in Sect. 5.1.

1.2.3 Quantitative Disease–Drug–Trial Models

In addition to understanding the drug properties and exposure–response relation, knowledge of the time course of the disease status can aid in clinical trial design and oncology drug development. Disease–drug–trial models are mathematical expressions of the time course of biomarkers and clinical outcomes, placebo effects, pharmacological effects of drugs, and trial execution characteristics [13]. These expressions can be used in concert to envisage the time course of disease in treated and untreated conditions. In turn, simulations using disease–drug–trial models may be able to predict the effects of various treatment options, and the corresponding consequences, on the future course of the disease process. The entirety of this information can be used to develop more efficient, and hopefully successful, clinical trials.

Disease models that quantify the relevant biological system in the absence of drug are further discussed in Sect. 3. Drug models are intended to quantitatively characterize the pharmacology and exposure–response relationship for both efficacy and safety of drugs. In order to integrate information across the development stages, it is imperative that early studies focus on bridging exposure–response across patients, healthy subjects, animals, and in vitro results by performing adequate dose-ranging studies. In turn, the bridging of exposure–response across patients and healthy subjects can aid in designing better future trials for a potential oncology therapy.

Trial models account for factors that determine patient characteristics and behaviors, such as inclusion/exclusion criteria, protocol adherence, premature discontinuation, and interdependence (covariance) of baseline variables. All these factors can appreciably influence clinical trial outcomes and should be considered prior to future trial design. Incorporation of these factors during the modeling and simulation of a clinical trial can contribute to providing a better foundation for designing future trials, as in the sketch below.
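For instance, a trial-execution model might draw correlated baseline covariates and premature-discontinuation times before the drug and disease models are layered on top. The following sketch uses entirely hypothetical distributions and numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # simulated trial size

# Hypothetical trial-execution model: correlated baseline covariates (weight, age)
# and exponentially distributed dropout times; all values are illustrative only.
mean = [70.0, 60.0]                                  # mean weight (kg) and age (years)
cov = [[100.0, 30.0], [30.0, 80.0]]                  # covariance encodes a weight-age correlation
weight, age = rng.multivariate_normal(mean, cov, size=n).T

dropout_time = rng.exponential(scale=40.0, size=n)   # weeks until premature discontinuation
completed = dropout_time > 52.0                      # subjects retained through 1 year

print(f"weight-age correlation: {np.corrcoef(weight, age)[0, 1]:.2f}")
print(f"simulated completion rate: {completed.mean():.0%}")
```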

1.2.4 Prognostic Factors

In addition to dose-ranging studies, the clinical pharmacology characterization of a new drug involves several studies to identify significant prognostic factors (e.g., body weight, gender, food intake, and hepatic/renal impairment). In oncology, important prognostic factors include patient age, staging of the disease (i.e., tumor size, grade, and location, presence of metastatic disease), and recurrence of the disease. These prognostic factors can help describe the intended study population, help formulate the study objectives, and ultimately influence the treatment strategy. The causal relationship between prognostic factors and the study endpoint may not be readily available from early drug development, but can be simulated from a previously developed drug model. This requires that prognostic factors be accounted for in the study analysis to evaluate results within and across studies.

Analysis of the docetaxel exposure–response relationship in patients with cancer successfully identified a subpopulation more prone to toxicity [14]. The results showed that patients with elevated hepatic enzymes had a 27 % reduction in docetaxel clearance and were at a higher risk of grade 4 neutropenia. This significant finding was the impetus for the dosing recommendations in the label for patients with liver insufficiency. The drug development program of docetaxel exemplifies the value added by incorporating prospective planning using modeling and simulation into clinical trials.

1.2.5 Special Populations: Pediatrics

The use of pharmacometric analyses has enabled the implementation of PK studies in special populations, where the number of samples to be obtained per subject is limited because of logistic, ethical, and medical concerns. In particular, modeling and simulation has facilitated drug development in the pediatric population. The prevalent application of pharmacometric analyses in pediatric PK studies can mainly be attributed to its capability to analyze clinical trials with sparse PK data collection, which are common features in pediatric studies.

The use of pharmacometric approaches has been encouraged by the regulatory incentives offered for performing pediatric PK studies during clinical development. The FDA offers a 6-month extension of patent exclusivity for a new drug once the sponsor fulfills the requirement of the written request to characterize the exposure–response relationship of the drug in pediatrics. When designing the pediatric trial, the integration of historical information (i.e., a well-defined exposure–response relationship in adults) can guide study design and analysis for the use of the same drug in pediatrics. Modeling and simulation is an influential tool that can be used to provide reasonable trial designs, rational dosing recommendations, and useful labeling information in pediatrics when sufficient understanding of adult and pediatric pharmacology is available [15].

1.3 Model-Based Drug Development and Progressive Model Building

The learn-and-confirm paradigm suggests that the model-based drug development (MBDD) process allows the entire base of pertinent prior knowledge to be integrated into decision-focused recommendations for the future [7]. For example, MBDD can use the wealth of knowledge from predecessor drugs with a similar mechanism of action [16] to develop newer therapies in the same therapeutic class of compounds. Moreover, efficacy and safety drug models can be developed based on preclinical data of the new drug to inform study design for early clinical development. Prior clinical experience with structurally similar molecules can also provide information to serve this purpose. The models can be continually updated throughout clinical development, and thus the attributes of the new drug would correspondingly become better defined.

In an MBDD paradigm, models are both tools and primary aims of drug development programs. Presently, population models are typically developed in the later or end stages of clinical development. A more practical way to economize the time and cost of developing models is to update a model as new knowledge accumulates. The use of a “progressive model building” (PMB) paradigm allows for this continuous incorporation of new information. PMB allows knowledge to be carried forward throughout the development of a given drug product. At the same time, PMB provides the ability to separate a big problem into several small components that are easier to solve. However, implementation of this paradigm requires an open collaboration of scientists from all disciplines and an institutional commitment to use the “current” model while designing the next trial.

2 Types of Data and Trial Designs

Throughout the drug development process, individual clinical studies are designed to answer specific questions and elucidate pharmacological attributes of the drug. Oncology trial protocols are based on prespecified standards and plans for types of data to be collected as well as analyses to be conducted. Thus, the trial design determines both the data collection and the data analysis methods.

In the early stages, the design of clinical trials is focused on evaluating the PK characteristics and toxicity profiles of the drug in question, in an attempt to define dose-limiting toxicities and the maximum tolerated dose (MTD). Competing treatment schedules and drug-combination strategies may also be explored during this time. Subsequent to obtaining initial safety and PK data, the drug candidate is evaluated for potential pharmacological activity within the specified population the drug is intended for (otherwise known as proof-of-concept trials). Upon deciding to proceed into the later stages of drug development, the focus of the trial design is to demonstrate efficacy compared with standard therapy in the intended population. During this stage of development, the safety aspects of the potential therapy can be further evaluated. At each stage, it is imperative that prior knowledge is efficiently utilized to design future studies. The quality of data obtained from each investigation determines the type of knowledge gained and the ability to utilize the information. Thus, optimal sampling schemes for exposure and endpoint measurement (safety and efficacy) should be devised as part of the clinical protocols.

2.1 Data

The frequency, schedule, and duration of data sampling govern the type of quantitative information that can be obtained from a trial. Generally, there are two types of data that can be acquired during clinical trials: rich data and sparse data. For PK–PD measurements, data are typically collected from trials conducted in a small number of patients over a short time duration. Usually, “rich data” (i.e., several samples from each subject) are collected under controlled conditions. With this sampling strategy, subject-level data can, in most cases, be analyzed independently of the others and then summarized. This kind of data is best suited for elucidating the time course of drug exposure and response for the subsequent building of structural models (see Sect. 4.1 for details). Examples of studies that employ “rich-data” sampling are dose-proportionality studies and bridging studies that are performed to evaluate the impact of prognostic factors (e.g., food, renal/hepatic impairment, etc.) on the PK of a drug. Generally, 10–20 samples per subject are collected in these rich-data experiments.

Conversely, “sparse data” are collected in trials that are conducted to appraise the efficacy and safety of a drug, in a large number of patients and for relatively longer durations. The nature of these larger trials necessitates the infrequent sampling of PK–PD measurements for each individual. This sampling strategy poses a challenge to analyze data from each subject separately. Sparse data are most suited to building statistical models (see Sect. 4.1 for details). Examples of studies that collect sparse data are the late stage pivotal or registration trials. In such trials, relatively few samples (1–3) per subject are collected since obtaining several samples from each individual patient may not be feasible.

2.2 Trial Designs

Trial designs in clinical oncology investigations have been summarized and deliberated in several publications [10, 17–21]. Specifics of trial design features for oncology drug development are described elsewhere in this handbook. This section provides a general summary of trial designs commonly employed in oncology drug development.

The three most frequently used trial designs are parallel, crossover, and titration. In a parallel study design, subjects are randomized to one of several treatment options (i.e., placebo/control or different dose levels). While a parallel design supports the estimation of population PK–PD characteristics, individual subject-level characteristics are not easily obtained. In a crossover study design, each subject receives a sequence of all treatment options. As this type of trial employs repeated measures within a given subject, it is the most powerful study design for estimating individual exposure–response relationships. Crossover trials are generally longer in duration and may experience carry-over effects from previous treatments, necessitating sophisticated data analysis. Lastly, the titration design employs incremental increases in dose either until no additional benefit is observed or until dose-limiting toxicity occurs. This design is generally utilized in the initial stages of clinical development and permits the characterization of individual PK–PD parameters.

Trial design can also be governed by the way randomization to treatment is performed. Subjects can be randomized to receive a specified dose or concentration of the test drug or to a particular effect elicited by the drug. Henceforth, such trials are referred to as randomized dose-controlled (RDCT), randomized concentration-controlled (RCCT), or randomized effect-controlled (RECT) trials, respectively. In the case that a placebo control is considered unethical, an active control group can be employed in the trial.

In a RDCT, subjects are randomly assigned different doses of the drug. After randomization, data are collected throughout the trial and subsequently analyzed using appropriate statistical methods. These types of trials are commonly conducted due to the simple execution and analysis of the data.

For the RCCT design, a set of target drug concentrations is chosen based on the exposure–response relationship established from prior studies. Using prior information about the drug’s pharmacological characteristics, target concentrations are chosen and subjects are randomized to one of these prespecified target concentrations [22]. Such a design necessitates an initial dose-titration period, during which the doses that ensure the attainment of concentrations within the specified target ranges (e.g., 5 ± 0.5 μg/L) are identified.
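A minimal sketch of such a titration step is shown below, assuming linear PK so that the steady-state concentration is simply the dose rate divided by clearance; all values are hypothetical.

```python
# Hypothetical dose-titration sketch, assuming linear PK: steady-state
# concentration = dose_rate / CL, so the dose is adjusted proportionally
# until the observed concentration falls within the target window.
target, tol = 5.0, 0.5        # target concentration and acceptance window (ug/L)
true_cl = 28.0                # this subject's (initially unknown) clearance (L/h)
dose_rate = 100.0             # starting dose rate (ug/h)

for step in range(5):
    observed = dose_rate / true_cl           # concentration produced by the current dose
    if abs(observed - target) <= tol:
        break
    dose_rate *= target / observed           # proportional adjustment under linear PK

print(f"titrated dose rate: {dose_rate:.0f} ug/h after {step + 1} measurement(s)")
```

In practice, residual variability and nonlinear PK would make the titration less immediate than in this idealized example.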

A variation of the RCCT design is one in which doses are prespecified based on a specific demographic variable (e.g., body surface area, BSA). This type of design is commonly used in adult and pediatric oncology trials in which BSA-adjusted doses are routinely administered. Similarly, in an RECT, subjects are randomly assigned to a prespecified target effect level. In this case, the target effects are chosen based on prior knowledge of the drug’s exposure–response relationship, and the dose is titrated accordingly.

RCCT and RECT designs have similar requirements for implementation. For these trials, it is necessary to utilize prior exposure–response information to select the appropriate target concentration or effect ranges. Moreover, trial conduct depends on an efficient and sensitive analytical assay method with a short turn-around time, and a sufficient number of formulation strengths to allow for dose adjustments as needed. Unfortunately, very few drug development programs utilize RCCT or RECT designs. This may be due to their relatively complicated execution and data analysis compared with the RDCT design, as well as the cost of implementing TDM if the drug is approved [23, 24].

3 Disease Models

Model-based assessment of disease progression has become a significant aspect of drug development. Disease progress refers to the trajectory of a disease over time, which can be evaluated by observing the time course of a biomarker or other clinically relevant measure. This measure should reflect the status of the disease or the clinical status of a patient. A disease model is a mathematical representation of a biological system in the absence of therapy and attempts to quantify the time course of the disease. There are three chief sub-models that capture relevant aspects of disease modeling, namely the relationship between biomarkers and clinical outcomes, the natural disease progression, and the placebo effect. In addition, there are three general approaches that can be applied to building any disease model: systems biology, semi-mechanistic, and empirical modeling. The main features of these approaches are summarized in Table 1.

Table 1 Comparison of systems biology, semi-mechanistic, and empirical disease models

3.1 Biomarkers and Clinical Outcomes

Biomarkers are commonly used as outcomes in clinical trials in lieu of the actual clinical endpoints, especially when clinical endpoints occur only after prolonged periods of time. Therefore, the characterization of the relationship between biomarkers and clinical outcomes for a particular pathological condition is a vital aspect of disease modeling. Such models can then support trial design optimization and risk projection based on biomarker information. Systems biology models are very useful for this purpose [25]. Similar to physiologically based models, systems biology models are based on an understanding of the underlying biological system. The resulting models attempt to mathematically represent the system at the molecular level, with an ability to account for pathological perturbations to the system. The model parameters are estimated from multiple detailed in vitro and ex vivo experiments.

Departing from this complexity, empirical and semi-mechanistic models are generally data driven and do not consider details of the underlying and associated biological systems. Semi-mechanistic models simplify the system just enough to describe the available data adequately. Empirical disease models are mathematical expressions used to interpolate between observed data and seldom relate to the underlying biology. Nevertheless, such simple models are useful and have been employed in making go/no-go decisions and in designing pivotal trials. The empirical parametric hazard model that describes the relationship between the change in tumor size and survival is one such example used for this purpose. All types of models are useful; the choice among them depends on the question being posed during development.

3.2 Natural Disease Progression

Natural disease progression modeling attempts to describe the change observed in the clinical outcome over a period of time. Drug treatments can modify the natural progression of the disease, and such models can provide insights into the time course and management of several diseases [26]. For example, the natural progression of Alzheimer’s disease, as measured by the Alzheimer’s Disease Assessment Scale–Cognitive score (ADAS-COG), has been described using an empirical linear model [27]. In oncology, the time course of tumor growth has been characterized in patients with non–small cell lung cancer using a modified Gompertz model [28]. Using this model, in conjunction with their drug model, the investigators were able to predict tumor size changes during and after multiple cycles of chemotherapy. Mechanistic models are also being studied since they allow data collected under varied experimental conditions to be analyzed simultaneously. A mechanistic disease progression model for arthritis in rats has been proposed [29].
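For orientation, a generic (unmodified) Gompertz growth function for tumor size TS at time t can be written as follows; the parameterization actually used in [28] was modified from this basic form.

$$ TS(t)=TS_{\max}\cdot \exp\left[\ln\left(\frac{TS_0}{TS_{\max}}\right)\cdot e^{-kt}\right] $$

Here TS0 is the baseline tumor size, TSmax is the limiting tumor size, and k governs how quickly growth decelerates as the tumor approaches its limiting size.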

3.3 Placebo Effect

The effect observed in a placebo group refers to the psychosocially induced biochemical changes in a patient’s brain and body that in turn may affect both the natural course of a disease and the response to therapy [30]. Although the placebo effect is not directly related to the disease, it can considerably impact the outcomes observed in trials. For disease conditions that are measured symptomatically, such as pain and depression, this type of phenomenon is commonplace. Therefore, modeling the magnitude and time course of placebo effects is valuable for projecting net drug effects and also aids in estimating sample size during trial design. Recently, a model that describes the time course of the Hamilton Depression Rating Scale (HAMD-17) clinical score in the placebo arms of antidepressant trials, combined with a dropout mechanism, has been developed [31]. This model provides new insights into the validity of the results of several longitudinal registration trials currently used for new drug products.

For oncology trials, the placebo effect is not generally considered to be a significant factor in tumor response. In a review of 37 oncology trials, it was found that a placebo effect was observed with improvement in symptoms such as pain and appetite but rarely associated with positive tumor response [32]. Nonetheless, modeling of the placebo effect for trials associated with the treatment of symptomatic measures (e.g., pain) would aid in trial design of treatments intended to alleviate these associated problems.

4 Types of Pharmacometric Analyses

4.1 Conceptual Framework

Population PK–PD models involve both structural and statistical model components. Structural models account for the population parameters of the model, or “fixed effects,” and are deterministic in nature. A complete population PK–PD model incorporates four structural components: (1) a PK model, (2) a disease progression model, (3) a PD model, and (4) a covariate model. The average population parameters obtained from these models constitute the “fixed effect” portion of the population model and generally define the average value of a parameter in a population and/or the average relationship between measurable patient factors and pharmacokinetic and/or pharmacodynamic parameters. For example, parameters such as the typical value of systemic clearance for a 70 kg individual and the mean potency (i.e., EC50) of a drug are classified as fixed effects. These components of the model do not account for the inherent variability seen at the individual and observational levels.

To account for this variability, stochastic statistical models are generally implemented in population PK–PD models to describe the “random effects” seen with observational data. Three different statistical models within a population model are used to describe variability: between-subject variability (BSV) model, between-occasion variability (BOV) model, and within-subject variability (WSV) model. BSV, or interindividual variability, signifies the random unexplained differences between different subjects, while BOV signifies the deviance in an individual between different occasions. WSV, or residual variability, measures the remaining unexplained variability when all other sources of variability are accounted for. Also known as intraindividual variability, WSV may depict model misspecification and/or assay measurement error.

The parameters obtained from these statistical models quantify the random, unexplained variation in the population. The primary assumption of the random-effect models is that the between-subject and between-occasion errors (η) are normally distributed with a mean of zero and a variance of ω². Moreover, the within-subject or residual errors (ε) are normally distributed with a mean of zero and a variance of σ².

In PK–PD, models that attempt to account for both fixed and random effects together are called nonlinear mixed-effect models. The concept of the mixed-effect model is illustrated in Fig. 2. In this example, consider a one-compartment model where the drug is given as an intravenous bolus and the volume of distribution (V) is identical in every individual (no BSV for V). Then, the concentration in the “ith” subject at the “jth” time point can be described using the following equations:

Fig. 2

Basic framework of nonlinear mixed-effect modeling

$$ C_{ij}=\frac{\mathrm{Dose}}{V}\cdot e^{-\frac{CL_i}{V}\cdot t}+\varepsilon_{ij} $$
(1)
$$ CL_i=CL_{\mathrm{POP}}+\eta_{CL,i} $$
(2)

Here, CLi is the estimated clearance of the ith subject, CLPOP is the estimated population mean clearance, ηCL,i is the difference between the population mean and individual clearances, and εij is the residual error of the jth sample of the ith subject.
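To make Eqs. (1) and (2) concrete, the following sketch simulates concentration–time data from this mixed-effect model for a handful of subjects; the dose, volume, clearance, and variance values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulation sketch of Eqs. (1)-(2) with hypothetical values: one-compartment IV
# bolus, identical V in all subjects, between-subject variability on CL only.
DOSE, V = 100.0, 20.0            # dose (mg) and volume of distribution (L)
CL_POP = 5.0                     # population mean clearance (L/h)
OMEGA, SIGMA = 1.0, 0.2          # SD of eta (BSV on CL) and of epsilon (residual error)

n_subjects = 10
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])      # sampling times (h)

eta = rng.normal(0.0, OMEGA, n_subjects)               # eta_i ~ N(0, omega^2)
cl_i = CL_POP + eta                                    # Eq. (2): individual clearances

for i, cl in enumerate(cl_i, start=1):
    eps = rng.normal(0.0, SIGMA, times.size)           # eps_ij ~ N(0, sigma^2)
    conc = DOSE / V * np.exp(-cl / V * times) + eps    # Eq. (1)
    print(f"subject {i}: CL = {cl:.2f} L/h, C(1 h) = {conc[1]:.2f} mg/L")
```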

4.2 Population Analysis Techniques

A major objective of population analyses is to estimate the population mean values of pertinent model parameters (i.e., mean CL and V) and their variances (i.e., BSV for CL and V), as well as the unexplained, residual variability. Another goal of population analyses is to explain the observed BSV using patient-specific covariates such as body size, age, gender, and disease severity. Importantly, this type of analysis helps in estimating the individual parameters (such as CLi and Vi) required to impute concentrations for PK–PD analysis and other simulations at a subsequent stage of analysis.

The most frequently employed methods for performing a population analysis are naïve pooled or naïve averaged analyses, two-stage analysis, and nonlinear mixed-effect (NM) analysis. The main attributes of these methods are summarized in Table 2.

Table 2 Main features of the common population analysis techniques

In naïve pooled analysis, individual observations from all subjects are pooled (as if all the data came from a single giant subject) to obtain average PK parameters. In essence, a model without between-subject variability (BSV) and between-occasion variability (BOV) is fitted to the pooled data from all individuals. The naïve averaged analysis is a variation of this method which involves determination of the mean of the data at each time point. Both methods provide only the central tendency of the model parameters and the random effects are not estimated. These methods are used more often for preclinical data and are appealing because of their simplicity. On the other hand, since interindividual variability is not estimated and cannot be accounted for using covariates, the potential of naïve pooled analyses is very limited.
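A minimal sketch of a naïve pooled fit is shown below, with hypothetical concentration data and a one-compartment IV bolus model; a single set of average parameters is estimated from the pooled observations, with no random effects.

```python
import numpy as np
from scipy.optimize import curve_fit

# Naive pooled sketch with hypothetical data: observations from three subjects are
# pooled and a single set of average PK parameters is fitted (no BSV, no BOV).
DOSE = 100.0
times = np.tile([1.0, 2.0, 4.0, 8.0], 3)                 # three subjects, four samples each
conc = np.array([4.4, 3.8, 3.0, 1.7,                     # subject 1
                 5.0, 4.5, 3.4, 2.1,                     # subject 2
                 4.0, 3.3, 2.4, 1.2])                    # subject 3

def one_cpt(t, cl, v):
    """One-compartment IV bolus concentration."""
    return DOSE / v * np.exp(-cl / v * t)

(cl_hat, v_hat), _ = curve_fit(one_cpt, times, conc, p0=[5.0, 20.0])
print(f"pooled estimates: CL = {cl_hat:.2f} L/h, V = {v_hat:.2f} L")
```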

In two-stage analyses, the first stage estimates average parameters for each subject from the individual observations, while the second stage involves the estimation of the population mean and variance of the parameters, after adjusting for covariates if necessary. In this second stage, relationships between patient covariates and parameters are explored. Estimates of both the central tendency and the interindividual variability can be obtained reasonably well. This method of analysis requires sufficient samples per subject to be collected (generally a greater number of observations than model parameters). It assumes that the individual parameters estimated in the first stage are known without any uncertainty, which may not hold true. Moreover, this method of analysis is unable to handle sparse observational data and concentration (or dose)-dependent nonlinear processes, which is a serious drawback.
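Continuing the same hypothetical example, a two-stage sketch fits each subject separately (stage 1) and then summarizes the individual estimates (stage 2); the data and model are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-stage sketch with hypothetical data: stage 1 fits each subject individually;
# stage 2 summarizes the individual estimates as a population mean and variance.
DOSE = 100.0
times = np.array([1.0, 2.0, 4.0, 8.0])
subjects = {
    "s1": np.array([4.4, 3.8, 3.0, 1.7]),
    "s2": np.array([5.0, 4.5, 3.4, 2.1]),
    "s3": np.array([4.0, 3.3, 2.4, 1.2]),
}

def one_cpt(t, cl, v):
    return DOSE / v * np.exp(-cl / v * t)

cl_estimates = []
for label, conc in subjects.items():                       # stage 1: individual fits
    (cl_i, v_i), _ = curve_fit(one_cpt, times, conc, p0=[5.0, 20.0])
    cl_estimates.append(cl_i)

cl_estimates = np.array(cl_estimates)                      # stage 2: population summary
print(f"mean CL = {cl_estimates.mean():.2f} L/h, "
      f"BSV (variance) = {cl_estimates.var(ddof=1):.3f}")
```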

In nonlinear mixed-effect analysis, data from all individual subjects are modeled simultaneously to yield both population mean parameter and variance estimates. Since both stages of the two-stage method are executed in one step, the nonlinear mixed-effect technique is otherwise known as the “one-stage” method. Subsequent to this one-stage optimization, individual parameters are estimated. This type of modeling is the most robust technique for analyzing both experimental and observational data and does not share the disadvantages of the other aforementioned methods. A primary advantage of the nonlinear mixed-effect method is its capability to support meta-analyses, which are valuable in summarizing data across a drug development program. Disadvantages include the need for sophisticated software that requires special training, and the fact that analyses with complicated models can be time consuming.

4.3 Model Qualification

All models are required to be qualified and deemed credible for further utilization. The term “validation” implies a procedure of paramount robustness and is generally not applicable to population PK–PD models. It is the simple fact that the true model and its parameters are not known which discourages the use of the word “validation” for such models. Therefore, the term “qualification” may be more suitable.

Prior to the commencement of any model building, the purpose for which the model is being developed should be clearly specified. A model and its corresponding set of parameters are deemed ‘qualified’ to perform a particular task if they satisfy certain pre-specified criteria. Various methods exist for exploring these criteria, many of which are graphical or statistical assessments of the observations in relation to measures of the model prediction. Application of a predictive check to a model and its parameters along with Monte-Carlo simulations is one of the effective methods used for model qualification [3336].
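The following sketch illustrates the idea of a simulation-based predictive check: the candidate model is used to simulate many replicate trials, and an observed summary statistic is compared with the distribution of the same statistic across the simulations. The model, parameters, and observed value are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical predictive-check sketch: compare an observed summary statistic with
# its distribution across Monte-Carlo replicates simulated from the candidate model.
DOSE, V, CL_POP, OMEGA, SIGMA = 100.0, 20.0, 5.0, 1.0, 0.2
observed_median_c2h = 3.6                      # observed median concentration at 2 h (mg/L)

def simulate_trial(n_subjects=30):
    """Simulate one replicate trial and return its median 2-h concentration."""
    cl = CL_POP + rng.normal(0.0, OMEGA, n_subjects)
    conc_2h = DOSE / V * np.exp(-cl / V * 2.0) + rng.normal(0.0, SIGMA, n_subjects)
    return np.median(conc_2h)

sim_medians = np.array([simulate_trial() for _ in range(1000)])
lo, hi = np.percentile(sim_medians, [2.5, 97.5])
print(f"observed median {observed_median_c2h} mg/L vs simulated 95% interval "
      f"[{lo:.2f}, {hi:.2f}] mg/L")
```

An observed statistic falling well outside the simulated interval would argue against qualifying the model for the stated purpose.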

Based on the purpose of the model, qualification techniques can evaluate the descriptive capacity and the ability for extrapolation of the given model. Adequate description of the experimental data will ensure that the proposed model and its parameters are qualified to make trustworthy inferences, within the range of the data studied. Routine diagnostic tests such as goodness-of-fit plots, summary statistics, and precision of the parameter estimates are generally used throughout the modeling process to improve and ultimately qualify a model.

Importantly, the physiological interpretation of model parameters is a significant aspect of model qualification. The model and its corresponding set of parameters should have a conceptual and physiological basis for the specified task for which the model was proposed. In addition, the credibility of the model and parameters should be ascertained and deemed satisfactory by a panel of subject matter experts. It is essential to note that there is no prescribed means of assessing whether a model can be used for extrapolation. The credibility of the model, i.e., whether the model was derived from plausible physiological principles that appear reasonable to a panel of experts, is important. Thus, a model may be considered qualified to predict beyond the range of the data used for building it, provided the descriptive capacity of the model is acceptable and the model and its corresponding parameters are credible.

5 Case Studies

Pharmacometric analyses have been used at various stages of the drug development process in oncology. We present several case studies where such analyses have been employed and have had pragmatic value in decision making. Cases include drugs used for, or in conjunction with, chemotherapeutic agents. Table 3 summarizes all cases while a few selected cases have been elaborated further.

Table 3 Summary of several case studies where pharmacometric analysis has had an impact on decision making during different stages of drug development

5.1 Degarelix: Optimizing a Dose for Prostate Cancer

Degarelix is indicated for the treatment of advanced prostate cancer. During clinical development, the primary endpoint used in clinical trials was testosterone < 0.5 ng/mL between day 28 and 1 year in 90 % of patients. The dosing goals were to suppress testosterone levels by day 28 of treatment initiation in at least 90 % of the patients and maintain this suppression through 1 year of therapy. The sponsor conducted five early and late phase dose-finding clinical studies but was unable to finalize an optimal dosing regimen. An end-of-phase 2A meeting was arranged between the FDA and the sponsor in March 2005 to discuss a better drug development plan for degarelix.

The aim of the pharmacometric investigation was to determine a rational dosing regimen that would maximize the effectiveness of degarelix in advanced prostate cancer patients. Population analysis was conducted to develop an exposure–response model for degarelix based on the five dose-finding studies conducted by the sponsor. The FDA suggested alternative dosing strategies and clarified the regulatory expectations of the NDA. For initial suppression of testosterone levels by day 28, a higher loading dose requirement was explored. A lower maintenance dose was derived to maintain the testosterone suppression through 1 year of drug therapy. Using a mechanistic PK–PD model and extensive clinical trial simulations, an optimal dosing regimen was suggested for the registration trial. All the pharmacometric analyses were conducted by the sponsor itself, under the guidance of the FDA. The model-based regimen was then evaluated in a registration trial that resulted in positive outcomes and led to the approval of degarelix for this indication.

Degarelix was approved for use in advanced prostate cancer based on a registration trial that employed a dosing regimen that was selected via modeling and simulation, which several prior studies failed to derive [11]. Trials in prostate cancer patients are challenging and costly and early interaction between the sponsor and the FDA enabled more cost-efficient drug development and a smoother review process.

5.2 Busulfan: Determination of Dosing for Pediatric Patients

Busulfex (an intravenous formulation of the drug busulfan) is used in combination with cyclophosphamide as an immunosuppressive conditioning regimen for bone marrow ablation prior to hematopoietic stem cell transplantation. The drug was initially approved for use in adults with chronic myelogenous leukemia. The dose-limiting toxicity associated with busulfan is potentially fatal hepatic venoocclusive disease (HVOD). Clinical studies suggested that a therapeutic window of 900–1,500 μmol/L/min in adults was appropriate to balance the occurrence of HVOD against leukemic relapse and failure to engraft. The FDA issued a written request (WR) to the sponsor to determine the PK of busulfan in pediatrics (aged 4–17 years) and the optimal dosing regimen in this population that would achieve target exposures.

Using modeling and simulation, the investigation sought to determine an appropriate dosing strategy for Busulfex in pediatric patients. A population PK study was conducted to characterize the PK of intravenous busulfan in pediatrics and to provide dosing recommendations. Clinical studies indicated that the therapeutic window was considered to be similar for pediatric patients. However, this was confounded by the increased variability in the PK of oral busulfan seen in pediatric patients compared with adults. Hence, a target therapeutic window with a lower, more conservative threshold for toxicity than in adults was used for pediatric patients (900–1,350 μmol/L/min). Body weight, age, gender, and body surface area were explored for their impact on pediatric dosing. Simulations suggested that the mg/kg- and mg/m²-based dosing regimens were similar in their efficiency. Exposures obtained with dosing regimens comprising 1 to 7 dosing steps, including various combinations of weight cutoffs and doses, were evaluated. All the dosing regimens explored had, at best, 60 % of patients achieving target exposures after the first dose. Notably, the model revealed that the between-subject variability is large (25 %) while the within-subject variability is low (6 %), indicating that BSV is the key determinant of therapeutic success. This finding, coupled with the narrow therapeutic window for busulfan, supports implementation of therapeutic drug monitoring for optimizing drug therapy.
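The flavor of such a target-attainment simulation is sketched below: exposure is computed as dose divided by clearance for a large number of virtual patients whose clearances vary log-normally with roughly 25 % between-subject variability, and the proportion falling within the 900–1,350 μmol/L/min window is tallied. The typical clearance and dose values are hypothetical and chosen only to center the exposures in the window.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical target-attainment sketch: proportion of virtual patients whose
# first-dose exposure (AUC = dose / CL) falls within 900-1,350 umol/L/min, given
# ~25% between-subject variability in clearance. All numeric values are illustrative.
TARGET_LO, TARGET_HI = 900.0, 1350.0     # target exposure window (umol/L/min)
dose = 4000.0                            # hypothetical dose in consistent exposure units
cl_typical = 3.6                         # hypothetical typical clearance (consistent units)
bsv_sd = 0.25                            # SD of log-normal BSV on clearance (~25% CV)

cl = cl_typical * np.exp(rng.normal(0.0, bsv_sd, 10_000))   # log-normal BSV on CL
auc = dose / cl
in_window = (auc >= TARGET_LO) & (auc <= TARGET_HI)
print(f"proportion achieving target exposure after the first dose: {in_window.mean():.0%}")
```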

Based on the model predictions and practical considerations, a two-step dosing regimen was proposed from this study: 1.1 mg/kg for patients weighing ≤ 12 kg and 0.8 mg/kg (the adult dose) for patients weighing > 12 kg. In addition, considering that about 40 % of patients may not achieve target exposures after the first dose even with the optimized regimen, TDM was proposed to enhance therapeutic targeting. Instructions for dosing and TDM were incorporated into the drug label. This recommended dosing strategy has not been directly tested in clinical trials.

5.3 Disease Progression Model for Non-small Cell Lung Cancer

Lung cancer had the highest cancer-related death rate during the past decade, with rates surpassing those of colon, breast, and prostate cancer combined. Despite novel efforts and the large costs devoted to finding treatments, anticancer drugs have one of the lowest rates of successful drug development, at only 5 % [44]. Even compounds reaching Phase III clinical trials have a failure rate of about 60 %. To facilitate the drug development of novel therapies for non-small cell lung cancer (NSCLC), a tumor size (i.e., biomarker) and survival (i.e., clinical outcome) model was developed utilizing data from across a number of NSCLC trials [45]. This model can facilitate clinical screening of novel compounds and provides a tool that drug developers can use to perform clinical trial simulations to improve the design of future trials.

The goal of the pharmacometric analyses was to ascertain whether there is a relationship between tumor size progression and survival in patients with NSCLC. Four drug registration trials for NSCLC containing nine different treatments were used to develop pharmaco-statistical models that link survival to baseline risk factors and changes in tumor size during treatment. The purpose of developing these models is to leverage prior quantitative knowledge to facilitate future drug development of other NSCLC regimens. Eleven risk factors were screened based on a Cox proportional hazards model. Tumor size dynamics were modeled with a mixed exponential decay (i.e., shrinkage) and linear growth (i.e., progression) model to estimate the tumor sizes of individual patients over time. Survival times were described with a parametric survival model incorporating key risk factors and tumor size change as predictors. Results showed that baseline tumor size and the Eastern Cooperative Oncology Group (ECOG) score were consistent prognostic factors for survival. The mixed tumor-shrinkage/progression model described individual patient tumor size well, especially in the initial stages of treatment. The overall parametric survival model included the ECOG score, baseline tumor size, and week-eight tumor size change as significant predictors of patient survival time. The survival model developed from one treatment group predicted the survival outcomes for the other eight treatment groups, despite the different mechanisms of action and the fact that they were studied in different trials. When included in the parametric model, tumor size change at the eighth week allows early assessment of the activity of an experimental NSCLC regimen.
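Consistent with the description above, an illustrative form of such a mixed shrinkage/progression model for the tumor size TS of subject i is (the exact published parameterization may differ):

$$ TS_i(t)=TS_i(0)\cdot e^{-SR_i\, t}+PR_i\, t $$

where SRi is the individual exponential shrinkage rate constant and PRi the linear progression rate; the week-eight tumor size change implied by this expression then enters the parametric survival model alongside baseline tumor size and the ECOG score.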

A detailed description of the model and the simulation results is included in the proceedings of the Clinical Pharmacology Advisory Committee meeting [46]. The survival model and the tumor dynamic model will be beneficial for screening early clinical development candidates, simulating NSCLC clinical trials, and optimizing trial designs. Specifically, the model can be applied to simulate pivotal trials in order to make go/no-go decision early in development, project effect sizes and dose selection.

6 Future Perspective

Throughout both the registration trial and the regulatory review stages, late-phase attrition rates in drug development are alarmingly high [1, 47]. A primary reason for this failure rate is the lack of efficient planning during the early phases of drug development. It has been shown, and is therefore widely believed, that timely application of pharmacometric methods during the drug development and approval process can improve future development plans and reduce these attrition rates [1, 2, 5, 13, 44, 48–50]. However, modeling and simulation should neither be used to substitute for clinical trials altogether, nor as a tool to salvage failed trials for regulatory approval. During the initial stage of drug development, communication between the FDA and drug sponsors may help in more efficient planning of drug development. It is expected that end-of-phase 2A (EOP2A) meetings will facilitate this goal via more rational dose selection and a reduction in the number of cycles involved in the NDA review (FDA 2003).

In oncology, quantitative disease–drug–trial models are a valuable tool for improving future drug development. These models will be increasingly employed to design future trials using clinical trial simulations. Models can be used to perform simulations of expected survival based on tumor shrinkage, or another biomarker, for an investigational drug in early clinical studies. Refinement of these models and simulations with emerging data from new clinical studies will assist with key oncology development program decisions, including optimized dose selection and improved design of survival trials. As adequate experience is gained with a particular disease–drug–trial model suite, a standardized template can be created for the data and analysis submission for that indication. Given the limited resources, consortia on focused topics may be an effective approach toward developing such model suites. The Predictive Safety Testing Consortium (PSTC) is one such effort in this direction.

Increased partnership between the industry, academia, and the FDA is essential for the growth and wider application of pharmacometrics. In addition, increased interaction across the board between experts, such as clinicians, pharmacometricians, and statisticians, is imperative for better appreciation of this field.