
2.1 Background

2.1.1 What Is Pharmacometrics

Pharmacometrics is a bridging science that applies quantitative models representing physiology, pharmacology, and disease to describe and quantify interactions between drugs and patients (and pathogens) (Ette and Williams 2007). This involves the modeling of pharmacokinetics (PK) and pharmacodynamics (PD) with a focus on populations and variability in order to characterize, explain, and predict the PK and PD behaviors of therapeutic drugs. Variability may be predictable (e.g., due to differences in body weight) or apparently unpredictable (reflecting a knowledge gap).

2.1.2 Evolution of Pharmacometrics

To better understand the role of pharmacometrics in drug development, the major milestones in the growth of pharmacometrics are presented in Fig. 2.1, which reflects its evolution in utility. The journey of pharmacometrics started in the 1920s. The use of a one-compartment model to describe pharmacokinetics was introduced as early as 1924 (Widmark and Tandberg 1924); the use of a multi-compartment model incorporating biological and physiological components for the simulation of PK data was introduced by Teorell in 1937. The latter is regarded as the first physiologically based PK model (PBPK).

Fig. 2.1

Evolution of pharmacometrics (Modified from Gobburu 2013). PK Pharmacokinetics, PK/PD pharmacokinetics/pharmacodynamics

The major growth of PK and PD, the two core concepts of pharmacometrics, occurred during the 1960s and 1970s (Forschungsinstitut et al. 1962; Levy 1966). For PK, the first symposium bearing the term pharmacokinetics (entitled "Pharmakokinetik und Arzneimitteldosierung (Pharmacokinetics and Drug Dosage)") was held in Borstel, Germany, in 1962 (Forschungsinstitut et al. 1962), and clinical pharmacokinetics began to be recognized in the mid-1970s (Gibaldi and Levy 1976; Wagner 1981). For PD, the concept of a biophase compartment was first introduced by Segre in 1968 and was reintroduced to drug development as hypothetical effect compartment modeling by Sheiner et al. in 1979. In 1964, Ariens provided the earliest description of drugs acting through an indirect mechanism, and a model describing this was provided by Nagashima et al. (1969) 5 years later. Further systematic development and application of models for characterizing indirect pharmacodynamic responses came later from Jusko's group (Dayneka et al. 1993; Jusko and Ko 1994; Sharma and Jusko 1998).

An important focus in drug therapy is to understand the variability of a response among individuals in a population, which may arise from both PK and PD. To address this, a population modeling approach was introduced by Sheiner et al. in 1972 (Sheiner et al. 1972). They proposed the use of non-linear mixed effects (NLME) regression models to analyze patient data pooled across all sampled individuals, allowing for the quantification of between-subject and within-subject variability. This analytical approach led to the development of a computer program, NONMEM. NONMEM was first released by Beal and Sheiner in 1980 (for the IBM mainframe) and became portable across computer platforms in 1984 (NONMEM 77). Since then, a great deal of work in population PK/PD modeling using the NLME approach has followed, both in methodology and in application.

In the early 1980s, the term pharmacometrics first appeared in the Journal of Pharmacokinetics and Biopharmaceutics (Rowland and Benet 1982). Pharmacometrics was then recognized as a distinct branch of quantitative science; dedicated pharmacometrics sections were created in peer-reviewed journals, and advanced PK/PD modeling training was offered in academia. Several professional conference series have been established in the field of pharmacometrics, such as the Population Approach Group Europe Meeting (PAGE; since 1997), the American Conference on Pharmacometrics (ACoP; since 2008), the World Conference on Pharmacometrics (WCoP; since 2012), and so forth.

The importance of pharmacometrics in drug development has been increasingly recognized since the conference "The Integration of Pharmacokinetic, Pharmacodynamic, and Toxicokinetic Principles in Rational Drug Development" held in 1991 (Peck et al. 1992). Pharmaceutical companies and regulatory agencies subsequently began to pay attention to such integration-based modeling approaches in regulatory submissions for drug approvals. From the late 1990s to the early 2000s, several excellent review papers from pharmaceutical companies highlighted the value of pharmacometrics in enabling scientific and strategic decision-making in drug development (Reigner et al. 1997; Olson et al. 2000; Chien et al. 2005). Model-based findings were recognized as being able to influence high-level decisions such as trial design, drug approval, and drug labeling. These concepts were also well accepted by academia and regulatory agencies. For example, in a white paper published in 2004, the US Food and Drug Administration (FDA) underscored the importance of model-based drug development, in which pharmacometrics is believed to play a critical role (FDA 2004). Indeed, there has been a huge increase in the application of pharmacometrics in drug development over the last two decades. A survey of new drug application (NDA) submissions to the US FDA from 2000 to 2008 showed that pharmacometric analyses influenced the approval and labeling decisions of more than 60% of the submissions (Lee et al. 2011). Systematic compilations of pharmacometrics have also been published in several books (Kimko and Duffull 2003; Ette and Williams 2007; Kimko and Peck 2010; Bonate 2011).

Pharmacometrics is an evolving science with continuous advancement in novel modeling approaches and applications. Recently, physiologically based PK (and PK/PD) modeling has gained widespread application (Jones and Rowland-Yeo 2013), predominantly driven by major technological advances and increasing confidence in this approach. Quantitative systems pharmacology (QSP) is another fast-growing area (Sorger et al. 2011; Mager and Kimko 2016). It allows for predictions of the efficacy and safety of drugs based on known or possible mechanisms of action in pharmacology, pathology, or physiology. In addition, model-based meta-analysis (MBMA) (Mandema et al. 2011) has been advocated to leverage existing big data in making rational comparative risk/benefit assessments.

With the increasing application and acceptance of pharmacometrics in drug development, several regulatory guidelines and documents concerning industry best practices have been issued. This started in the late 1990s (FDA 1999; Holford et al. 1999) and has continued since (FDA 2003; EMA 2008, 2016), with the most recent position paper published in March 2016 (EFPIA MID3 Workgroup 2016).

2.1.3 Role of Pharmacometrics in Drug Development

In the past two decades, pharmacometrics—PK/PD modeling and simulation (M&S)—has developed into one of the most essential tools for the integration and interpretation of large and diverse pools of preclinical and clinical data, translating them into informative knowledge. It is beneficial not only for industry professionals in informing internal decision-making at critical drug development stages (e.g., first-in-human, proof-of-concept, pivotal trial, etc.) but also for regulatory authorities in compiling and analyzing data for approval and labeling.

There is no doubt that pharmacometrics plays an increasingly important role in improving the return on R&D investment by facilitating application of the learn-and-confirm paradigm proposed by Sheiner (1997). As a molecule moves through the drug development process, rational models are developed and refined using the accumulated data and knowledge available at each stage to predict what is likely to happen in the next stage, thereby guiding research investment. Of note, the impact of pharmacometrics on decision-making and risk management depends on the availability and quality of data accessible at a given stage of drug development. Framing the right questions and capturing the key assumptions are critical components of the learn-and-confirm paradigm and are essential for the delivery of high-value pharmacometric results. This is particularly important at the early stage of drug development when data are limited.

This chapter aims to provide an overview of pharmacometrics with a focus on drug models (i.e., PK and PK/PD models). Disease progression models (i.e., quantification of the relevant biological [physiological] system in the absence of drug) and trial models (e.g., characteristics of enrolled study subjects, dropout, and/or compliance) will not be discussed in detail; the reader can refer to the excellent review by Gobburu and Lesko (2009) and references therein for more information.

In the following sections, commonly employed PK and PK/PD analytical approaches will be introduced, followed by the software used for analyses, the analysis work flow, and the major model components, with a focus on population modeling. Four case studies will then be presented to highlight the broad applications of pharmacometrics in drug development, from the translational stage to the early and late stages of clinical development.

2.2 Pharmacometric Analysis Approaches

2.2.1 PK Models

PK, in popular terms, is often described as what the body does to the drug. PK models characterize how a drug passes through the body by describing concentrations in various areas of the body as a function of time. In order to build mathematical models of how the concentration changes with time, the body is conveniently divided into parts, called compartments, and within each compartment the drug is assumed to behave in the same manner. Thus, each compartment is homogeneous and is described by a single representative concentration at any point in time, i.e., the well-stirred model (Pang and Rowland 1977). A compartment can be a real physiological space in the body. For example, the compartment where the concentration is usually measured is the bloodstream, which is called the central compartment. However, compartments are typically abstract concepts that do not necessarily represent particular regions of the body. Additionally, a given PK compartment may represent different tissue types depending on the characteristics of a given molecule. For example, lipophilic compounds are more likely to distribute into adipose tissue, while hydrophilic compounds are more likely to stay in the bloodstream. Therefore, the peripheral compartment (into which the compounds may distribute) may represent different tissues for these two types of compounds.

Compartments have proven to be fundamental building blocks of PK models (and PK/PD models), with the difference between models being defined by the number of compartments and the way the compartments are connected. The drug amount in a compartment can be described with rate constant parameters, which are preferentially estimated as ratios of clearance (CL) and volume (V) in data fitting. The parameterization using CL and V has the advantage that the estimated parameters carry a physiological interpretation, and it is used throughout this chapter unless otherwise indicated. Figure 2.2 presents exemplary compartmental PK systems and their associated typical concentration-time curves.

Fig. 2.2

Compartment pharmacokinetic model examples. (a) One-compartment model with IV bolus. (b) One-compartment model with first-order absorption. (c) Two-compartment model with IV bolus. (d) Two-compartment model with first-order absorption. A1 Amount of drug at the extravascular absorption site, A2 amount of drug in central compartment, A3 amount of drug in peripheral compartment, C2 concentration of drug in central compartment, CL clearance from central compartment, F bioavailability, IV intravenous, Ka first-order absorption rate constant, Q inter-compartment clearance

The one-compartment system is the most basic PK model; it includes only a central compartment, plus an absorption compartment when required for extravascular administration. The central-compartment-only model is appropriate if the drug distributes to accessible areas of the body essentially instantaneously, as in the case of a direct intravenous bolus injection or infusion into the bloodstream (Fig. 2.2a), whereas extravascular administration requires an absorption compartment in addition to the central compartment (Fig. 2.2b).

Mathematically, for a drug administered as a bolus dose, the rate of drug elimination that governs the drug concentration in the one-compartment model can be written as

$$ dA2/ dt=- CL/V2\cdot A2, $$
(2.1)
$$ C2=A2/V2, $$
(2.2)

where A2 and C2 are the amount and concentration, respectively, of the drug in the central compartment, CL is the clearance, and V2 is the volume of distribution in the central compartment.

In the case of extravascular dosing, the rate of change in the absorption compartment can usually be described with first-order kinetics, resulting in the following differential equations:

$$ dA1/ dt=- ka\cdot A1, $$
(2.3)
$$ dA2/ dt= ka\cdot A1\cdot F- CL/V2\cdot A2, $$
(2.4)

where A1 is the amount of the drug in the absorption compartment, ka is the first-order absorption rate constant, and F denotes the bioavailability, i.e., the fraction of the dose reaching the central compartment. Definitions of A2, CL, and V2 are the same as described above. The drug concentration in the central compartment, C2, can be calculated using Eq. 2.2.
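As a minimal numerical sketch of this system, the following Python code (with illustrative parameter values, not taken from the text) integrates Eqs. 2.3 and 2.4 and converts the central amount to a concentration via Eq. 2.2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions for demonstration only)
dose = 100.0   # mg, oral dose placed in the absorption compartment
F    = 0.8     # bioavailability
ka   = 1.0     # 1/h, first-order absorption rate constant
CL   = 5.0     # L/h, clearance
V2   = 50.0    # L, central volume of distribution

def one_cmt_oral(t, y):
    """Right-hand side of Eqs. 2.3-2.4; y = [A1, A2]."""
    A1, A2 = y
    dA1 = -ka * A1                      # Eq. 2.3: first-order loss from the absorption site
    dA2 = ka * A1 * F - CL / V2 * A2    # Eq. 2.4: absorption into and elimination from central
    return [dA1, dA2]

t_eval = np.linspace(0, 24, 241)        # 24-h simulation grid
sol = solve_ivp(one_cmt_oral, [0, 24], [dose, 0.0], t_eval=t_eval)

C2 = sol.y[1] / V2                      # Eq. 2.2: concentration = amount / volume
print(f"Cmax ~ {C2.max():.2f} mg/L at t ~ {t_eval[C2.argmax()]:.1f} h")
```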

For multi-compartment models, the system typically consists of (1) a central compartment, representing the bloodstream and the rapidly equilibrated organs; (2) one or more peripheral compartments, representing more slowly equilibrating tissues; and (3) in the case of extravascular administration, an absorption compartment. The processes can also be described mathematically using differential equations. For example, the system shown in Fig. 2.2c can be described as

$$ dA2/ dt=- CL/V2\cdot A2-Q/V2\cdot A2+Q/V3\cdot A3, $$
(2.5)
$$ dA3/ dt=Q/V2\cdot A2-Q/V3\cdot A3. $$
(2.6)

And the system shown in Fig. 2.2d can be described as

$$ dA1/ dt=- ka\cdot A1, $$
(2.7)
$$ dA2/ dt= ka\cdot A1\cdot F- CL/V2\cdot A2-Q/V2\cdot A2+Q/V3\cdot A3, $$
(2.8)
$$ dA3/ dt=Q/V2\cdot A2-Q/V3\cdot A3, $$
(2.9)

where A2 and A3 represent the amounts in the central compartment and the peripheral compartment, respectively, Q is the inter-compartment clearance, and V3 is the volume of distribution in the peripheral compartment. Definitions of A1, ka, and F are the same as described above. The drug concentrations in the central compartment (C2) and the peripheral compartment (C3), respectively, can be described as

$$ C2=A2/V2, $$
(2.10)
$$ C3=A3/V3. $$
(2.11)

The following three aspects should be carefully evaluated during the development of a PK compartment model:

  • Number of compartments: When deciding on the number of compartments, it should be noted that too few compartments will not fit the data well, whereas too many compartments may yield only a trivial improvement in curve fitting at the cost of poorly precise parameter estimates. Typically, a multi-compartment system appears as a piecewise linear function in a plot of the concentration on a logarithmic scale versus time (Fig. 2.2c, d), while a one-compartment system shows a single linearly declining phase (Fig. 2.2a, b).

  • Absorption kinetics: A first-order process is a valid starting assumption in most extravascular administration cases. However, a zero-order process has also been used to describe the absorption profile (e.g., controlled release), and the inclusion of a delayed absorption time or non-linearity in absorption kinetics is not uncommon. If supported by the observed data, a more sophisticated model should be considered to better capture the absorption profile (e.g., parallel or sequential zero-order plus first-order absorption processes) (Zhou 2003).

  • Elimination kinetics: Many drugs are eliminated through a first-order elimination process. However, it is not unusual for some elimination pathways to become saturated at high doses, which may be revealed by data collected across a wide dose range. This non-linear elimination is often described by Michaelis-Menten-type equations, as illustrated in Fig. 2.3 (Gabrielsson and Weiner 2007). With linear elimination kinetics, the concentration-time profiles are dose proportional following low-dose and high-dose administrations (black and red curves in Fig. 2.3a). With non-linear, capacity-limited elimination kinetics, such proportionality disappears (Fig. 2.3b); a brief simulation illustrating this is shown after Fig. 2.3.

Fig. 2.3

Linear versus non-linear elimination kinetics. Elimination of drug from the central compartment can be captured using a first-order elimination rate constant Kel in the case of linear kinetics (Panel a) or a Michaelis-Menten-type term, Vmax × C2/(Km + C2), in the case of non-linear kinetics (Panel b). (a) One-compartment model with IV bolus and linear elimination kinetics. (b) One-compartment model with IV bolus and Michaelis-Menten-type non-linear elimination kinetics. A2 Amount of drug in central compartment, C2 concentration of drug in central compartment, Km concentration at which 50% of the maximum elimination rate is achieved, Kel linear elimination rate constant, IV intravenous, Vmax maximum elimination rate
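The following minimal sketch (illustrative parameter values, not from the text) integrates a one-compartment IV bolus model with the Michaelis-Menten elimination term from Fig. 2.3b and shows the loss of dose proportionality at the higher dose.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions for demonstration only)
V2   = 50.0    # L, central volume
Vmax = 10.0    # mg/h, maximum elimination rate
Km   = 1.0     # mg/L, concentration at which the elimination rate is half of Vmax

def mm_elimination(t, y):
    """One-compartment IV bolus with Michaelis-Menten (saturable) elimination."""
    A2 = y[0]
    C2 = A2 / V2
    return [-Vmax * C2 / (Km + C2)]

t_eval = np.linspace(0, 48, 201)
for dose in (10.0, 100.0):              # low vs high IV bolus dose
    sol = solve_ivp(mm_elimination, [0, 48], [dose], t_eval=t_eval)
    C2 = sol.y[0] / V2
    # Dose-normalized concentrations would overlap for linear kinetics;
    # here they diverge, revealing the loss of dose proportionality.
    print(f"dose {dose:5.0f} mg: dose-normalized C at 12 h = {C2[50] / dose:.4f}")
```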

2.2.2 PK/PD Models

In contrast to PK, PD is described as what the drug does to the body. PK/PD models aim to link dose and PK information to measures of biomarkers or response endpoints. The wide variety of possible efficacy and safety outcomes of a clinical trial or a preclinical experiment makes it difficult to generalize all cases of PK/PD modeling with a few models. This section will focus on the most commonly used PK/PD models for continuous response variables. For discrete response variables, models often use a logistic equation to convert the response to a probability that changes with a PK exposure metric.
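As a minimal sketch of such a logistic link for a binary endpoint (the intercept and slope below are hypothetical, chosen only for illustration), the probability of response increases with an exposure metric such as AUC:

```python
import numpy as np

def response_probability(auc, intercept=-2.0, slope=0.05):
    """Logistic exposure-response for a binary endpoint.

    logit(p) = intercept + slope * AUC, i.e., p = 1 / (1 + exp(-(intercept + slope * AUC))).
    Intercept and slope are hypothetical values for illustration.
    """
    logit = intercept + slope * np.asarray(auc, dtype=float)
    return 1.0 / (1.0 + np.exp(-logit))

print(response_probability([0.0, 20.0, 40.0, 80.0]))   # probability rises with exposure
```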

2.2.2.1 Basic Concentration-Effect Relationships

The core of a PK/PD modeling exercise is to identify the appropriate concentration-effect relationship. The model should describe the observed data well while reasonably reflecting the mechanism by which the pharmacological effect is exerted at a given concentration. Table 2.1 summarizes the four commonly employed concentration-effect models: the linear, log-linear, maximum effect (Emax), and sigmoidal Emax models (Gabrielsson and Weiner 2007). A comparison of these fundamental concentration-effect relationships is shown in Fig. 2.4. A concentration-effect model is chosen depending on the type of pharmacological response, and the most physiologically relevant model should be used to characterize such a response, data permitting. It should be noted that "concentration" could be any metric describing drug exposure, such as area under the curve (AUC), peak concentration, trough concentration, etc. When the drug acts by the inhibition of an effect, Emax and the concentration at which E is 50% of Emax (EC50) may be referred to as Imax and IC50, where the effect ("E"—often by stimulation of a mechanism) terminology is transposed to inhibitory ("I"—the inhibition of a mechanism). The discussion below is applicable to both stimulatory and inhibitory cases.

Table 2.1 Basic concentration-effect relationship models
Fig. 2.4

Representative basic concentration-effect relationships. Panel (a), concentration in linear scale; Panel (b), concentration in log scale. Equations associated with the representative curves are as follows: (1) linear, E = C × 1 [slope = 1]; (2) log-linear, E = log10(C) × 5 [slope = 5]; (3) simple Emax, E = 100 × C/(C + 20); (4) sigmoidal Emax, N = 2, E = 100 × C^2/(C^2 + 20^2); and (5) sigmoidal Emax, N = 0.5, E = 100 × C^0.5/(C^0.5 + 20^0.5) [Emax = 100 and EC50 = 20 for scenarios 3–5]. E Effect, C concentration, N Hill's coefficient
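A short sketch of these relationships (using the Emax = 100, EC50 = 20 parameterization from the Fig. 2.4 legend) is given below; the Hill coefficient n controls the steepness, with n = 1 reducing to the simple Emax model.

```python
import numpy as np

def emax_model(conc, emax=100.0, ec50=20.0, n=1.0):
    """Sigmoidal Emax model; n = 1 gives the simple Emax model."""
    conc = np.asarray(conc, dtype=float)
    return emax * conc**n / (conc**n + ec50**n)

conc = np.array([1.0, 5.0, 20.0, 80.0, 320.0])
print(emax_model(conc))             # simple Emax (scenario 3 in Fig. 2.4)
print(emax_model(conc, n=2.0))      # steeper sigmoidal Emax (scenario 4)
print(emax_model(conc, n=0.5))      # shallower sigmoidal Emax (scenario 5)
```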

The concentration-effect models shown in Table 2.1 and Fig. 2.4 assume the drug effect is zero when the drug concentration is zero. However, a more common scenario is that the effect has a baseline (e.g., pre-dose endogenous biomarker value). In this case, the observed effect during drug treatment would be a combination of drug effect (Edrug) and the baseline (Ebase). To mathematically describe this, two relationships are commonly employed—additive or proportional:

$$ E={E}_{base}+{E}_{drug}, $$
(2.12)
$$ E={E}_{base}\cdot \left(1+{E}_{drug}\right). $$
(2.13)

Both relationships should be considered during model building unless there is an established mechanistic understanding or certain prior information is available. The additive relationship (Eq. 2.12) yields the same magnitude of effect change at a given concentration regardless of the baseline measurement, whereas the proportional relationship (Eq. 2.13) implies that a subject with a higher baseline will show a larger effect change.

2.2.2.2 Direct Effect

The concentration in the central blood compartment may be directly linked with the pharmacological effect in PK/PD modeling when the apparent biophase equilibrium is sufficiently rapid relative to the drug distribution (Fig. 2.5a) and can be described using the representative modeling scheme in Fig. 2.6a.

Fig. 2.5

Direct versus indirect concentration-effect schemes. (a) Direct—instantaneous. (b) Indirect—delayed. Upper panel, blood concentration (C) and effect versus time of a direct effect system (upper left) and a system with delayed effect (upper right); lower panel, effect versus concentration (C) for the direct effect system (lower left) and delayed effect system (lower right). Time labels t1–t6 and the arrows in the lower right plot indicate the time sequence of the response

Fig. 2.6

Representative pharmacokinetic/pharmacodynamic model scheme. Drug effect is described using a simple Emax model. Other basic concentration-effect models may be used when appropriate. (a) Direct response model. (b) Link/effect compartment model. (c) Indirect effect/turnover model. C Systemic concentration, Ce concentration in effect compartment, CL drug clearance, Emax maximum achievable response, EC50 concentration at which response is 50% of Emax, Ke0 first-order effect compartment rate constant, Kin zero-order rate constant for the production of the response, Kout first-order rate constant for the loss of the response

2.2.2.3 Link/Effect Compartment Model

In many situations there is a time delay between the drug concentration in blood and the drug effect. As such, the observed response is not apparently related to the blood concentration, and a hysteresis loop is present in the plot of response versus drug concentration in the blood (Fig. 2.5b).

This temporal displacement may be due to the time needed to reach the target site (i.e., drug tissue distribution), the time for the pharmacology to become available (e.g., active metabolite formation), or the time to develop the pharmacology (e.g., a slow ligand-receptor on/off time course or a cascade). In the case of a distributional delay, it can be handled numerically by introducing a hypothetical effect compartment with a target site concentration representing the concentration at the site of action (Ce), which is assumed to drive the drug effect (Fig. 2.6b) (Sheiner 1997). The effect compartment (biophase) model was the first model structure used to account for time delays in drug effects. Historically, it has often been applied inappropriately without recognizing that various other processes may contribute to such delays.

A first-order rate constant (ke0) is typically used to describe the delay between the systemic concentration (Cp) and the target site concentration (Ce). In order to avoid an identifiability problem due to the unknown volume of the hypothetical effect compartment, the same ke0 value is used as the elimination rate constant for the effect compartment. A smaller ke0 value means the effect compartment equilibrates more slowly and that there is a greater time lag in drug distribution to the effect compartment (hence "delayed effect"). The effect compartment model can be described as

$$ \frac{d{C}_e}{dt}={k}_{e0}\cdot {C}_p-{k}_{e0}\cdot {C}_e. $$
(2.14)
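A minimal sketch of this model (illustrative, assumed parameter values; a one-compartment IV bolus PK model drives the biophase of Eq. 2.14, and a simple Emax model converts Ce to an effect) shows how the effect peaks later than the plasma concentration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions for demonstration only)
dose, V, CL = 100.0, 50.0, 5.0    # mg, L, L/h: one-compartment IV bolus PK
ke0 = 0.2                         # 1/h, effect-compartment rate constant (Eq. 2.14)
emax, ec50 = 100.0, 0.5           # Emax model driven by Ce

def pk_biophase(t, y):
    """y = [A, Ce]: central drug amount and effect-compartment concentration."""
    A, Ce = y
    Cp = A / V
    dA  = -CL / V * A             # linear elimination from the central compartment
    dCe = ke0 * Cp - ke0 * Ce     # Eq. 2.14: biophase equilibration
    return [dA, dCe]

t_eval = np.linspace(0, 24, 97)
sol = solve_ivp(pk_biophase, [0, 24], [dose, 0.0], t_eval=t_eval)
Cp, Ce = sol.y[0] / V, sol.y[1]
effect = emax * Ce / (ec50 + Ce)

# Ce lags behind Cp, so the effect peaks after the concentration does,
# producing the hysteresis loop sketched in Fig. 2.5b.
print(f"tmax of Cp: {t_eval[Cp.argmax()]:.1f} h; tmax of effect: {t_eval[effect.argmax()]:.1f} h")
```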

2.2.2.4 Indirect Response/Turnover Model

When the time delay between drug concentration and effect is due to the time required to trigger the pharmacological response(s) rather than a distributional delay, the indirect response model is frequently used (Fig. 2.6c). The term turnover model is sometimes used interchangeably, although it more often describes the homeostasis-maintaining biochemical responses in the body in the absence of drug intervention. Indirect responses occur by perturbing this homeostasis (i.e., the inhibition or stimulation of the production or removal of endogenous mediators controlling the measured responses). In these cases, a mechanistic delay occurs even after the drug reaches the site of action.

The basic premise of the indirect models is that the observed PD effect is governed by a net balance between the rate of production and the rate of removal of an endogenous mediator that is perturbed by the drug:

$$ \frac{dR}{dt}={k}_{in}-{k}_{out}\cdot R, $$
(2.15)

where kin represents the apparent zero-order rate constant for the production of the response, kout defines the first-order rate constant for the loss of the response, and R is the response variable, which is assumed to be stationary with an initial value of R0 (kin/kout). In data fitting with the indirect response model, R is often assumed to be directly proportional to the unmeasured endogenous mediator quantity. A smaller kout value results in a longer turnover time (1/kout), i.e., a longer time for a new steady state to be established after the turnover system is perturbed.

A drug can affect the net turnover system by the four mechanisms shown in Table 2.2 (Dayneka et al. 1993; Sharma and Jusko 1998). If production rate (kin) increases or removal rate (kout) decreases, then the value of the response variable increases, and vice versa.

Table 2.2 Overview of indirect response/turnover models
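A minimal sketch of one of these mechanisms (inhibition of kin, with illustrative, assumed parameter values and a one-compartment IV bolus PK model) is given below; the response falls from its baseline and returns as the drug washes out.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions for demonstration only)
dose, V, CL = 100.0, 50.0, 5.0    # one-compartment IV bolus PK
kin, kout   = 10.0, 0.1           # response production (unit/h) and loss (1/h)
imax, ic50  = 0.9, 0.5            # drug inhibits the production rate kin
R0 = kin / kout                   # baseline response at steady state

def pk_idr(t, y):
    """y = [A, R]: drug amount in the central compartment and response variable."""
    A, R = y
    C = A / V
    inhibition = 1.0 - imax * C / (ic50 + C)   # inhibition of production
    dA = -CL / V * A
    dR = kin * inhibition - kout * R           # Eq. 2.15 perturbed by the drug
    return [dA, dR]

t_eval = np.linspace(0, 72, 289)
sol = solve_ivp(pk_idr, [0, 72], [dose, R0], t_eval=t_eval)
R = sol.y[1]
print(f"baseline R0 = {R0:.1f}; nadir = {R.min():.1f} at t = {t_eval[R.argmin()]:.1f} h")
```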

Both the effect compartment model and the indirect response model may fit a given dataset well, but it is highly recommended that model selection rely on a mechanistic understanding of whether the time delay is due more to biophase equilibration (effect compartment model) or to the need to develop a response (indirect response model). These processes might be concurrent, but rarely is there enough information in the observed data to support a combination of the two. Several other PK/PD models for continuous responses have been developed to better describe the data observed in certain scenarios, such as transit models, tolerance and rebound models, and so forth. Readers can refer to the review by Csajka and Verotta (2006) for more information.

2.2.3 Emerging Approaches

Over the last several years, great progress has been made in pharmacometric analysis. New methodologies have been applied in drug development, such as physiologically based pharmacokinetic (PBPK) modeling, quantitative systems pharmacology (QSP), and model-based meta-analysis (MBMA).

2.2.3.1 Physiologically Based PK Modeling

The concept of physiologically based pharmacokinetic (PBPK) modeling is not new, as reviewed in Sect. 2.1.2. However, until recently, the application of PBPK models in drug development had been limited, mainly due to the computational complexity of the models and the necessity to collect various physiological or pharmacological data as inputs for the models. Currently, PBPK modeling is used throughout drug discovery and development as a tool for assessing drug-drug interactions, designing appropriate formulations, and predicting PK in special populations such as pediatrics. The increased application of PBPK modeling has been facilitated mainly by (1) the increased understanding of systems biology, pharmacology, genetics, and genomics, (2) the advancement of computation technology, and (3) the availability of commercial platforms for PBPK modeling, such as the Simcyp simulator (Certara, Sheffield, UK), GastroPlus (Simulations Plus, California, USA), PK-Sim (Bayer Technology Services, Leverkusen, Germany), and so forth.

Compared to data-driven empirical top-down modeling, a PBPK model strives to be mechanistic by mathematically transcribing anatomical, physiological, physical, and chemical descriptions of the complex processes governing the fate of the drug in the body (i.e., bottom-up). It consists of multiple compartments corresponding to different tissues in the body, connected by the circulating blood system, where each compartment is defined by a volume (or weight) and a blood flow specific to a given tissue. The parameters that describe the body system are referred to as system-related input parameters. Typically, compartments in PBPK models comprise the major tissues of the body such as adipose, muscle, skin, brain, gut, heart, liver, kidney, lung, spleen, etc. However, sometimes a reduced model is developed, by grouping organs or tissues with similar blood perfusion rates and lipid content, in order to reduce the overall complexity of the model.

To simulate the fate of a drug substance in the body, additional drug-specific input parameters are required, e.g., physicochemical characteristics determining drug permeability through membranes, partitioning into tissues, binding to plasma proteins, and affinities to certain metabolizing enzymes or transporters. Combined with study design input parameters (e.g., dose and dosage regimen, food intake, etc.), a PBPK model is expected to provide more mechanistic insight into the various factors influencing PK (e.g., non-linearity) and thus a better understanding of the behavior of a drug molecule in situations or populations that have not yet been investigated clinically. Its utility becomes more significant in cases where it is difficult or unethical to conduct clinical trials, such as trials in neonates or subjects with renal failure.

2.2.3.2 Quantitative Systems Pharmacology

Quantitative systems pharmacology, covering both the efficacy and safety of a drug, has been introduced recently as a novel approach to link drug responses to physiological processes operating at the molecular, cellular, tissue, and organ levels. Compared to traditional PK/PD models, which focus on confidence in the pharmacologic agent, QSP emphasizes confidence in the target (Vicini and van der Graaf 2013). QSP models are hybrid, multi-scale structures reflecting an understanding of the fundamental mechanistic relationships of endogenous biomarkers affected by a pharmacologic agent and focusing on the dynamic interplay among the constituents of a system that manifests as emergent responses. As a result, QSP models allow for the exploration of questions in the absence of direct information because the models (1) are based on the best known understanding of physiology/pharmacology, (2) incorporate assumptions about drug and disease mechanisms, and (3) address drug action in the context of disease. For example, QSP models can be used to predict exposure-response or dose-response relationships for hypothesis generation. They can also be used to predict clinical outcomes with a new regimen and/or in a new population/indication. Moreover, they can be used to test hypotheses when unexpected biological/pharmacological results are observed (Crawford 2016).

Although conceptually appealing due to its realistic representation of the body, the actual development and implementation of QSP models is often viewed as a practically difficult task. In particular, due to its complexity, QSP model development may not easily match the timelines needed to address questions in fast-paced drug development. QSP models used to be developed in academia over decades, typically with a single lab focusing on a specific pathway of interest. To be of value in drug development, models must be developed rapidly and efficiently, because the time scale is measured in weeks or months, not years. It is, therefore, suggested to start with a fit-for-purpose, smaller model and then to build on it over time. This avoids a long developmental period requiring substantial effort with limited immediate impact, and the long-term value of investing in QSP models is increasingly appreciated in the pharmaceutical industry. It is considered that QSP models, once developed, can be rapidly extended within their physiological and pharmacological fields to provide inferences on questions that empirical models have limited capability to answer, such as population selection, biomarker response explanation, off-target effect prediction, combination therapy optimization, and so forth.

2.2.3.3 Model-Based Meta-analysis

Model-based meta-analysis (MBMA) has been recognized as an innovative data enrichment strategy to make efficient use of internal and external data sources, resulting in increased knowledge and more precise decision-making in drug development (Mandema et al. 2011). This strategy involves a systematic search and tabulation of summarized results from public and confidential clinical studies, followed by a regression analysis that may attribute variability in the study results to the differences in the study population or trial conduct. It provides a quantitative framework that leverages valuable existing data into the decision-making process for a drug candidate—e.g., a quantitative understanding of how a new compound will perform relative to existing standard of care and/or compounds in development.

To better understand the utility of MBMA, it is worth mentioning its key assumptions. First, it assumes that all drugs with a similar mechanism of action share a similar onset of action and a common maximum effect. Second, true (relative) treatment effects are considered to be "exchangeable" across studies, i.e., given the same treatments, each study would estimate the same, exchangeable treatment effects. Third, heterogeneity due to, for example, random unexplained study-to-study differences can be accounted for by study-specific random effects. Inadequate fulfillment of these assumptions would lead to bias during result interpretation.
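A minimal, purely illustrative sketch of the kind of data structure these assumptions imply (study-level mean responses sharing a common Emax, with a study-specific random effect capturing heterogeneity; all values are assumed) is given below:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative values only: a common maximum effect shared within the drug class,
# a drug-specific potency (ED50), and study-to-study heterogeneity.
emax_common, ed50 = 40.0, 10.0
omega_study = 2.0                          # SD of the study-specific random effect
doses = np.array([5.0, 10.0, 20.0, 40.0])

for study in range(6):
    eta_study = rng.normal(0.0, omega_study)           # unexplained study-level deviation
    mean_response = emax_common * doses / (ed50 + doses) + eta_study
    print(f"study {study}: " + ", ".join(f"{r:.1f}" for r in mean_response))
```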

Like any meta-analysis, MBMA requires a careful review of literature and an assessment of potential publication bias. Readers can refer to the standard procedures described in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (http://www.prismastatement.org).

2.3 Pharmacometric Analysis Methodology

This section describes modeling aspects related to non-linear mixed effect population modeling, one of the most widely applied analysis tools in drug development. The book by Bonate (2011) details the methodology.

2.3.1 Software

From a historical perspective, it is noticeable that the development of personal computers together with software combining differential equation solvers with non-linear minimization routines accelerated the progress made in pharmacometrics. NONMEM (Beal and Sheiner 1984) was the first software available for population modeling. After its first release in the early 1980s, continuous improvements were implemented with the advancement of statistical and estimation techniques. There has been a series of updates (http://www.iconplc.com/innovation/nonmem/). At the same time, several other pharmacostatistical programs have been developed to handle NLME PK/PD modeling such as WinBUGS (http://winbugs-development.mrc-bsu.cam.ac.uk/), Monolix (http://lixoft.com/products/monolix/), Phoenix NLME (https://www.certara.com/software/pkpd-modeling-and-simulation/phoenix-nlme/), nlme and nlmixr packages in R (https://www.r-project.org/), proc nlmixed in SAS (https://www.sas.com/), and many more. It is important that the software used for population analysis should be adequately validated and maintained. User-written model codes, subroutines, and scripts are usually provided for review as part of the regulatory submission.

2.3.2 Pharmacometric Analysis Work Flow

A schematic work flow for the development of a population PK or PK/PD model is provided in Fig. 2.7. The key associated activities are briefly described below.

Fig. 2.7

Schematic work flow for population model development

  • Data handling: Before any modeling analysis can take place, some form of data handling will be required to assemble the data into the appropriate format. It is advised to obtain access to data as early as possible so that data quality can be assured before analysis. This is extremely helpful for time-sensitive analyses.

  • Exploratory analyses: Analysis is initiated with various exploratory analyses to better understand the data and potential difficulties associated with the modeling. Also, some aspects of data handling (e.g., imputation of missing data) may depend on the results of exploratory analyses. Both graphical and numerical methods (such as tabulation) may be applied.

  • Structural model: The first modeling step pertains to structural model development (see Sect. 2.3.3.1), which seeks the optimal structural form for the fixed effects. The choice of structural model will be largely driven by prior knowledge and the results of the exploratory analyses. Random effects need to be included in order to allow refinement of the structural model.

  • Stochastic random effect model: The next step is random effect model development (see Sect. 2.3.3.2). Both between-subject and within-subject random effect models are optimized in this step. In order to precisely estimate the fixed effects in a model, the random effects have to be properly accounted for.

  • Covariate model: The base model developed from the above procedures is then used for covariate model development, if applicable. After selection of the final covariate model, the appropriateness of the random effect model should be reassessed. For example, a parameter describing between-subject variability should be reduced upon inclusion of a covariate effect. The resulting model is then considered as the final model.

  • Model evaluation and qualification: The final model should be subjected to purpose-driven, model qualification. Model qualification should provide a sufficiently complete characterization of the model’s behavior to better understand its potential limitations in future use.

  • Simulation: In many instances the last component of a pharmacometric analysis is simulation, for example, to predict the responses to treatment regimens that have not yet been tested. Variability and parameter uncertainty should be carefully considered depending on the purpose of the simulation (see Sect. 2.3.5).

  • Reporting and communication: Appropriate reporting and communication of the modeling results is essential for any pharmacometric analysis in support of drug development. This is of particular importance for analyses intended for regulatory review.

2.3.3 Model Development

Population models comprise several components: the structural model, the statistical model, and the covariate model (Mould and Upton 2012; Mould and Upton 2013; Upton and Mould 2014). The development of each component is discussed below.

2.3.3.1 Structural Model

The structural model describes the relationships between the PK and PD variables for a typical individual. It can be represented as algebraic functions or differential equations.

For example, the simple one-compartment PK model for a drug administered as an IV bolus dose can be described using the algebraic function below:

$$ C(t)=\frac{Dose}{V}\cdot {e}^{-\frac{CL}{V}\cdot t} $$
(2.16)

where C(t) is the systemic concentration at time t. Dose, CL (clearance), and V (volume of distribution in the central compartment) are the parameters defining the relationship between C(t) and time. Alternatively, this one-compartment PK model can be written as a differential equation, as already discussed in Sect. 2.2.1 (Eqs. 2.1 and 2.2).

Compared to an algebraic function, which states the relationship between a dependent variable (i.e., systemic concentration) and an independent variable (i.e., time) explicitly, a differential equation describes the rate of change of a variable, e.g., the rate of change in concentration with respect to time. Many complex pharmacometric systems cannot be stated as algebraic functions and are instead written as systems of differential equations using a set of typical parameter estimates (i.e., fixed effects) to quantify the mass transfer between compartments.
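For the simple case above, the two formulations are interchangeable; the following minimal sketch (illustrative, assumed parameter values) verifies that the algebraic solution of Eq. 2.16 matches a numerical integration of Eq. 2.1 to within solver tolerance:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions for demonstration only)
dose, CL, V = 100.0, 5.0, 50.0    # mg, L/h, L

t = np.linspace(0, 24, 49)

# Algebraic form (Eq. 2.16): explicit concentration-time function
c_algebraic = dose / V * np.exp(-CL / V * t)

# Differential-equation form (Eq. 2.1), integrated numerically
sol = solve_ivp(lambda tt, A: [-CL / V * A[0]], [0, 24], [dose], t_eval=t, rtol=1e-8)
c_ode = sol.y[0] / V

print(f"max |difference| = {np.abs(c_algebraic - c_ode).max():.2e}")  # agreement within tolerance
```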

Identification of an appropriate structural model is the first step in population model development. It is usually guided by the exploratory analyses to determine the key characteristics of the structure model, for example:

Pharmacokinetics

  • Is there an absorption lag time?

  • Does the absorption follow a first-order or zero-order process, or some combination thereof?

  • How many apparent distribution/elimination phases are there?

  • Are there linear or non-linear PK processes (in the absorption or elimination phases) with respect to dose and/or over time?

Pharmacodynamics

  • Is there a time delay between drug exposure and response?

  • Do the data follow a linear or non-linear PK/PD relationship?

  • Is there an endogenous baseline component?

  • Is there a placebo response over time?

  • Is there a diurnal variation (circadian rhythm)?

Only plausible models should be tested. For instance, if the concentration versus time plot shows two distinct distribution/elimination phases, then a one-compartment model that describes a monophasic decreasing profile is not appropriate. For the mechanistic or semi-mechanistic PK/PD models, all tested models should be in agreement with the mechanisms of action of the drug.

Once a structural model provides an adequate goodness-of-fit, it is recommended to evaluate additional models with reduced complexity to avoid over-parameterization. If the omission of complexity does not result in a significant deterioration of model performance, then the reduced model should be chosen as the final model according to the principle of parsimony.

2.3.3.2 Statistical Model

The statistical models describe the random effects, i.e., the variability around the structural model parameters. There are two primary sources of variability in population modeling: between-subject variability (BSV), which is the variance of a parameter across individuals, and residual unexplained variability (RUV), which is the variability remaining after other sources of variability have been accounted for and includes within-subject variability (WSV), measurement error, etc. In some situations, between-occasion variability (BOV) is also employed to describe variability among multiple measurement periods within a single subject. Population models usually have fixed effects as well as random effect parameters and are therefore called mixed effect models.

Figure 2.8 depicts the concepts associated with the different levels of variability. Consider a drug administered as a single bolus dose: its typical or population mean concentration-time profile can be described using Eq. 2.16 (blue solid line in Fig. 2.8). However, the PK parameters of this model, such as CL and V, are not constant across all individuals, i.e., each individual subject has his/her own CL and V values, as described by Eq. 2.17:

$$ {P}_i={\theta}_{pop}\cdot {e}^{\eta_i}, $$
(2.17)

where P is the parameter of interest, such as CL or V, i denotes the ith subject, θpop is the estimate of the population mean parameter value, and ηi is the deviation from the population mean for the ith subject under the assumption η ~ N(0, ω²). A log-normal form is used in Eq. 2.17 to describe the distribution of individual parameter values because CL and V, like most PK parameters, must be positive and are often right-skewed. An individual concentration-time profile (red dashed line in Fig. 2.8) is described with subject-specific values of CLi and Vi using the same one-compartment structural model (Eq. 2.16).

Fig. 2.8

Source of random effects. BSV Between-subject variability, RUV residual unexplained variability, ηi deviation from the population mean for the ith subject, εij residual errors in the jth observed concentration for the ith subject

Concentration measurements are affected by errors such as assay error, dosing history error, sampling time error, etc. As such, population models include RUV, which reflects the difference between the observed data and the model predictions, to capture these errors as well as WSV, model misspecification-related errors, and other unexplained variability. Such residual variability for drug concentrations is commonly assumed to follow a normal distribution and can be described by

$$ {Y}_{ij}={C}_{ij}\ast \left(1+{\varepsilon}_{ij}\right), $$
(2.18)

where Yij is the jth observed concentration for the ith subject, Cij is the corresponding model-predicted concentration, and εij is the residual error under the assumption ε ~ N(0, σ²).

In addition to the proportional residual variability model shown in Eq. 2.18, additive or combined additive-plus-proportional residual variability models are also often used to describe RUV.
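The following minimal sketch (illustrative, assumed parameter values) simulates individual concentration profiles by combining the structural model (Eq. 2.16), log-normal between-subject variability (Eq. 2.17), and proportional residual error (Eq. 2.18):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative population parameters (assumptions for demonstration only)
dose, CL_pop, V_pop = 100.0, 5.0, 50.0    # mg, L/h, L
omega_cl, omega_v   = 0.3, 0.2            # SD of eta (BSV) on the log scale
sigma_prop          = 0.15                # SD of the proportional residual error

n_subjects = 100
t = np.linspace(0.5, 24, 48)

# Eq. 2.17: individual parameters as log-normal deviations from the population mean
CL_i = CL_pop * np.exp(rng.normal(0.0, omega_cl, n_subjects))
V_i  = V_pop  * np.exp(rng.normal(0.0, omega_v,  n_subjects))

# Individual predictions from the structural model (Eq. 2.16), one row per subject
C_ij = dose / V_i[:, None] * np.exp(-CL_i[:, None] / V_i[:, None] * t)

# Eq. 2.18: proportional residual error applied to each observation
eps  = rng.normal(0.0, sigma_prop, C_ij.shape)
Y_ij = C_ij * (1.0 + eps)

print(f"CL range across subjects: {CL_i.min():.1f}-{CL_i.max():.1f} L/h")
```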

Parameterization of the statistical model is usually based on the range of the data being evaluated and the properties of the variables being analyzed. For example, an exponential BSV model is often used when estimating the concentration that achieves half of the maximum effect (EC50) in order to avoid a negative value, which is not plausible for a concentration measure. In contrast, the BSV of some baseline clinical outcomes may be captured using an additive BSV model to ensure a normal distribution. Developing an appropriate statistical model is important for covariate evaluations (see Sect. 2.3.3.3) because covariate identification may be confounded by the BSV and RUV parameterizations used in the model. Moreover, the results of simulation (see Sect. 2.3.5) depend highly on the choice of the statistical model. It is recommended that statistical models be evaluated early during structural model development and monitored for adequacy throughout model development via the multiple diagnostic tools presented in Sect. 2.3.4.

2.3.3.3 Covariate Model

Covariates are defined as individual subject-specific factors, including demographics (such as weight, age, and gender), baseline disease characteristics, laboratory measures (such as hepatic enzyme levels), co-medications, comorbidities, and lifestyle factors (such as smoking), that might affect the PK and/or PD characteristics of a drug. The identification of covariates that explain variability is an important objective in population modeling analysis. During drug development, questions such as "How much does drug exposure vary with covariates such as body weight?" can be answered through a dedicated clinical trial by enrolling subjects who have low and high body weights. However, such information can also be opportunistically assessed through the population modeling analysis of studies that were not specifically planned to answer such a question. Population modeling allows for the investigation of relationships between covariates (e.g., weight) and parameters of interest (e.g., clearance) as fixed effects. BSV estimates in models including covariates are expected to decrease compared with estimates from the base model.

Figure 2.9 outlines the common approach used for covariate model development. In general, good structural and statistical error models are prerequisites for covariate analysis; when a covariate is known to have a large influence on the model parameters, as seen with body weight in pediatric subjects, this covariate can be included in the structural model (a structural covariate). Scientific and clinical relevance should guide the selection of potential covariates to be tested. Because run times can sometimes be extensive with multiple covariates and multiple base model parameters of interest, it is often necessary to limit the number of covariates evaluated in the model. Screening of covariate correlations is typically conducted to reduce the number of covariates to be tested so that covariate effects can be reliably estimated from the analysis dataset.

Fig. 2.9

Covariate model development

Several well-recognized covariate search methods (automated or manual) are available (Hutmacher and Kowalski 2015). All covariates can be tested stepwise (one at a time; forward selection and/or backward elimination) on all model parameters. Alternatively, covariates can be tested on selected parameters based on physiological plausibility, e.g., creatinine clearance on clearance (one of the most important parameters for exposure determination) for a renally excreted drug, or body weight on clearance and volume for pediatric subjects based on the principles of allometric scaling. The inclusion or exclusion of a covariate is usually determined via the likelihood ratio test with a pre-specified significance level (e.g., forward addition at the 5% significance level and backward elimination at the 1% level), as sketched below. Moreover, the addition or deletion of a covariate should be based on clinical and pharmacological experience and overall model goodness-of-fit. Models built via stepwise procedures can suffer from selection bias due to the order in which covariates are evaluated, and thus other covariate search approaches may be considered, e.g., a full model with all potential covariates incorporated followed by backward elimination.
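For reference, the change in objective function value (ΔOFV, which is asymptotically chi-square distributed) required to declare one additional covariate parameter significant at these levels can be computed as follows:

```python
from scipy.stats import chi2

# Likelihood ratio test thresholds for adding/removing one parameter (df = 1)
for alpha in (0.05, 0.01):
    print(f"alpha = {alpha}: dOFV > {chi2.ppf(1 - alpha, df=1):.2f}")
# Prints 3.84 for the 5% level and 6.63 for the 1% level.
```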

A covariate can be included in the model in several ways, e.g., multiplicatively as a power model (continuous covariates; Eq. 2.19) or as a conditional effect relative to groups (categorical covariates; Eq. 2.20). Other covariate parameterizations can also be applied such as exponential and linear functions when appropriate.

$$ \mathrm{Continuous}:\kern0.5em {P}_j={\theta}_0\cdot {\left(\frac{X_{ij}}{M\left({X}_j\right)}\right)}^{\theta_j}, $$
(2.19)
$$ \mathrm{Categorical}:{P}_j={\theta}_0\cdot {\theta_i}^{X_{ij}}. $$
(2.20)

where Pj is the jth population parameter estimate, Xij is the covariate value of subject i for the parameter Pj, M(Xj) is the median (or a standard reference value) of covariate X for the population, θ0 is the typical value of the parameter Pj, θj is the exponent quantifying the effect of the continuous covariate, and θi is a constant that reflects the effect of the categorical covariate on the parameter.
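A minimal sketch of these two parameterizations (with assumed, illustrative values such as an allometric exponent of 0.75 on clearance) is shown below:

```python
import numpy as np

def power_covariate(theta0, x, x_ref, exponent):
    """Continuous covariate as a power model (Eq. 2.19)."""
    return theta0 * (np.asarray(x, dtype=float) / x_ref) ** exponent

def categorical_covariate(theta0, theta_i, x):
    """Categorical covariate as a conditional multiplier (Eq. 2.20); x is coded 0/1."""
    return theta0 * theta_i ** np.asarray(x)

# Illustrative use: allometric body weight effect on clearance (exponent 0.75 assumed)
weights = np.array([10.0, 35.0, 70.0, 100.0])           # kg
print(power_covariate(theta0=5.0, x=weights, x_ref=70.0, exponent=0.75))

# Illustrative use: 20% lower clearance in a covariate-positive group (e.g., a co-medication)
print(categorical_covariate(theta0=5.0, theta_i=0.8, x=np.array([0, 1])))
```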

2.3.4 Model Evaluation and Qualification

The objective of model evaluation and qualification is to examine whether the model is a good description of the dataset in terms of its behavior and the proposed application (fit-for-purpose). Model selection in general should be based on a combination of model performance, mechanistic plausibility, and clinical relevance (FDA 1999). For model performance, three basic sets of criteria are used: model stability (e.g., bootstrap, condition number, etc.), predictive performance (e.g., visual predictive check [VPC]), and model sensitivity (e.g., influence of outliers, key assumptions, etc.).

Many diagnostic tools have been developed for model selection, and further details can be found in the review by Karlsson and Savic (2007). Some commonly used tools include statistical significance criteria (e.g., change in objective function value, Akaike information criterion, etc.), precision of parameter estimates, diagnostic plots, and so forth. A VPC, stratified by the significant covariates included in the final model, is usually recommended to ensure that the model performs adequately across various subgroups, including different dose groups, disease populations, age and weight groups, etc. Alternatively, a prediction-corrected VPC (pcVPC) can be generated to check model performance. Ideally, the final model should satisfy the following criteria:

  • Provides a good description of the observed data

  • Has no apparent trend in the relevant goodness-of-fit diagnostics

  • Is consistent with existing clinical/pharmacological/physiological knowledge

  • Has a successful termination of the covariance step and/or is adequately stable

The parsimony principle should be applied to guide model selection whenever applicable. There may be reasons to select a more complex model instead of a parsimonious one despite a lack of statistical significance. For instance, although mechanistic models require a high level of complexity, they take into account physiological processes and/or a mechanism of action and thus yield more plausible simulation results.

2.3.5 Simulation

The quality of the simulation depends on the quality of the model in addition to prior information on drug attributes and the system. Simulations are most often used to explore the implications of a population model under differing assumptions and/or circumstances compared to those used for model building—i.e., beyond the scope of existing data. In these instances, it is important to define and disclose all assumptions for model building and simulation and to perform a sensitivity check to investigate the impact of the assumptions. Simulation results should be interpreted with a clear understanding of the limitations and assumptions inherent in the model.

2.3.5.1 Stochastic Simulation

Stochastic simulations using a mixed effect (fixed effects and random effects) population model are more complex than non-stochastic simulations using a model with fixed effects only. Random effect parameters describe variability in the population of interest and must be accounted for in simulations when understanding population variability is important for decision-making, e.g., designing future studies. This is implemented using a random number generator (available in most modeling software) to sample parameter values from a distribution. Simulations are repeatedly executed so that the distributions of the simulated output can be summarized (e.g., median and 5th and 95th percentiles). A common “rule of thumb” is that at least 200 simulation replicates are needed when summarizing data as mean values and at least 1000 to include their associated confidence intervals (Mould and Upton 2012).
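A minimal sketch of such a stochastic simulation (illustrative, assumed parameter values; between-subject variability on clearance only, for brevity) replicates a virtual population and summarizes the replicate medians with percentiles:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative final-model estimates (assumptions for demonstration only)
dose, CL_pop, V_pop, omega_cl = 100.0, 5.0, 50.0, 0.3
n_replicates, n_subjects = 200, 50        # >= 200 replicates when summarizing central tendencies
t = np.linspace(0, 24, 25)

medians = np.empty((n_replicates, t.size))
for r in range(n_replicates):
    # Sample a new virtual population for each replicate (BSV on CL only)
    CL_i = CL_pop * np.exp(rng.normal(0.0, omega_cl, n_subjects))
    C = dose / V_pop * np.exp(-CL_i[:, None] / V_pop * t)
    medians[r] = np.median(C, axis=0)     # summarize each replicate

# Summarize across replicates: median and 5th/95th percentiles of the replicate medians
p5, p50, p95 = np.percentile(medians, [5, 50, 95], axis=0)
print(f"median concentration at 12 h: {p50[12]:.2f} mg/L (90% interval {p5[12]:.2f}-{p95[12]:.2f})")
```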

2.3.5.2 Parameter Uncertainty

The precision of the parameter estimates is limited by current knowledge and the input dataset. Parameter uncertainty is crucial to consider when running simulations. Large uncertainty generally indicates low model robustness and thus low confidence in the model projections. If parameter uncertainty is excluded and only between-subject variability is included, the simulation results will underestimate the overall range of the model predictions.

2.3.5.3 Simulation Approach Selection

Questions to consider in the determination of a simulation approach include:

  • For model evaluation or inference?

  • For interpolation (within the boundary of the original dataset) or extrapolation (outside of the boundary of the original dataset)?

  • Interested in population mean or individual responses with associated variability?

  • Time constraint for simulation analysis?

Depending on the objective of a simulation study, variability and uncertainty of parameter estimates may or may not be included, and Table 2.3 summarizes common simulation scenarios with their respective application conditions.

Table 2.3 Simulation scenarios

2.4 Case Studies

Modeling and simulation tools have been widely applied to address critical questions in drug development in an efficient way. This section highlights such applications via four case studies where modeling and simulations have been used:

  1. To identify a translational PK target to guide early clinical development (via MBMA)

  2. To optimize dosing regimens for an efficacious and safe dose-finding trial design (via population PK/PD modeling)

  3. To support regulatory approval and to alleviate the need for an additional clinical trial (via population PK/PD modeling)

  4. To guide early pediatric formulation development by determining the doses to be investigated in pediatrics (via PBPK)

In each example, the critical question to be answered will be introduced, followed by a pharmacometric analysis work flow, key outcomes, and the impacts on decision-making.

2.4.1 Translational Development

To guide the development of a novel anti-HIV-1 (human immunodeficiency virus type 1) agent, particularly its early clinical development, it is critical to have a good understanding of the target drug exposure that must be achieved clinically to ensure robust long-term viral suppression (i.e., the PK target). Using an MBMA approach, class-specific exposure-response models were developed for anti-HIV drugs of the non-nucleoside reverse transcriptase inhibitor (NNRTI) and integrase strand transfer inhibitor (INSTI) classes. These models linked viral load kinetics to potency-normalized steady-state trough plasma concentrations by retrospectively analyzing pooled internal/external data from early-phase short-term monotherapy trials (phase 1b) and in vitro potency measurements (Xu et al. 2016). Given the apparent existence of a common exposure-response relationship within a given drug class, this modeling approach was considered reasonable. Simulations were then conducted using these models to predict viral load inhibition following doses with reported long-term efficacy in the literature. This led to the determination of steady-state trough concentrations of 6.2- and 2.2-fold above the in vitro potency as the PK targets for NNRTIs and INSTIs, respectively (Fig. 2.10).

Fig. 2.10
figure 10

Identification of pharmacokinetic targets for HIV-1 antiretrovirals by characterizing class-specific exposure-response via model-based meta-analysis (Modified from Xu et al. 2016). ER exposure-response, PK pharmacokinetics, MBMA model-based meta-analysis, NNRTI non-nucleoside reverse transcriptase inhibitor, Ctrough trough concentration, QD once daily, BID twice daily

The models developed and the PK targets selected were used to guide compound selection during preclinical development and to inform proof-of-concept trial design in subjects with HIV-1 for new antiretroviral agents. The models provided a reasonable prediction of the dose-response relationship for an investigational anti-HIV agent and enabled comparisons against existing and emerging treatment options. This MBMA approach serves as an effective path forward to increase the probability of success in achieving conclusive trial outcomes.

2.4.2 Early Clinical Development

Guselkumab is a human IgG1 monoclonal antibody (mAb) in clinical development that specifically blocks human interleukin (IL)-23. Understanding the exposure-response relationship of guselkumab is important to guide dose selection for a phase 2 study in patients with moderate-to-severe psoriasis. However, at the time the phase 2 study was being designed, only limited information was available from patients; population PK/PD modeling and simulation were therefore used to bridge the data gap and select the optimal phase 2 doses (Hu et al. 2014).

A semi-mechanistic population exposure-response model of guselkumab was developed to evaluate the association of guselkumab exposure with Psoriasis Area and Severity Index (PASI) scores, using a Type I indirect response model with an empirically modeled placebo effect (Fig. 2.11). The model was initially developed based on a small dataset from a phase 1 study of 47 healthy subjects and a phase 2 study of 24 patients with psoriasis who received various doses of guselkumab. A natural consequence of the limited patient efficacy data was substantial uncertainty in the resulting model. The simulated PASI75 (a 75% improvement in PASI score from baseline) response rates were considered higher than expected based on prior experience with ustekinumab, another human mAb with a similar mechanism of action (IL-12 and IL-23 blockade).
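For orientation, a minimal sketch of a Type I indirect response model is shown below, in which drug concentration inhibits the production rate of the PASI "signal" and an empirical placebo effect is layered on top. The constant-concentration stand-in for repeated dosing, the multiplicative placebo term, and all parameter values are illustrative assumptions, not the published model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for illustration only (not the published estimates)
KIN, KOUT = 0.1, 0.005      # production and loss of the PASI "signal" (baseline = KIN/KOUT = 20)
IC50 = 2.0                  # concentration giving half-maximal inhibition (mcg/mL)
PLB_MAX, KPLB = 0.2, 0.002  # empirical placebo-effect magnitude and onset rate (1/h)
CSS = 8.0                   # constant stand-in for steady-state drug concentration (mcg/mL)

def type1_idr(t, y):
    """Type I indirect response: the drug inhibits the production rate KIN."""
    r = y[0]
    drug_inhibition = CSS / (IC50 + CSS)
    placebo = PLB_MAX * (1.0 - np.exp(-KPLB * t))   # empirical placebo effect on production
    return [KIN * (1.0 - drug_inhibition) * (1.0 - placebo) - KOUT * r]

baseline = KIN / KOUT                   # pre-treatment steady state
t_end = 24.0 * 7 * 16                   # 16 weeks, in hours
sol = solve_ivp(type1_idr, (0.0, t_end), [baseline], dense_output=True)
improvement = 100.0 * (1.0 - sol.sol(t_end)[0] / baseline)
print(f"Simulated % PASI improvement from baseline at week 16: {improvement:.0f}%")
```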

Fig. 2.11
figure 11

Phase 2 dose selection of guselkumab in patients with psoriasis by utilizing multiple data sources via population PK/PD modeling and simulation (Modified from Hu et al. 2014). Gul Guselkumab, HV healthy volunteers, P1 phase 1, P2b phase 2b, P3 phase 3, PASI75 75% improvement in PASI score from the baseline, PSO psoriasis, q8w every 8 weeks, Ust ustekinumab

Following feedback from the clinical team, the model was updated by incorporating data from psoriasis patients who received placebo (n = 765) and from patients actively treated with ustekinumab (n = 1230) in two ustekinumab phase 3 trials. The similarity in mechanism of action between guselkumab and ustekinumab suggested that they may share similar exposure-response model parameters, except for IC50 (the concentration achieving 50% of maximum inhibition); to account for this, an additional parameter, RC50, defined as the ratio of the IC50 of ustekinumab to that of guselkumab, was included. Inclusion of the additional ustekinumab data and the corresponding adjustment to this parameter substantially reduced the uncertainty of all model parameters. Simulations of various scenarios were then conducted using the final model to select optimal doses and regimens. Sensitivity analyses were also performed with a few tentative RC50 values to evaluate the impact on the regimen decision.

Upon discussion with the clinical team, the final dose regimens were recommended for the phase 2b study. After the completion of the phase 2b study, the data observed were consistent with the model prediction, and an optimal phase 3 dose was then selected for further development.

2.4.3 Late Clinical Development

Canagliflozin is a sodium-glucose co-transporter-2 (SGLT2) inhibitor approved for the treatment of type 2 diabetes at a recommended dose of 100 or 300 mg once daily (QD). For patients requiring combined treatment with metformin and canagliflozin, a fixed-dose combination (FDC) tablet may improve patient convenience and compliance. Because metformin immediate release is typically administered twice daily (BID) in patients with type 2 diabetes, the FDC was developed for BID administration, with the canagliflozin dose divided into 50 or 150 mg per administration to provide the same currently approved total daily doses (100 and 300 mg).

Glycated hemoglobin (HbA1c) lowering (the efficacy measure for diabetes therapy) with the 100 and 300 mg QD doses (not FDC) in two clinical studies appeared somewhat greater than that with the 50 and 150 mg BID doses in a third study; however, differences in several factors between the studies limited the utility of such cross-study comparisons (most notably, baseline HbA1c was lower in the BID study than in the QD studies). To address this, a population PK/PD analysis (de Winter et al. 2016) was conducted to provide a robust model-based solution that could account for differences in study populations. In the absence of directly comparable long-term study results, the analysis was used to assess whether there were clinically significant differences in efficacy between QD and BID regimens of canagliflozin at the same total daily dose.

An established population PK model was used to predict full 24-h PK profiles from measured trough concentrations. The PK/PD model was then developed using pooled data from all three aforementioned studies, incorporating an Emax-type relationship between 24-h canagliflozin exposure and HbA1c-lowering efficacy (Fig. 2.12). Internal and external model validation demonstrated that the model adequately predicted HbA1c lowering for canagliflozin QD and BID regimens. Simulations using the final PK/PD model demonstrated the absence of clinically meaningful between-regimen differences in efficacy. This result supported the regulatory approval of the canagliflozin-metformin FDC tablet and eliminated the need for an additional clinical study.
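The logic of the between-regimen comparison can be sketched as follows: if PK is dose-linear and the Emax-type drug effect is driven by 24-h exposure, then QD and BID regimens delivering the same total daily dose yield the same predicted HbA1c lowering. The Python snippet below illustrates this with invented parameter values; the actual published model is longitudinal and accounts for study-population differences such as baseline HbA1c.

```python
import numpy as np

# Hypothetical parameters for illustration (not the published estimates)
EMAX = 1.0       # maximum placebo-adjusted HbA1c reduction (% points)
EAUC50 = 5.0     # 24-h exposure producing half-maximal effect (arbitrary exposure units)
CL = 10.0        # apparent clearance (L/h), PK assumed dose-linear

def auc_24h(total_daily_dose_mg):
    """With dose-linear PK, the 24-h AUC depends only on the total daily dose."""
    return total_daily_dose_mg / CL

def hba1c_lowering(auc):
    """Emax-type relationship between 24-h exposure and HbA1c lowering."""
    return EMAX * auc / (EAUC50 + auc)

for tdd, label in [(100, "100 mg QD vs 50 mg BID"), (300, "300 mg QD vs 150 mg BID")]:
    effect = hba1c_lowering(auc_24h(tdd))
    print(f"{label}: identical 24-h exposure -> predicted HbA1c lowering {effect:.2f}%")
```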

Fig. 2.12
figure 12

Support of regulatory approval of canagliflozin-metformin fixed-dose combination via population PK/PD modeling and simulation (Modified from de Winter et al. 2016). QD Once daily, BID twice daily, TDD total daily dose

2.4.4 Pediatric Development

CPD-1, a non-nucleoside reverse transcriptase inhibitor for the treatment of HIV-1 infection, was in phase 2 development. To support pediatric formulation development, it was important to understand the dose levels likely to be investigated in children. A PBPK modeling approach was undertaken (Xu et al. 2014) to provide a computational framework for predicting PK exposure in children and thereby guide dose selection. The PBPK model incorporated the physicochemical properties of the drug, the physiology of the system, and known enzyme maturation trajectories.

An adult PBPK model was initially constructed in Simcyp® and translated to pediatrics by applying built-in maturation functions for drug-related parameters (e.g., enzyme ontogeny) and system-related physiological parameters (e.g., organ size, blood flow). CPD-1 is primarily eliminated by CYP3A4-mediated hepatic metabolism, and 100% CYP3A4 metabolism was assumed to facilitate PK scaling from adults to children. The pediatric PBPK model was used to simulate exposures in virtual pediatric populations to guide dose selection in children: the proposed dose levels must provide steady-state exposures that are sufficient for efficacy (i.e., steady-state trough concentration ≥ 0.88 μM) but within the safety exposure limit (i.e., steady-state AUC [area under the curve] over the 24-h dosing interval < 66.5 μM·h). Based on the model-based dose projections in children, a suspension formulation was selected for younger children (<6 years old), while scored tablets based on the adult formulation, allowing dosing flexibility (i.e., split doses), were selected for children ≥6 years old (Fig. 2.13).
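As a greatly simplified sketch of the kind of scaling a PBPK platform performs internally (the actual Simcyp model is far more elaborate, simulating whole virtual populations), the snippet below combines allometric size scaling of clearance with an assumed sigmoidal CYP3A4 ontogeny function and back-calculates pediatric doses that match an assumed 100 mg adult dose on an AUC basis; all reference values and maturation parameters are illustrative.

```python
import numpy as np

# Hypothetical adult reference values for illustration
ADULT_CL = 20.0     # L/h, assumed adult clearance of CPD-1
ADULT_WT = 70.0     # kg
ADULT_DOSE = 100.0  # mg, assumed adult dose

def cyp3a4_ontogeny(age_yr, age50=0.64, hill=1.9):
    """Assumed sigmoidal CYP3A4 maturation (fraction of adult activity); illustrative values."""
    return age_yr**hill / (age50**hill + age_yr**hill)

def pediatric_cl(age_yr, weight_kg):
    """Allometric size scaling of clearance combined with enzyme ontogeny (100% CYP3A4 assumed)."""
    return ADULT_CL * (weight_kg / ADULT_WT) ** 0.75 * cyp3a4_ontogeny(age_yr)

def matched_dose(age_yr, weight_kg):
    """Dose yielding the same steady-state AUC as the adult dose (AUC = dose / CL)."""
    return ADULT_DOSE * pediatric_cl(age_yr, weight_kg) / ADULT_CL

for age, wt in [(2, 12.0), (6, 20.0), (12, 40.0)]:
    print(f"age {age:>2} yr, {wt:>4.0f} kg: AUC-matched dose ≈ {matched_dose(age, wt):.0f} mg")
```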

Fig. 2.13
figure 13

Decision-making in pediatric formulation development via physiologically based PK modeling (Modified from Xu et al. 2014). AUC0–24h Area under the curve during 24 h dosing interval, C24h,SS steady-state trough concentration at 24 h, QD once daily, yr year

It is anticipated that these formulation/dose combinations will provide dosing flexibility in children while streamlining the supply chain of CPD-1, shortening its development timeline, and reducing the costs associated with potential dosing waste.

2.5 Concluding Remarks

Pharmacometrics is an evolving, multifaceted science encompassing pharmacokinetic and pharmacodynamic modeling and simulation. Its value in quantitatively integrating available information during drug development has been increasingly appreciated, informing decision-making at critical stages such as first-in-human, proof-of-concept, and pivotal trials, along with drug approval and labeling. Indeed, pharmacometrics is considered a valuable tool for improving the return on R&D investment.

The use of pharmacometrics in drug development requires adequate resources and appropriately trained experts. Continuing education is critical as new modeling approaches emerge. A good pharmacometrician must see his or her role as extending beyond developing models: a model, complex or simple, should translate data into knowledge about a drug candidate and support quantitative decision-making. A pharmacometrician's job is to influence drug development decisions, not simply to develop models.