
5.1 Introduction

The FDA Process Validation Guidance (2011) advocates a life cycle approach for manufacturing to ensure the process can reliably and consistently provide quality product that meets the therapy’s desired efficacy and safety profile. This life cycle approach emphasizes collection and evaluation of appropriate data as evidence to demonstrate that the process is in a controlled state to deliver quality product. It has three stages:

  1. Process design,

  2. Process qualification, and

  3. Continued process verification (CPV).

Chapter 3 discussed the process design stage in which state-of-the-art science and engineering are used to design a process, statistical tools including design of experiment are used to identify sources of variation, and risk assessment is used to establish a control strategy for critical parameters and attributes. A quality by design (QbD ) approach is desired to build quality into the process.

Chapter 4 discussed process qualification , which included two substages:

  1. Facility, equipment, and systems qualification and

  2. Process performance qualification (PPQ).

Substage 1 ensures all facilities, equipment, and systems meet cGMP and other regulatory standards and that they are fit for the purpose of a reliable and controlled process to deliver quality product. Substage 2 (PPQ) confirms that the process performs as expected.

The third stage of the FDA process validation considered in this chapter is called continued process verification (CPV). In this stage, data are continuously collected and evaluated to verify the process remains in the desired controlled state. Using the analogy of an orbital spacecraft, process validation Stages 1 and 2 represent the development of an orbital spacecraft and the successful launch into orbit. Stage 3 consists of the work done to ensure the spacecraft remains in orbit. This chapter discusses the key components of CPV and statistical tools that are useful for this purpose.

5.2 Components in Continued Process Verification

It is assumed that Stages 1 and 2 of the validation process provide a good understanding of the manufacturing process and associated analytical methods. Using risk assessment, quality by design, and appropriate quality systems (ICH Q8 (R2), ICH Q9, ICH Q10), an appropriate control strategy will be in place, and the process capability to deliver intended-for-use products will have been validated.

Alsmeyer and Pazhayattil (2014) described a simple case study of CPV for small molecules. The BioPhorum Operations Group (BPOG 2014) provides a position paper on CPV with a case study using a monoclonal antibody manufacturing process. This study is a continuation of a case study in bioprocess development using risk assessment and quality by design (CMC Biotech Working Group 2009). These two case studies provide examples for the following discussion.

Figure 5.1 presents a simplified diagram of key parameters from different sources that potentially need monitoring in Stage 3.

Fig. 5.1 Manufacturing components and structure of CPV monitoring variables

The parameters in Fig. 5.1 are classified into four sets:

  1. Critical material attributes,

  2. Critical process parameters,

  3. Critical quality attributes, and

  4. Critical method attributes.

Because of the progression shown in Fig. 5.1, critical quality attributes are often called lagging factors, and critical material attributes and critical process parameters are called leading factors (Strickland and Altan 2016).

These categories may be neither exhaustive nor mutually exclusive, but they are useful for determining the quantities that need to be monitored in Stage 3. Modifications of this classification system can be considered for particular circumstances. This structure is useful for understanding the sources of variation, as emphasized in the FDA validation guidance.

  1. Critical material attributes (CMaAs) are properties or characteristics of input materials used during the manufacturing process. For example, in a typical solid dose manufacturing process, critical characteristics of the API and excipients (e.g., water content) can be considered critical material attributes (Alsmeyer and Pazhayattil 2014). In a monoclonal antibody manufacturing process, examples of CMaAs include certain characteristics of working cells, key nutrient levels of the cell culture, and glucose feed levels (BPOG 2014).

  2. Critical process parameters (CPPs) are those that relate to the manufacturing process and directly impact product quality. CPPs should be identified in Stages 1 and 2 of process validation, along with their control range (i.e., design space). These parameters can appear in different unit operations for small molecules (Fig. 2 of Alsmeyer and Pazhayattil 2014) or in different manufacturing steps for monoclonal antibodies (Chap. 10 of BPOG 2014).

  3. Critical quality attributes (CQAs) are properties and characteristics of the drug substance or final product. CQAs must meet specifications in order to ensure that the product quality supports its intended safety and efficacy for patients. Typical CQAs relate to product strength, potency, identity, and purity. CQAs for one unit operation may become CMaAs for a subsequent unit operation.

  4. Critical method attributes (CMeAs), including critical reagent properties, method parameters, and method accuracy and precision measures, are all candidates for continual monitoring in order to ensure the analytical methods remain fit for purpose. These are often neglected, yet much of the data used to assess the CMaAs and CQAs are output from analytical methods. If the analytical methods are not validated and controlled for their intended accuracy and precision, the measured CMaAs and CQAs will be compromised. In addition to minimizing the risk of poor measurements, this information is useful in troubleshooting non-conformances of a CQA to determine whether the problem is attributable to the manufacturing process or the analytical method.

According to quality by design principles, effective control of CMaAs and CPPs should lead to high confidence that requirements on CQAs will be met. Therefore, the FDA guidance on validation emphasizes sufficient understanding of the process, and states that “Focusing exclusively on qualification efforts without also understanding the manufacturing process and associated variations may not lead to adequate assurance of quality.” By monitoring parameters in all four categories, the CPV monitoring program provides evidence that the process is sufficiently understood.

The structure presented in Fig. 5.1 implies the following:

  1. If all CQAs are in control, the product quality is in control.

  2. If all CMaAs and CPPs are in control, there is a high chance that CQAs will be in control.

  3. If all CMeAs are in control, it follows that all data represent the true state of the product and process.

Data for the identified CMaAs, CPPs, CQAs, and CMeAs should be compiled in a format amenable to trending and analysis using available software. The reader is referred to Chap. 10 of BPOG (2014) for a well-structured list of variables to be monitored in each step of a monoclonal antibody drug substance manufacturing process.

5.2.1 Data Collection and Control Limits

Historical data from the four parameter groups discussed above are used for constructing statistical control limits. Once sufficient process understanding is achieved, control limits based on historical process performance are not expected to require revision unless the process has been changed or impacted in some defined manner (e.g., an investigation determines a process shift has occurred). Periodic examination of the appropriateness of the limits may be undertaken based on the frequency of manufacturing.

A sampling plan should be determined for each monitored variable. The plan should include sampling frequency and the type of chart(s) used for trending. An analysis plan should also be created, including the process for constructing control limits, the frequency of analysis, how results will be interpreted, and actions to be taken after a trend or out-of-control event is identified.

New data are best entered into the historical database in a timely manner for trending and analysis so that any potential signals may be investigated in a meaningful manner. Examining data with very low frequency limits the usefulness of the CPV program because it reduces the ability to react to potential factors that may lead to out-of-specification results or excursions from in-process control limits.

5.2.2 Monitoring

Consistent with the 2011 FDA guidance , the goal of CPV is “continual assurance that the process remains in a state of control (the validated state) during commercial manufacture.” This goal ensures that high quality products can be consistently supplied to patients. As such, an effective CPV program monitors the chosen parameters for trends and defines actions to be taken if signals are identified.

If the CPV program identifies a signal, any of the critical material, process, quality, or method attributes may require further examination. The investigation type and rigor will depend on the specific attribute that displays the signal, the scientific knowledge about the given parameter and process, an examination of previous investigations, and an analysis of the current data set. One should not assume that the signal is a result of inappropriately set limits and blindly reset control limits so that the process appears in control. However, an examination of the appropriateness of the limits may be considered should the data indicate a need for such an assessment.

In cases where the monitoring program detects a signal, the implications may differ depending on which of the following two cases occurs:

  1. One or more CQAs are trending out of control.

     a. One should first examine the analytical method performance. If the method appears out of control, a thorough investigation into the method should be undertaken. Determine if the correct CMeAs are being monitored, if they are being monitored with the appropriate frequency, and if the method is fit for purpose. If the investigation finds that the analytical method is out of control, it should be improved, samples should be re-tested using the updated method, and the quality attribute can then be re-assessed.

     b. If all CMeAs are in control, the out-of-control signal for the CQA is confirmed. The signal is attributed to some portion of the manufacturing process, and it is now necessary to examine the CMaAs and CPPs.

     c. If one or more of the CMaAs or CPPs are out of control, the process should be re-calibrated. The investigation may indicate the process is not well understood, and more study of the process is warranted. The resulting investigation may lead to a new control and monitoring strategy, possibly including new monitoring variables or control limits.

  2. All CQAs are within control, but out-of-control signals occur for other attributes.

     a. If some CMaAs or CPPs are out of control, this implies that these upstream parameters may not truly impact the CQAs. A risk assessment should be used to determine whether the out-of-control variable should continue to be monitored.

     b. If some CMeAs are out of control, there are two possibilities:

        i. The product quality is consistent, and the method aberration is not large enough to change the method performance or severely affect the quality attribute data.

        ii. The CQAs appear within control only because of a serious method aberration. In fact, the CQAs may be out of control, but data distortion due to the method produces a misleading result.

     In either case, the method and its control strategy should be reviewed. Perhaps an adjustment of the control strategy solves the problem, or a partial validation or re-validation of the method is warranted. In this situation, product samples may need to be re-tested using a calibrated method.

5.3 Statistical Tools for CPV

Statistical quality tools are used to verify that CQAs are being properly controlled throughout CPV. Statistical control charts, process capability assessment, and acceptance sampling methodology are among the statistical quality applications used to achieve the process monitoring and improvement required in CPV. This section introduces some of these tools as applied to CPV. ASTM (2010) provides useful material for further reading.

5.3.1 Acceptance Sampling

Acceptance sampling plans can be incorporated into the overall strategy of CPV for ensuring product quality. The majority of acceptance sampling plans involve attribute sampling, in which the inspected characteristics are qualitative or nominal. However, in many cases quality attributes are physical measurements on a continuous or quantitative scale. In such cases lot acceptance is based on the percentage of individual values in a lot that satisfy a numerical specification.

Acceptance sampling consists of a sampling design and a set of rules for making decisions based on the resulting sample. For situations where only a single sample is selected, the two decisions are

  1. Accept the lot or

  2. Reject the lot.

In a pre-planned multiple sample design, a third decision is available: select another sample and then decide to accept the lot, reject the lot, or continue sampling.

The fundamental tool for selecting a sampling plan is the operating characteristic (OC) curve. An OC curve is a bivariate graph with the probability of accepting a lot on the vertical axis and the percentage of units that do not meet the specification limits on the horizontal axis. Figure 5.2 provides an example of an OC curve for an attribute sampling plan in which a sample of 80 items is selected at random from a lot. A lot is “accepted” if there are fewer than two non-conforming (defective) units in the sample. The lot is “rejected” if there are two or more non-conforming units in the sample. The terms “accepted” and “rejected” are used here in a generic sense; the action that results from either conclusion depends on the particular application.

Fig. 5.2 OC curve with sample size = 80 and acceptance number = 1

From Fig. 5.2 it is seen that this plan virtually ensures no lot is accepted if the percentage of defective units in the lot exceeds 8%. If the percentage of defective units is 2%, the probability of accepting the lot based on this sampling plan is 52.3%.
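For readers who wish to reproduce such curves, the following minimal Python sketch (not part of the original text) computes points on the OC curve of Fig. 5.2 using the binomial distribution in scipy; the function name prob_accept is illustrative.

```python
# Minimal sketch of the OC curve in Fig. 5.2: sample size n = 80,
# acceptance number c = 1, so a lot is accepted when the sample
# contains 0 or 1 non-conforming units.
from scipy.stats import binom

def prob_accept(p_defective, n=80, c=1):
    """Probability of accepting a lot with true defective proportion p_defective."""
    return binom.cdf(c, n, p_defective)

for p in (0.005, 0.02, 0.05, 0.08):
    print(f"{100 * p:4.1f}% defective -> P(accept) = {prob_accept(p):.3f}")
# At 2% defective this returns roughly 0.52, matching the 52.3% quoted above.
```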

When deciding whether to accept or reject a lot, there are two types of errors:

  1. Rejecting a good lot (Type 1 error) and

  2. Accepting a bad lot (Type 2 error).

The risk of committing a Type 1 error is referred to as the producer's risk and is denoted by the Greek letter α. The risk of committing a Type 2 error is called the consumer's risk and is denoted by the Greek letter β. “Good” and “bad” are typically defined in terms of the percentage of non-conforming (defective) units in the lot. The acceptance quality level (AQL) is the percentage of defective units on the horizontal axis associated with a 95% probability of acceptance on the vertical axis. The lot tolerance percent defective (LTPD) is the percentage of defective units on the horizontal axis associated with a 10% probability of acceptance. It is also useful to define AQL and LTPD in terms of the proportions \( {p}_1=\frac{\mathrm{AQL}}{100\%} \) and \( {p}_2=\frac{\mathrm{LTPD}}{100\%} \). The AQL and LTPD values for Fig. 5.2 are shown in Fig. 5.3.

Fig. 5.3 AQL and LTPD for Fig. 5.2

Determination of acceptable values for AQL and LTPD requires assessment of a variety of criteria, including risks, costs, and consumer requirements. The first step in this process is to classify the severity of the defects that might occur. Typical classifications are Critical, Major, and Minor. Defects in the same category would generally be expected to have the same values for AQL and LTPD.

To demonstrate how acceptance sampling can be used in CPV, consider a quality attribute monitored in Stage 3 to ensure it maintains the same quality level attained in Stages 1 and 2. Values will conform to an acceptable quality level if they do not exceed an upper specification limit (USL).

A random sample of size n is selected and the quality attribute is measured for each item. The sample mean is then compared to the quantity

$$ A= USL- kS $$
(5.1)

where S is the sample standard deviation and k is a constant that is a function of AQL and LTPD . The process is considered acceptable if the sample mean of the n items is less than or equal to A. Schilling and Neubauer (2009, p. 186) provide the following approximate formulas for both k and n using what is called the k-method:

$$ \begin{array}{l} k=\frac{Z_{1-{p}_2}\,{Z}_{1-\alpha }+{Z}_{1-{p}_1}\,{Z}_{1-\beta }}{Z_{1-\alpha }+{Z}_{1-\beta }}\\[6pt] {} n={\left(\frac{Z_{1-\alpha }+{Z}_{1-\beta }}{Z_{1-{p}_1}-{Z}_{1-{p}_2}}\right)}^2\quad \text{when the variance is known}\\[6pt] {} n={\left(\frac{Z_{1-\alpha }+{Z}_{1-\beta }}{Z_{1-{p}_1}-{Z}_{1-{p}_2}}\right)}^2\left(1+\frac{k^2}{2}\right)\quad \text{when the variance is unknown}\end{array} $$
(5.2)

where \( Z_{\delta} \) is the percentile of a standard normal distribution with area δ to the left.

To illustrate how acceptance sampling may be applied to Stage 3, consider a powder fill process. Suppose that during Stages 1 and 2, the net weight of each vial should be at least 25 g. To determine whether the process maintains this quality level, an acceptance sampling plan requires that the process should be accepted 95% of the time when the proportion of vials with net weight below 25 g is AQL = 0.5%, and should be rejected 90% of the time when the proportion of vials with net weight below 25 g is LTPD = 5%. This specification can be expressed in acceptance sampling terminology as a sampling plan with \( {p}_1=\frac{\mathrm{AQL}}{100\%}=0.005 \), \( {p}_2=\frac{\mathrm{LTPD}}{100\%}=0.05 \), α = 0.05, and β = 0.10. Using these values and assuming that the variance of the process is unknown, \( {Z}_{1-{p}_1}=2.576 \), \( {Z}_{1-{p}_2}=1.645 \), \( {Z}_{1-\alpha }=1.645 \), and \( {Z}_{1-\beta }=1.282 \), which give k = 2.05 and n = 31 (rounding up). If the variance is known, the sample size reduces to n = 10 (rounding up).
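The following minimal Python sketch (using scipy for the standard normal percentiles; the function name k_method_plan is illustrative and not from Schilling and Neubauer) evaluates the k-method formulas in (5.2) and reproduces the k and n values computed above.

```python
# Sketch of the k-method formulas in (5.2) for a variables acceptance sampling plan.
from math import ceil
from scipy.stats import norm

def k_method_plan(aql, ltpd, alpha, beta, variance_known=False):
    """Return (k, n) from AQL/LTPD proportions and producer/consumer risks."""
    z_p1, z_p2 = norm.ppf(1 - aql), norm.ppf(1 - ltpd)
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    k = (z_p2 * z_a + z_p1 * z_b) / (z_a + z_b)
    n = ((z_a + z_b) / (z_p1 - z_p2)) ** 2
    if not variance_known:
        n *= 1 + k ** 2 / 2          # correction when the variance is unknown
    return k, ceil(n)

# Reproduces the vial fill example above: k is about 2.05 and n = 31.
k, n = k_method_plan(aql=0.005, ltpd=0.05, alpha=0.05, beta=0.10)
print(round(k, 2), n)
```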

Schilling and Neubauer (2009) and Burdick and Ye (2016) provide more in-depth discussions of acceptance sampling . Kiermeier (2008) provides R-code for many of the required calculations.

5.3.2 Statistical Control Charts

Statistical control charts are useful for continually verifying that a process remains in control. The main goal of statistical control charting is to use probability theory to determine whether an observed deviation is due to a chance cause (also known as a common cause) or to an assignable cause. If a control chart signals the occurrence of an assignable cause, the process is stopped and appropriate actions are taken to eliminate the assignable cause. In addition, preventive actions are put in place to reduce the chance that the assignable cause reappears in the future. One set of rules generally used to determine when an assignable cause occurs is provided by Nelson (1984).

To briefly demonstrate this process, we present results for an individual value chart. An individual value chart is used in Stage 3 to monitor individual values of CQAs for released lots. Suppose that a CQA used for lot disposition is monitored in an individual control chart. A sample of n lots is selected and a single CQA measurement is taken from each lot. The collected sample is represented as \( Y_1, Y_2, \dots, Y_n \). For the procedure that follows, it is assumed that when the process is in control, the sample of n lots behaves as a random sample selected from a normal population with mean μ and standard deviation σ. Based on the probabilities of the normal distribution, the probability that a single observation falls within the range from μ − 3σ to μ + 3σ is roughly 99.73%. The first rule presented by Nelson (1984) states that an individual value falling outside this range is a signal that the process is out of control. In practice, μ and σ are unknown and must be estimated from the sample. An unbiased estimator for the unknown process mean μ is the sample average,

$$ \overline{Y}=\frac{{\displaystyle \sum_{i=1}^n{Y}_i}}{n}. $$
(5.3)

An estimate of σ using a moving range of two consecutive measurements in the sample is

$$ \widehat{\sigma }=\frac{\overline{MR}}{1.128},\qquad \overline{MR}=\frac{\sum_{i=1}^{n-1}\left|{Y}_{i+1}-{Y}_i\right|}{n-1} $$
(5.4)

An individual control chart is established by plotting a run chart for the sample values, with horizontal reference lines at \( \overline{Y} \) to represent the center line (CL),

$$ LCL=\overline{Y}-3\times \frac{\overline{MR}}{1.128}=\overline{Y}-2.66\times \overline{MR} $$
(5.5)

to represent the lower control limit (LCL), and

$$ UCL=\overline{Y}+2.66\times \overline{MR} $$
(5.6)

to represent the upper control limit (UCL). Figure 5.4 presents an example of an individual value chart.

Fig. 5.4 Individual value control chart

A moving range chart as shown in Fig. 5.5 is useful to complement the information provided by the individual control chart. A moving range chart has a horizontal reference line at \( \overline{MR} \) and an upper control limit of \( UCL=3.267\times \overline{MR} \).

Fig. 5.5 Moving range control chart

Figures 5.4 and 5.5 were obtained from a sample of n = 26 released lots of a manufacturing process. The measured CQA is concentration expressed as a percentage of label claim. A summary of the calculated results is shown in Table 5.1.

Table 5.1 Computations for control charts

A graphical inspection of the plots indicates that none of the individual values or moving ranges are outside their respective control limits. This suggests that the process is in a state of statistical control.
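As an illustration of the calculations in (5.3) through (5.6), the following Python sketch computes the I-MR chart limits and flags any individual values outside the three standard deviation limits (Nelson's first rule). The data vector shown is hypothetical, since the 26 lot values behind Figs. 5.4 and 5.5 are not listed in the text.

```python
# Sketch of the individual (I) and moving range (MR) chart limits.
import numpy as np

def imr_limits(y):
    """Center lines and control limits for an I-MR chart from individual values y."""
    y = np.asarray(y, dtype=float)
    mr = np.abs(np.diff(y))                      # moving ranges of consecutive values
    y_bar, mr_bar = y.mean(), mr.mean()
    i_chart = {"CL": y_bar,
               "LCL": y_bar - 2.66 * mr_bar,
               "UCL": y_bar + 2.66 * mr_bar}
    mr_chart = {"CL": mr_bar, "LCL": 0.0, "UCL": 3.267 * mr_bar}
    return i_chart, mr_chart

# Hypothetical percent-of-label-claim values for illustration only.
y = [99.8, 100.4, 101.0, 99.5, 100.2, 98.9, 100.7, 99.9, 100.3, 101.2]
i_chart, mr_chart = imr_limits(y)
out_of_control = [v for v in y if not i_chart["LCL"] <= v <= i_chart["UCL"]]
print(i_chart, mr_chart, out_of_control)
```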

Since decisions from control charts are based on probability, there is a risk that a future individual value will fall outside the control limits even though the process is truly in control. Similarly, there is a chance that a future individual value will fall within the control limits even if an assignable cause is present. The consequences of such errors can be severe and need to be considered in establishing a risk strategy. These risks would be dramatically increased by using two or four standard deviation control limits: two standard deviation limits would produce frequent nuisance signals, whereas four standard deviation limits would fail to detect shifts due to an assignable cause. The three standard deviation control limits described above provide a good balance between these two risks.

Many other types of control charts are used throughout Stages 1–3 of the validation process, and there is a wealth of references on this topic. Interested readers are referred to ASTM (2010), Montgomery (2013), Wheeler (2012), ASTM E2587 (2016) and Altan et al. (2016).

5.3.3 Process Capability and Performance Assessment

When a manufacturing process is in statistical control, this does not necessarily imply that it is producing products that meet predetermined quality specifications. Therefore, during CPV it is important to monitor not only process stability (statistical control) but also process capability (i.e., the ability to produce products that conform to specifications). Monitoring process capability often identifies focal points for process improvement, and it can be used to assess the gain in capability after improvements have been implemented. Process capability indices can also be used to identify the need to reduce common cause variation or to compare processes.

When a process is in statistical control, its quality is predictable. Thus, before assessing process capability , it is necessary that the process be in a state of statistical control. It is common to define process capability in units of standard deviations of the controlled process. We will denote this process standard deviation as σ. In particular, it is common to look at the relationship between the standard deviation and the range between the upper and lower specification limits. This capability index is defined as

$$ {C}_p=\frac{USL- LSL}{6\sigma} $$
(5.7)

where LSL is the lower specification limit and USL is the upper specification limit.

When process data are well represented by a normal distribution and the process is centered between the upper and lower specification limits (i.e., the process mean equals (LSL + USL)/2), the capability index \( C_p \) can be related to the expected proportion of units that fall outside the specification limits. In particular, the proportion of defective product (expressed in parts per million (ppm)) is related to \( C_p \) by the equation

$$ ppm\ \mathrm{defective}=1,000,000\times 2\Phi \left(-3{C}_p\right) $$
(5.8)

where Φ is the cumulative distribution function of the standard normal distribution. For example, assume that the process is centered between the specification limits and that \( C_p = 1 \). Then

$$ \begin{aligned} ppm\ \mathrm{defective} &= 1{,}000{,}000\times 2\Phi \left(-3\times 1\right)\\ &= 1{,}000{,}000\times 2\times \Phi \left(-3\right)\\ &= 1{,}000{,}000\times 2\times 0.00135\\ &= 2{,}700\ \mathrm{ppm}.\end{aligned} $$
(5.9)

Thus, \( C_p = 1 \) corresponds to a centered process that produces 2700 ppm outside of the specification limits. This relationship between \( C_p \) and ppm can be used to establish acceptable values for \( C_p \). Since \( C_p < 1 \) implies that more than 2700 ppm will be out of the specification limits, and \( C_p > 1 \) implies less than 2700 ppm out of the specification limits, the process improves as \( C_p \) increases. In addition to describing the overall capability of a process, \( C_p \) can be used to determine where to focus process improvement efforts.
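The form of (5.8) follows directly from the normal model: for a centered process, \( USL-\mu =\left( USL- LSL\right)/2 \), so

$$ P\left(\mathrm{out}\ \mathrm{of}\ \mathrm{specification}\right)=2\Phi \left(-\frac{USL-\mu }{\sigma }\right)=2\Phi \left(-\frac{\left( USL- LSL\right)/2}{\sigma }\right)=2\Phi \left(-3{C}_p\right), $$

and multiplying this proportion by 1,000,000 expresses the defective rate in ppm.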

The capability index C p is not appropriate when a process is operating off-center. In such cases, an alternative capability index is defined as

$$ {C}_{pk}= \min \left[\frac{USL-\mu}{3\sigma},\frac{\mu - LSL}{3\sigma}\right] $$
(5.10)

where μ is the (off-centered) process mean.

Because μ and σ are typically unknown, they must be estimated from a sample. There are several ways to estimate σ, and this unfortunately has created much confusion as to what is the “correct” manner. As in most statistical procedures, the “correct” manner depends on the situation. We demonstrate one approach, but encourage the reader to read more on this topic in the references provided at the end of this section.

Because the control chart in Sect. 5.3.2 indicates the process is stable, we consider the sample to be a random sample of n = 26 from a process with mean μ and standard deviation σ. Thus we will use the sample mean \( \overline{Y} \) as an estimator for μ and the sample standard deviation S as an estimator for σ. A point estimator and a 100(1 − α)% lower confidence bound on \( C_p \) using these estimators are

$$ \begin{array}{c}{\widehat{C}}_p=\frac{USL- LSL}{6\times S}\\ {}L={\widehat{C}}_p\times \sqrt{\frac{\chi_{\alpha :n-1}^2}{n-1}}\end{array} $$
(5.11)

where

$$ S=\sqrt{\frac{\sum_{i=1}^n{\left({Y}_i-\overline{Y}\right)}^2}{n-1}},\qquad \overline{Y}=\frac{1}{n}\sum_{i=1}^n{Y}_i $$
(5.12)

and \( {\chi}_{\alpha : n-1}^2 \) is the chi-squared percentile with n − 1 degrees of freedom and area α to the left. The lower bound on C p can be used to answer the question, “What is the smallest value of C p consistent with the uncertainty in the data?” A process is considered capable if L is greater than the desired value for C p .

A point estimator and a 100(1 − α)% lower confidence bound on \( C_{pk} \) are

$$ \begin{array}{c}{\widehat{C}}_{pk}= \min \left[\frac{USL-\overline{Y}}{3 S},\frac{\overline{Y}- LSL}{3 S}\right]\\ {} L={\widehat{C}}_{pk}\left[1-{Z}_{1-\alpha}\sqrt{\frac{1}{9 n{\widehat{C}}_{pk}^2}+\frac{1}{2\left( n-1\right)}}\right]\end{array} $$
(5.13)

where \( Z_{1-\alpha} \) is the percentile of a standard normal distribution with area 1 − α to the left.

Assume that the specification limits are LSL = 95% and USL = 105%. Calculations for (5.11) and (5.13) are provided in Table 5.2 using the data from Sect. 5.3.2 where \( \overline{Y}=100.10 \) and S = 1.40.

Table 5.2 Computations for capability indexes
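A minimal Python sketch of the calculations in (5.11) and (5.13), using scipy for the chi-squared and normal percentiles and the summary statistics given above, is shown below; the function names are illustrative.

```python
# Point estimates and lower confidence bounds for Cp and Cpk per (5.11) and (5.13),
# with Ybar = 100.10, S = 1.40, n = 26 from Sect. 5.3.2.
from math import sqrt
from scipy.stats import chi2, norm

def cp_with_bound(usl, lsl, s, n, alpha=0.05):
    """Cp point estimate and 100(1 - alpha)% lower confidence bound."""
    cp_hat = (usl - lsl) / (6 * s)
    lower = cp_hat * sqrt(chi2.ppf(alpha, n - 1) / (n - 1))
    return cp_hat, lower

def cpk_with_bound(usl, lsl, ybar, s, n, alpha=0.05):
    """Cpk point estimate and 100(1 - alpha)% lower confidence bound."""
    cpk_hat = min((usl - ybar) / (3 * s), (ybar - lsl) / (3 * s))
    lower = cpk_hat * (1 - norm.ppf(1 - alpha)
                       * sqrt(1 / (9 * n * cpk_hat ** 2) + 1 / (2 * (n - 1))))
    return cpk_hat, lower

print(cp_with_bound(105, 95, 1.40, 26))            # roughly (1.19, 0.91)
print(cpk_with_bound(105, 95, 100.10, 1.40, 26))
```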

As will always be the case, \( C_{pk} \) is no greater than \( C_p \). Some interpret \( C_p \) as the maximum attainable capability, achieved when the process is centered. Using equation (5.8) and the lower bound of \( C_p \) provides the estimated out-of-specification rate of

$$ \begin{aligned} ppm\ \mathrm{defective} &= 1{,}000{,}000\times 2\Phi \left(-3\times 0.91\right)\\ &= 1{,}000{,}000\times 2\times \Phi \left(-2.73\right)\\ &= 1{,}000{,}000\times 2\times 0.0032\\ &= 6{,}400\ \mathrm{ppm}.\end{aligned} $$
(5.14)

A process performance index is closely related to a process capability index. Performance indices are typically represented as \( P_p \) and \( P_{pk} \). A capability index is typically used in a prospective assessment where a process has been demonstrated to be in statistical control; such an assessment focuses on the ability of the process to meet specifications in the future. A process performance index is used in a retrospective assessment to examine past process behavior and to determine how a process will perform in the future if left unchanged; the process under examination may or may not be in statistical control. Some authors differentiate a capability index from a performance index by the manner in which the standard deviation is computed: the capability index employs an estimate of short-term variance, and the performance index employs an estimate of long-term variance. More information on the topics considered in this section is provided in Altan et al. (2016), ASTM E2281 (2015), and Montgomery (2013).

5.3.4 Out of Specification and Corrective and Preventive Action (CAPA)

The goal of the CPV program is to detect a process shift before an out-of-specification result is observed. Typically, an out-of-specification result leads to a rigorous investigation, and may ultimately lead to rejection of the batch. Results that do not meet specifications may be observed for unit operations where the CPV program has previously detected signals or where the monitored attribute has displayed less than ideal process capability . However, out-of-specification results may also be obtained for parameters where the CPV program has not previously detected any concerns.

An overall examination of the CPV strategy should be part of any investigation into out-of-specification results. The level of scrutiny given to the CPV monitoring for the given parameter will depend on previous investigations and corrective actions already in place. If the CPV program has previously detected an issue with a given parameter, then the monitoring program is likely functioning properly, and investigative efforts might focus on the effectiveness of previous corrective actions. In contrast, if the CPV program has not previously detected any potential issues, an examination into whether the current monitoring strategy is effective should be undertaken.

When a non-conformity occurs, the following steps are required to investigate and take actions for correction.

  1. The magnitude and scope of its risk should be assessed. If the risk is minimal, perhaps no further action is needed. Otherwise, a root cause analysis should be conducted to identify the assignable cause, and a solution should be identified.

  2. Corrective and preventive actions are taken to eliminate the root cause of the non-conformity and prevent its future occurrence.

  3. The attribute associated with the non-conformance must be closely monitored to verify that it is now consistently in control and within specification.

This process is demonstrated in the following example. Suppose the potency of a batch of biological product exceeds the upper specification limit. Since potency is a critical quality attribute, the risk associated with this non-conformity is high; such a non-conformance could potentially lead to safety problems for patients. Accordingly, a root cause analysis is conducted using the process described in Sect. 5.2.2. The performance of the analytical method for potency is examined first. Assume there is an upward trend in the potency of the negative control, which suggests a change in the reference standard. Further investigation leads to the discovery that the shelf-life of the reference standard has been extended twice. To determine if this was the root cause, a new reference standard was qualified and compared to the original reference standard. The comparability analysis showed that the method performance with the new reference standard was highly similar to the performance obtained with the original reference standard before the shelf-life extension. Additional samples from the same batch of the biological product were tested using the new reference standard, and the results were all within specification (corrective action). From this analysis, it was concluded that the excursion was due to method drift. A new process was established to monitor the stability of the reference standard (preventive action). If no aspect of the analytical method had been found to be the root cause, a further drill-down into the manufacturing process would have been required.

Regulatory agencies expect companies to verify that changes made in response to a CAPA actually eliminate the root cause of a non-conformance. This requires examining data collected after the CAPA and demonstrating that the failure rate targeted by the CAPA satisfies the desired goal. Typically a protocol is drafted that states a post-change sample must satisfy some test criterion related to an upper value for the new defective rate. Burdick and Ye (2016) provide an example of such an application.
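As one hedged illustration (not the procedure in Burdick and Ye 2016), the following Python sketch evaluates such a criterion by comparing an exact (Clopper-Pearson) upper confidence bound on the post-change defective rate against a target; the sample size, defect count, and 1% target are assumptions for illustration only.

```python
# Illustrative CAPA effectiveness check: exact upper confidence bound on the
# post-change defective rate compared to a pre-specified target rate.
from scipy.stats import beta

def upper_bound_defective(defects, n, conf=0.95):
    """Clopper-Pearson upper confidence bound on a defective proportion."""
    if defects >= n:
        return 1.0
    return beta.ppf(conf, defects + 1, n - defects)

# Hypothetical protocol: the CAPA is considered effective if the 95% upper
# bound on the defective rate is below 1%.
ub = upper_bound_defective(defects=0, n=300, conf=0.95)
print(ub, ub < 0.01)   # 0 defects in 300 samples -> upper bound of about 0.99%
```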

5.4 A CPV Protocol and Relation to Annual Product Review

Although CPV is a Stage 3 process validation activity, it should be kept in mind during Stage 1, when the process knowledge being collected will inform which control points should be monitored and incorporated into the CPV program. The parameters to be monitored under the CPV plan should be largely defined and understood prior to Stage 2 so that important data for generating the Stage 3 CPV limits can be gathered. The CPV protocol should be a living document throughout the first two stages of process validation. Once sufficient data are collected to reliably estimate the expected process variability, the CPV protocol should be updated and then reviewed at a frequency defined in procedures.

As knowledge of the process accumulates during development and validation, the CPV protocol is updated regarding the variables to be monitored, their sampling plan, monitoring chart type, and control limits.

A good CPV protocol should minimally include the following information:

  1. Product information.

  2. Personnel, roles, and responsibilities. A designated statistician or someone trained in statistical techniques should be involved throughout the product life cycle.

  3. A structured table of all monitored parameters categorized into CMaAs, CPPs, CQAs, and CMeAs, with the variables corresponding to each attribute. The table should also include the sampling plan, control chart type, and initial limits, and should specify which attributes are to be monitored with a particular frequency.

  4. A description of the process for periodic examination of the appropriateness of the limits and the method for adjusting limits based on updated process knowledge.

  5. Identification of the data warehouse and analysis software.

  6. All relevant data and knowledge (e.g., design space) accumulated from Stages 1 and 2, organized and included for determination of initial control limits.

  7. Description of planned analyses, including frequency of analyses, format of documentation, and result evaluation.

  8. An action plan to address aberrant results. Procedures should clearly define what kinds of aberrant results can be handled by designated personnel and what results require escalation to upper management.

  9. A plan for change management. Over the life cycle of the product, some aspects of the monitoring plan may need to be changed or updated due to an accumulation of experience and process knowledge, or in response to regulatory requirements.

The CPV protocol should align with the PPQ protocol created in Stage 2. The CPV protocol will specify a frequency for analysis of given parameters, but data should be assessed annually, at a minimum. Since an annual product review (APR) is required for several regulatory jurisdictions, coordinating the annual CPV reporting cycle with the APR cycle is most efficient from an analysis perspective. The CPV protocol should meet the minimum data analysis requirements for the APR. The APR is also a good time to evaluate the performance of the CPV protocol.

5.5 Statistical Support

The FDA guidance on validation defines process validation as “the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality products.” This definition characterizes process validation as a joint work between scientists and statisticians, and requires a full integration of statistical involvement throughout the process.

The word “statistical” or “statistics” appears 12 times in the guidance, which highlights the importance of quantitative data analysis methods in the CPV program. Regarding CPV at Stage 3, the guidance specifically emphasizes, “We recommend that a statistician or person with adequate training in statistical process control techniques develop the data collection plan and statistical methods and procedures used in measuring and evaluating process stability and process capability .” Additionally, it states “We recommend that the manufacturer use quantitative, statistical methods whenever appropriate and feasible.” We strongly recommend that adequate statistical resources are made available for process validation and that statisticians be an integral part of the team throughout all three stages of process validation.