1.1 Introduction

Continuous manufacturing has been receiving increasing attention in the pharmaceutical industry, driven by the expectation of achieving reduced operating and capital costs, improved product quality, and increased reliability (Lee et al. 2015). While this mode of manufacture is new to the pharmaceutical industry, it is widely practiced in many industry sectors, such as refining and petrochemical, bulk chemical, and food and minerals processing. It most commonly involves the processing of fluids, liquids or gases, although particulate and granular materials and suspensions are also handled. In these industries, the continuous manufacturing plant or line is usually dedicated to a specific product and is typically operated without interruption around the clock, with only infrequent shutdowns to perform maintenance or in case of emergency. A continuous manufacturing line is normally designed for a nominal production rate; while that rate can be reduced within a limited range, further reductions typically lead to unsatisfactory product outputs or damage to equipment. Generally, continuous manufacturing facilities enjoy economies of scale; that is, the investment and operating cost per unit of production decrease as the plant design capacity is increased.

The incentives for continuous manufacturing in the pharmaceutical industry are not the same in all aspects as they may be for the other industry sectors, and, thus, it is important to understand which elements of the continuous manufacturing mode are essential and which aspects are really introduced to adapt to the needs of a specific industry sector. Accordingly, in this chapter, we will briefly discuss the characteristics of continuous manufacturing at the conceptual level, first in its generic form, viewing the process as a unitary system, and then as a system composed of multiple manufacturing unit operations. Next, we will review the key requirements for implementing an effective continuous process and discuss some aspects that are specific to pharmaceutical applications. Lastly, we will conclude with a discussion of the advantages and limitations of continuous manufacturing and contrast those with the advantages and limitations of the batch operating mode, which has been the mainstay of the pharmaceutical industry.

1.2 General Characteristics of Continuous Processes

In this section, we will briefly review some of the basic concepts of relevance to the continuous mode of operating a pharmaceutical process. These concepts include: steady state, dynamic, and batch process; state of control; start-up and shutdown; nonconforming materials; batch/lot; residence time and residence time distribution; process time constant and gain.

Continuous manufacturing is a mode of operation in which the manufacturing system receives continuous inputs of mass and energy, transforms those inputs via a specific sequence of chemical and physical operations without interruption and produces continuous outputs of mass and energy. The ideal continuous plant is an open system that operates at steady state, that is, the input and output flows are constant over time and, thus, any accumulation of mass or energy in the system is likewise constant over time. By way of example, Fig. 1.1 depicts a continuous mixer in which components A and B are blended and the blend continuously withdrawn. To avoid overflow or depletion, the output flow must be chosen so as to keep the accumulation of material in the tank constant.

Fig. 1.1

Simple stirred tank mixing vessel
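
For the mixer of Fig. 1.1, the steady-state requirement can be stated as an overall mass balance: with constant holdup, the accumulation term vanishes and the outlet flow must equal the sum of the inlet flows. (The symbols M for the mass holdup and F for the mass flow rates are introduced here purely for illustration.)

$$ \frac{dM}{dt}={F}_A+{F}_B-{F}_{\mathrm{out}}=0\quad \Rightarrow \quad {F}_{\mathrm{out}}={F}_A+{F}_B $$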

Unfortunately, in practice no real system is truly at steady state; rather, input flows, external environmental conditions, and internal manufacturing parameters are continually subject to disturbances and thus fluctuate, in turn causing deviations in the output flows, compositions, and possibly other properties. The continuous system is thus inherently in a dynamic state. If the disturbances are sufficiently small, then the process itself may dampen them sufficiently so that the deviations in the outputs are acceptable. In general, one cannot rely on the process to exhibit such stability, and then the challenge is to equip the system with control strategies that will confine the fluctuations within limits such that product quality specifications are satisfied. If the process is operated so that all the important properties of the output streams can be confined within those acceptable limits, then the process is said to be in a state of control (CDER 2019). The process in Fig. 1.2 has disturbances in its input, but by virtue of a control logic the output is kept in a state of control. That control logic may be passive (a tank overflow outlet) or active (suitable manipulation of an outlet valve).

Fig. 1.2

Process in state of control

Any continuous process is thus inherently a dynamic process that is maintained in a state of control as a result of active intervention, typically by a suitably designed automation system. In general, a process can be in a state of control even if it is operated in a cyclic or periodic fashion, as long as the properties of the output are maintained within acceptable deviation limits around the nominal or "golden" periodic profile. With all continuous processes there are two situations in which large departures from a state of control can be expected: start-up and shutdown. During start-up, a continuous process is brought from an idle and empty state, with no inputs and outputs, to its nominal production rate in a controlled fashion. The output material generated during start-up normally does not meet quality specifications, is said to be nonconforming, and must be rejected. Likewise, during shutdown, the process is brought to a rest state with no inputs or outputs, and in general at least a portion of the output during that transition period will fail to meet quality specifications. Generally, the determination of start-up and shutdown strategies that are safe and that minimize the production of nonconforming material is an essential part of developing any continuous process control strategy.

By way of contrast, a pure batch operation involves charging the process unit with a specific amount of input, processing that input while the system is closed, and then removing all of the output material at some point in time. During the period of time when processing occurs, the operation is in a dynamic state, with changing conditions within the unit. If the operation is carried out with a fixed operating recipe and conditions, then any deviations in the input or any deviations from the operating recipe will translate to deviations in the output. In order to maintain quality specifications of the output, suitable changes in some of the recipe parameters must be undertaken. In practice, variations on the pure batch mode occur. For instance, a batch operation can be fed-batch, that is, during the course of processing additional input is provided to the system, or semi-batch, that is, during processing some output component is removed over time, or both. A typical example of the former is a batch reactor in which one of the reactants is charged to the reactor and the other is fed to the reactor at some flow rate that may change over time so as to keep the temperature rise within limits. An example of semi-batch is a batch centrifuge in which the cake is retained while the filtrate is removed as it is generated. One of the key differences between batch and continuous operations is that in the former, starts and stops of the operation of the unit and the material transfers occur at discrete points in time, while in the latter, all inputs, outputs, and processing occur continuously over time.

Another key difference between the batch and continuous modes is that in the batch case, the discrete amount of material produced inherently provides a convenient way of establishing an identity for the material produced. That discrete amount of material, called the batch, serves as a means of documenting and tracking material produced under FDA regulations. In the continuous case, the definition of batch or lot can be flexible and still satisfy the regulations. Thus, it can be defined in terms of a quantity of material processed or in terms of a period of production time, of flexible length, provided it covers a period during which conforming material was produced (CDER 2019).

By virtue of the fact that a batch operation is charged at a point in time, processing occurs over a defined period of time, and then all of the material is discharged at a fixed point in time, all of the material in the batch spends the same amount of time in the unit. That length of time is called the residence time. When the process is continuous, it is not necessarily the case that all material flowing through the process will actually have the same residence time. In the ideal case of continuous flow of fluid through a pipe of uniform diameter, if the flow is ideal, that is, it involves no wall friction and no mixing in the axial direction, then the residence time of every element of fluid is the same and will be equal to the length of the pipe divided by the fluid velocity, or L/u, as shown in Fig. 1.3.

Fig. 1.3

Residence time for ideal flow in pipe
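
As a simple worked example with illustrative numbers, for a pipe of length L = 5 m and a uniform fluid velocity u = 0.5 m/s, every element of fluid spends

$$ t=L/u=\frac{5\ \mathrm{m}}{0.5\ \mathrm{m}/\mathrm{s}}=10\ \mathrm{s} $$

in the pipe.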

However, in the more realistic case in which mixing in the axial direction does occur and wall friction creates a boundary layer, the velocity profile will take on a parabolic shape with its maximum value at the center line of the pipe. Some elements of the fluid (e.g., those along the pipe wall) will have a longer residence time in the pipe than others (e.g., those near the center line), as shown in Fig. 1.4.

Fig. 1.4

Residence time in real flow in pipe

This difference in residence time can be experimentally observed by conducting a tracer study that simply involves injecting at the inlet a small amount of a dye or other measurable component, which does not materially change the flow, and then measuring the concentration of dye at the exit over time. The residence time distribution (RTD) is simply a function of time, which indicates the fraction of fluid particles that experience a given residence time (Shinnar 1986). It can be computed from the dye concentration measurement normalized by the amount of dye injected, or

$$ \mathrm{RTD}=E(t)=C(t)/\underset{0}{\overset{\infty }{\int }}C(t)\,dt $$
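
As an illustration of how the RTD and its moments might be extracted from tracer data in practice, the following minimal sketch (with hypothetical measurement values and NumPy assumed available) normalizes a simulated exit concentration curve and computes the mean residence time and variance by numerical integration.

```python
import numpy as np

# Hypothetical tracer experiment: exit dye concentration C(t) sampled every 2 s
dt = 2.0
t = np.arange(0.0, 120.0, dt)                    # time, s
C = np.exp(-((t - 30.0) ** 2) / (2 * 8.0 ** 2))  # illustrative exit concentration pulse

# Normalize to obtain the residence time distribution E(t) = C(t) / integral of C dt
E = C / (C.sum() * dt)

# First and second moments: mean residence time and variance of the RTD
theta = (t * E).sum() * dt                       # mean residence time
sigma2 = (((t - theta) ** 2) * E).sum() * dt     # variance of the residence time

print(f"mean residence time ~ {theta:.1f} s, variance ~ {sigma2:.1f} s^2")
```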

A very common unit operation is that of a stirred tank with continuous input and output. If the tank is perfectly mixed, then every fluid element entering the tank will be instantly mixed and have an equal probability of leaving the tank. It can be shown that the residence time distribution function, E(t), for this ideal continuous operation is an exponential function parameterized by the mean residence time, which is the ratio of the vessel volume V to the (steady-state) volumetric flow rate into the vessel, q, or θ = V/q. Specifically, it is given by

$$ E(t)=\left(1/\theta \right)\exp \left(-t/\theta \right) $$
$$ E(0)=1/\theta $$
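
As a check on the ideal case, the mean of this exponential RTD is indeed the nominal residence time θ, and its variance is the square of that value:

$$ \underset{0}{\overset{\infty }{\int }}t\,E(t)\, dt=\underset{0}{\overset{\infty }{\int }}\frac{t}{\theta}\exp \left(-t/\theta \right) dt=\theta, \qquad \underset{0}{\overset{\infty }{\int }}{\left(t-\theta \right)}^2E(t)\, dt={\theta}^2 $$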

As shown in Fig. 1.5, in real stirred tank vessels there will be imperfections in the mixing performance, including a delay before an entering element of fluid reaches the main mixing zone, bypassing of fluid elements to the output, and stagnant zones where some fluid is held back; thus the RTD will be distorted and spread in shape. As a result, both the mean residence time and the variance of the residence time may be larger than in the ideal case. Moreover, in general the residence time distribution can also be affected by operating variables such as the impeller rpm.

Fig. 1.5

RTD of continuous stirred tank

The residence time distribution has implications in terms of the definition of a lot or batch since it makes the boundary between adjacent lots less distinct—the adjacent lots will share material with a similar history. Of course, this boundary effect, as a fraction of the material constituting the entire lot, diminishes with increasing lot size. The residence time also has implications with regard to tracking nonconforming material: if at the input to the unit a deviation in the material properties occurs that lies outside of the specifications, then that nonconforming material will appear at the outlet of the unit, delayed by the mean residence time. Thus, it is only from that time point on that the output material needs to be rejected. Moreover, when the input returns to within specification, the rejection of output material can be stopped after a time equal to the mean residence time has passed. Depending upon the degree of risk that is accepted by the organization, one may wish to be more conservative and define the beginning and end points of nonconforming material using the mean residence time adjusted by some multiple of the standard deviation of the residence time.
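
A minimal sketch of this diversion-window logic, assuming the deviation is observed at the unit inlet and the unit RTD is characterized only by its mean θ and standard deviation σ (all numerical values below are hypothetical):

```python
# Hypothetical RTD parameters of the unit (seconds)
theta = 120.0      # mean residence time
sigma = 15.0       # standard deviation of the residence time
k = 3.0            # risk multiplier chosen by the organization

# Times at which nonconforming material is first and last seen at the unit inlet
t_start_in = 600.0
t_end_in = 660.0

# Conservative window over which outlet material should be diverted/rejected
divert_start = t_start_in + theta - k * sigma
divert_end = t_end_in + theta + k * sigma
print(f"divert outlet material from t = {divert_start:.0f} s to t = {divert_end:.0f} s")
```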

While the residence time distribution gives a very valuable indication of how material will track through the process, it assumes that the process is at steady state or that it is operating in a state of control. It does not explicitly reflect how a disturbance in the flow or composition of the input streams, or a process parameter change within the process, will be transmitted to the outputs. This dynamic effect can be captured through the use of a dynamic model of the process and can be empirically observed and quantified through step response experiments. In particular, the simplest form of dynamic response of a process is a first-order response, which corresponds to a system whose dynamic model is linear and of first order. As shown in Fig. 1.6, the response of such a system to a step change in one of its inputs is represented by the classical exponential curve that is characterized by two parameters, the process time constant τ and the process gain K. As the process undergoes a step change in input of magnitude M, the dynamic lag of the process will cause a change in the output, which will only reach 0.632 of its final value in a period of time equal to one time constant.

Fig. 1.6

Output response of first-order process to step change in input
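
For reference, the response sketched in Fig. 1.6 can be written explicitly (y denotes the deviation of the output from its initial steady value; the symbol is introduced here only for illustration); setting t = τ shows where the 0.632 figure comes from:

$$ y(t)= KM\left(1-\exp \left(-t/\tau \right)\right),\qquad y\left(\tau \right)= KM\left(1-{e}^{-1}\right)\approx 0.632\, KM $$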

A typical process exhibiting first-order response is a well-mixed tank that is subjected to a change in composition or flow of one of its input streams and the exponential response is observed in the output. Moreover, the first-order model can be used as a reasonable approximation for any dynamic process undergoing small input changes. To capture the effects of larger deviations in inputs on a general process unit, the nonlinear effects will need to be suitably modeled and characterized. The dynamic characteristics of the continuous process serve as the basis for the design of active control strategies that can compensate for these characteristics in order to minimize the deviations in the output (Seborg et al. 2011).

1.3 Multiunit Continuous Processes

In general, a continuous process will consist of a sequence of unit operations linked by continuously flowing streams, for example, in the form of a fixed piping network. In this section, we will review some of the requirements and consequences of continuous operation of multiunit process trains. These include continuous material transfer between operations, the role of intermediate storage, limitations on process network structure, the impact on residence time distribution, and the implications of hybrid operations involving both batch and continuous subtrains. We will also note some comparisons to multi-operation batch campaigns.

First, the unit operations in a multiunit continuous process will span a wide range, including those typically used in the production of small molecule active ingredients: continuous stirred tanks, continuous crystallizers, continuous filters and centrifuges, liquid–liquid extraction units, and distillation columns, as well as those used in continuous production of dosage forms: loss-in-weight feeders, continuous powder blenders, roller compactors, mills, twin screw wet granulators, continuous dryers, and tablet presses. A key characteristic of continuous processing is that the transfer of materials between unit operations occurs continuously, without interruption. Continuous material transfer between units may be driven by various well-known means: gravity or pressure differences, pneumatic transport, pumps, or compressors. However, regardless of the driver selected, the requirement that continuous flow be maintained can introduce challenges when the stream being transferred is a viscous fluid, a particle blend, or a suspension. The properties of the material must be engineered to have reliable rheology, and the flow of the stream must be monitored to assure that flow is consistent—in a state of control.

Another important characteristic of continuous processing is that generally holding or intermediate storage of material between unit operations is undesirable for multiple reasons.

First, holding between process units raises the undesirable possibility of creating nonuniformities in the material being held—settling is the obvious example. Secondly, holding has an impact on the overall time constant of the process and thus causes responses to plant-level process control actions to be slower, which means corrections to deviations may be delayed. Moreover, holding also adds to the variance in the process residence time (as further elaborated later in this section), thus broadening the boundaries defining successive lots. Finally, holding creates delays in start-up and shutdown as hold tanks have to be filled or emptied. The one positive aspect of intermediate storage is that it will dampen the fluctuations or surges in the flows that are inputs to the hold tank and thus reduce the deviations that the unit downstream of the hold tank has to accommodate. This can be helpful to the control system since the reduced magnitude of flow deviations will reduce the frequency of, or need for, control action.

The network of connected continuous processing units may in general take the form of a network that involves a variety of processing paths, including both bypassing of some unit operations as well as feedback loops or recycles from downstream units to upstream, as shown in Fig. 1.7.

Fig. 1.7

Sulfuric acid process with bypass and recycle streams (Muller 2006)

In Fig. 1.7, the streams into the Interpass Tower Pump Tank are examples of bypass streams, while the split of the outlet from that unit that is returned to the dry tower pump tank is a recycle stream. Of course, there are multiple other streams in the flow sheet as well. By virtue of the need to be able to track lots through the manufacturing process, in pharmaceutical applications of continuous processing bypass and recycle streams generally have to be avoided. Both structural forms result in the mixing of materials that have seen different processing histories and, therefore, violate the requirement for material traceability. Hence, continuous manufacture in the pharmaceutical domain generally has to follow a serial flow sheet structure.

An important implication of having multiple connected continuous unit operations is that the mean residence time of the sequence of units is the sum of those of the individual units and the variances are additive as well. Specifically, it can be shown that given a residence time distribution for a unit, E(t), if that unit is subjected to a concentration wave represented by Cin(t) then the output concentration wave will be given by the convolution integral

$$ {C}_{\mathrm{out}}(t)=\underset{0}{\overset{t}{\int }}{C}_{\mathrm{in}}\left(t-\tau \right)E\left(\tau \right) d\tau $$

This mathematical operation can be represented graphically as shown in Fig. 1.8, where θ1 is the mean residence time of the unit operation with RTD E(t).

Fig. 1.8

Convolution of input and RTD of unit
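
The convolution can be evaluated numerically in a few lines. The sketch below, with purely illustrative values and NumPy assumed available, passes a rectangular inlet concentration pulse through the RTD of an ideal stirred tank and confirms that the centroid of the outlet wave lags that of the inlet wave by the mean residence time of the unit.

```python
import numpy as np

dt = 1.0
t = np.arange(0.0, 600.0, dt)

# Inlet concentration wave: a rectangular pulse (illustrative values)
C_in = np.where((t >= 50.0) & (t < 110.0), 1.0, 0.0)

# RTD of an ideal stirred tank with mean residence time theta1
theta1 = 60.0
E = (1.0 / theta1) * np.exp(-t / theta1)

# C_out(t) = integral of C_in(t - tau) * E(tau) d tau, as a discrete convolution
C_out = np.convolve(C_in, E)[: len(t)] * dt

# The centroid of the outlet wave lags that of the inlet wave by ~theta1
mean_in = (t * C_in).sum() / C_in.sum()
mean_out = (t * C_out).sum() / C_out.sum()
print(f"delay of outlet wave ~ {mean_out - mean_in:.1f} s (theta1 = {theta1} s)")
```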

By successive application of the convolution integral it can readily be confirmed that for a sequence of unit operations, each with its RTD as shown in Fig. 1.9, the cumulative effect is additive in mean and variance. For example, for three units in series, mean and variance are as follows:

$$ {\theta}_{\mathrm{out}}={\theta}_1+{\theta}_2+{\theta}_3+{\theta}_{\mathrm{in}} $$
$$ {\sigma}_{\mathrm{out}}^2={\sigma}_1^2+{\sigma}_2^2+{\sigma}_3^2+{\sigma}_{\mathrm{in}}^2 $$

Fig. 1.9

Effect on RTD’s of sequence of units

The net effect of a series of unit operations is a broadening of the residence time distribution; thus, nonconforming material that arises upstream will be distributed across a larger portion of conforming material. The consequence is that the total amount of material that must be rejected to meet quality specifications will be increased. The clear implication is that it is desirable to divert nonconforming material as close to the point of its first observation as possible. It should be noted that in the above analysis the delay time due to transfer of material between unit operations is neglected. Of course, the effects of material transfer operations can be readily incorporated by simply treating the transfer as a unit operation with its own RTD and mean residence time. Such corrections may well be appropriate for operations such as pneumatic transport where significant back mixing may occur.

By contrast, for a series of batch operations, the total time of the material processed will again be the sum of the residence times in each operation. Of course, that total residence time must be increased by the transfer times between operations (or even more so by any quality control delays), which in general can be quite significant. However, since the material is transferred in discrete amounts between unit operations, any requirement for segregation of nonconforming material is confined to the amount corresponding to the batch size. Thus, while the broadening of the RTD is not an issue for batch operations, rejection of nonconforming material necessarily requires rejection of the entire batch. In the continuous case, rejection only applies to the portion of the material that is actually observed to be nonconforming, corrected for the RTD effects noted above.

An additional contrasting feature of batch operations is that, since manufacturing operations normally take place in campaigns, the hold times, quality control (QC) times, or other delays between unit operations are not cumulative. Rather, since the batch campaign is characterized by the batch cycle time, it is the largest combined batch processing time at any unit operation that matters. This is illustrated in Fig. 1.10, which represents a lean operation in which operations are executed just in time. As shown, material is held in a drop tank after reaction for a QC check; material is transferred from the crystallizer to the centrifuge over time, and the cake from the centrifuge is transferred into an intermediate hold tank for transport and loading of the dryer.

Fig. 1.10

Batch Campaign Cycle Time

In some cases, it can prove advantageous to create manufacturing lines in which some subtrains consist of batch operations and other subtrains consist of continuous operations that, in general, will need to operate in a semi-continuous fashion. Integrated operation of such mixed, or hybrid, production lines requires the use of intermediate buffer storage. In general, the batch size of the batch subtrain will impose the batch size on the continuous subtrain, although the material produced in the continuous subtrain could be further subdivided into lots. Moreover, to retain batch identity, the intermediate storage should not mix multiple batches; rather, batches must be stored individually. It is of course possible to operate such hybrid lines in a fully integrated "lean" fashion in which the production rate of the continuous subtrain is matched with the average production rate of the batch subtrain. While such operation will minimize the time interval that intermediate material will need to be held in storage, the tight integration will require the use of predictive scheduling of these operations. A simple example of such a hybrid operation is illustrated in Fig. 1.11.

Fig. 1.11

Hybrid Operation

The conventional direct compression tablet production process, which consists of batch blending, manual transport of bins of the powder blend to the tablet press, tablet production, collection of tablets in a bin and finally tablet coating, is in fact a hybrid production line. Typically, blending and coating are batch operations while tableting is continuous. The batch size is typically defined by the blending stage.

1.4 Requirements for Effective Continuous Processing

The functions that are essential to carrying out continuous pharmaceutical manufacturing are process monitoring, deviation management, and materials tracking. These functions in turn are generally supported and executed using process analytical technologies, process control and intelligent alarm management systems, as well as process data and knowledge management systems. At a more advanced level, these basic functions can be further augmented with real-time release, real-time optimization capabilities, and operations management systems. In this section, we will briefly review these technologies at a conceptual level.

The most rudimentary capability that is required to operate a continuous manufacturing line is process monitoring, that is, the continuous assessment of the state of the line to determine whether or not it is in a state of control. This capability is essential because all real processes are continually subjected to disturbances, which cause deviations from desired product attributes and process variables and conditions, and clearly such deviations must be managed to assure product quality. In general, these disturbances can be of a random nature (often called common cause variations) or they can have a nonrandom component (often called special cause variations). An example of the former is deviations in the API composition of a powder blend, which arise due to the particulate nature of such blends. An example of the latter is fouling of an optical sensor due to the accumulation of process materials on the sensor lens. In Fig. 1.12, the process is undergoing random variations until the end of the time window, at which point the process variable, API content, undergoes a deviation much larger than the variations observed previously.

Fig. 1.12

Common and special cause variations
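
A minimal univariate illustration of the distinction shown in Fig. 1.12, using a simple 3-sigma (Shewhart-type) rule on a simulated API content signal; all numerical values are hypothetical and the rule is only one of the classical methods mentioned below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated API content (%): common cause noise around a 10% target,
# with a special cause shift introduced after sample 400
target, sigma = 10.0, 0.05
x = target + sigma * rng.standard_normal(500)
x[400:] += 0.4                                   # special cause step disturbance

# Control limits estimated from an in-control reference window
mu_hat = x[:200].mean()
s_hat = x[:200].std(ddof=1)
ucl, lcl = mu_hat + 3 * s_hat, mu_hat - 3 * s_hat

# Flag samples outside the 3-sigma limits as candidate special cause events
alarms = np.where((x > ucl) | (x < lcl))[0]
print(f"first alarm at sample {alarms[0]}" if alarms.size else "no alarms")
```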

Process Analytical Technology consists of in-line, at-line, and off-line measurement systems, which are used to provide the data for process monitoring. It includes actual sensors and their calibrations, as well as virtual sensors, that is, models that can predict values of unmeasured or unmeasurable material attributes or process variables from process parameters or variables that can be measured. Examples of in-line sensors include those based on the use of NIR, Raman, ultrasound, X-ray, microwave, and laser light scattering signals. At-line sensors normally involve technologies that require a longer period of time to arrive at a measurement relative to the dynamics of the process. For example, the time to conduct a 3D scan of a tablet to determine the API distribution in the tablet is generally much longer than the time to produce a tablet in the press. Typically, the manufacturing line will be instrumented with a network of sensors, sufficient so that there is some degree of redundancy if and when sensors fail. An example of such a network is shown in Fig. 1.13 for a continuous dry granulation line (Ganesh et al. 2018).

Fig. 1.13

Sensor network for continuous DG line (Ganesh et al. 2018)

The most rudimentary use of process monitoring capability is to detect deviations that require some form of intervention to assure product quality. Monitoring methods include classical univariate statistical quality control methods, multivariate statistical process control (MSPC) methods, and data reconciliation (DR) and gross error detection methods. The classical methods serve to track individual attributes or variables and to subject the time series of measurements to statistical tests to establish whether or not actionable deviations have occurred. MSPC methods are used to efficiently track multiple variables simultaneously and are particularly relevant when those variables have significant correlation. DR methods use all of the real-time measurement data, along with prior information about the statistically characterized error in those measurements and a model of the process, to predict the most likely state of the process. Appropriate statistical tests are also used to detect gross errors, that is, to identify measurement data that are outliers and thus indicators that a special cause variation has occurred (Moreno et al. 2018).

Common cause variations can lead to quality deviations if not corrected in real time. Generally, the correction is provided through suitably designed and tuned process control systems that employ feedback, feedforward, or multivariable control strategies that are based on the use of predictive process models. An example of a classical single-input/single-output feedback control structure is shown in Fig. 1.14. The difference, or error, between the actual measured value of a controlled variable and the target or set point of that variable is used in the control system logic to determine a correction or control action that will change the value of a suitable manipulated variable. The simplest possible logic might be to multiply the error by a proportionality constant to determine the magnitude by which the manipulated variable should be adjusted. However, depending on the dynamics of the process and the performance required, more complex controller designs may need to be employed (Seborg et al. 2011).

Fig. 1.14

Single-input/single-output feedback control
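
The sketch below illustrates the single-loop feedback idea of Fig. 1.14 on a simulated first-order process, using a simple proportional-integral law; the gains, time constant, and sampling interval are hypothetical and the controller is not tuned for any particular unit operation.

```python
# Hypothetical first-order process: tau * dy/dt = -y + K * u + d
K, tau = 2.0, 50.0           # process gain and time constant (s)
dt = 1.0                     # controller sampling interval (s)
Kc, Ki = 0.8, 0.05           # proportional and integral gains (illustrative)

setpoint = 1.0
y, u, integral = 0.0, 0.0, 0.0

for k in range(600):
    d = 0.3 if k > 300 else 0.0        # step disturbance entering the process
    # Process update (explicit Euler integration of the first-order model)
    y += dt / tau * (-y + K * u + d)
    # Feedback law: the error drives proportional plus integral action
    error = setpoint - y
    integral += error * dt
    u = Kc * error + Ki * integral

print(f"final output ~ {y:.3f} (setpoint = {setpoint})")
```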

Moreover, the control system may be structured at multiple levels, for instance, at the unit operation level and at the plant-wide level (Su et al. 2019). Unit level control serves to maintain the operation of an individual unit operation at a desired set point, for example, the control of the density of the ribbon produced by a roller compactor by manipulation of roll pressure. Plant-wide level control serves to control important plant operating variables, for example, holding the production rate of the plant at a desired set point by manipulating the flow rates of input materials to the plant. Process control systems are normally implemented to function automatically, without operator intervention. However, disturbances that are nonrandom cannot, in general, be handled by conventional process control systems. Such disturbances or faults could be caused by sensor or equipment degradation over time or by outright failure of sensors, equipment, the control system, or the communication system. In general, process faults such as these require active intervention by the operator, although some degree of automation is possible. In order for the operator to be effective in intervening and returning the process to normal operation, it is important to provide the operator with as much help as possible in identifying or diagnosing the likely cause of the fault, as well as guidance on actions to take to mitigate or correct the fault so as to avoid process shutdown.

In the continuous industries, the process condition model, represented in Fig. 1.15 for a typical process variable, is used to capture the way in which intelligent operator advisory systems should be structured. In this model, the target operating condition constitutes the optimal operating condition, while the normal operating condition (NOC) constitutes acceptable operation. The process is in a state of control if it is in either the target or NOC region. The Upset condition indicates that the process is not under control and, thus, it is highly likely that the product being produced does not meet specifications. Finally, the Emergency Shut Down condition indicates that the process is operating in an unsafe region, with the potential of equipment damage, and, of course, producing a nonconforming product.

Fig. 1.15

Process condition model

The International Society for Automation has issued models and guidance on the design of intelligent operator advisory systems (ISA-18.2, 2009). Ideally, such systems should have some form of diagnosis methodology embedded that helps the operator determine what the likely cause of the fault is. There exists a large body of fault-diagnosis methods that provide the bridge between fault detection and fault diagnosis, that is, the determination of the cause of the observed disturbance (Venkatasubramanian et al. 2003). However, this methodology only applies to faults that have been previously observed, so that both the sensor signature of the fault and the mitigation action are known and have been recorded. Given such a fault library, faults can be diagnosed by looking for a match of the observed sensor signature against those of the library of previously encountered faults. Of course, when a previously unobserved fault is encountered, such advisory is not possible. In general, the integration of process control and intelligent alarm management constitutes the core of the deviation management capabilities of a continuous manufacturing line.
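
A toy sketch of the library-matching idea described above: each previously encountered fault is stored as a normalized sensor-deviation signature, and a new deviation is diagnosed by finding the closest match. The fault names, signatures, and similarity threshold are all invented for illustration.

```python
import numpy as np

# Hypothetical fault library: sensor-deviation signatures recorded for known faults
fault_library = {
    "NIR probe fouling":           np.array([0.9, 0.1, 0.0, 0.0]),
    "feeder flow drift":           np.array([0.1, 0.8, 0.4, 0.1]),
    "roll pressure actuator bias": np.array([0.0, 0.2, 0.9, 0.3]),
}

def diagnose(observed, library, threshold=0.8):
    """Return the best-matching known fault, or None if nothing matches well."""
    obs = observed / np.linalg.norm(observed)
    best, best_score = None, 0.0
    for name, signature in library.items():
        score = float(obs @ (signature / np.linalg.norm(signature)))  # cosine similarity
        if score > best_score:
            best, best_score = name, score
    return (best, best_score) if best_score >= threshold else (None, best_score)

print(diagnose(np.array([0.05, 0.75, 0.45, 0.15]), fault_library))
```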

While deviation management is focused on keeping the process in a state of control, there remains the issue of how to deal with the consequences of these deviations, namely, the nonconforming material generated during these events. This material will be produced during the occurrence and mitigation of process faults as well as during start-up and shutdown procedures. A reliable system for tracking nonconforming materials and segregating them so as not to contaminate materials that do meet critical quality attributes (CQAs) is an important element of the pharmaceutical quality system of a continuous process (Lee et al. 2015). Once the statistical testing methodology of the PAT system detects a fault, the affected material must be traced as it progresses through the unit operations downstream of the point at which the deviation occurs. The time at which the process is returned to a state of control marks the end of nonconforming material production, and thus the material associated with this transition time must also be tracked. The tracking logic must take into account the RTD of that process subtrain, as well as the process dynamics associated with the transition from the conforming to the nonconforming state and with the return of the process to a state of control. Appropriate in-line sensor data can also be integrated into the tracking strategy to confirm the tracking predictions, which can be based on the use of dynamic models or on estimates based on residence time parameters and process time constants. The tracking will continue downstream until a selected point is reached at which the nonconforming flow can be diverted and removed from the process. The simplest approach is to carry out the diversion at the end of the manufacturing line, for example, to divert the tablets produced from the nonconforming materials to a reject bin. However, given the cumulative effects of the residence times of successive unit operations, diversion as close as possible to the point of first detection is desirable, as it will minimize the amount of material that must be rejected. This may, however, require shutdown or slow-down of the portion of the line downstream of the diversion point, a potentially complex process that must take into consideration the associated dynamics and control issues. While the tracking and segregation procedures do introduce complexities, the benefit is that the amount of material that is rejected will be much smaller than with the rejection of an entire batch.

The process monitoring, deviation management, and materials tracking functions described above depend critically on data, models, and knowledge of considerable extent and diversity. Moreover, they rely on platforms for systems integration and implementation. The data sources include in-line process sensors and instrumentation integrated within the equipment, which capture in real time the properties of the streams in the process, equipment status, and process operating data, as well as data from at-line sensors, off-line primary test methods, and supporting laboratories. Much of this data is recorded and stored in the process historian associated with the distributed control system that manages the process control functions. Other information is retained in complementary data repositories maintained by supervisory control systems. The deviation management systems may be implemented in a software system for configuring and executing multiple operations management decisions, which include nonconforming material tracking as well as predictive, condition-based maintenance of all of the manufacturing resources: the process instrumentation, the equipment, the control systems, and the models supporting these functions. Comprehensive systems for integrating all of these functionalities are integral to the current wave of Industry 4.0 developments sweeping the manufacturing industries. Moreover, the decades of learning in automation accumulated in the chemical processing industries have been captured in multiple ISA standards, as shown in Fig. 1.16. By following these standards, pharmaceutical manufacturing can leapfrog the old generation of process data management tools and structures with state-of-the-art implementations using new tools and architectures.

Fig. 1.16

Systems integration functions and standards

Whether at the unit operations level or at the plant-wide level, the key decision variables of the process control system of the continuous production line are the controller set points. As the operation of the process is impacted by changes, such as those in the properties of feed materials, a gradual decline in equipment performance, the occurrence of abnormal events, or production of noncompliant materials, the current set points may need to be changed. The new set points should of course be consistent and lead to feasible operation and, if a range of choices is feasible, be optimal with respect to some appropriate performance criterion. The new set points may be used for only a short period or for an extended period of time, depending upon the cause for the change. In the continuous processing domain, the new set of set points can be determined by solving a real-time optimization (RTO) problem. Typically, the RTO problem is posed using a mathematical model of the manufacturing line, either steady state or dynamic, depending upon the situation, and a suitable objective or performance function, such as minimization of the production of noncompliant materials. If the expected changes in set points are relatively small, perhaps due to gradual changes in unit operation performance, a steady-state model may suffice, as the RTO will be carried out as needed to maintain the process at an efficient state. However, if the changes are large, such as may be the case when a step change in the desired plant production rate is to be initiated, then a dynamic model may be appropriate. In general, because the RTO problem has to be solved in a time period commensurate with the dynamics of the process, the process model will need to consist of reduced order models of the set of unit operations. One of the typical applications of RTO is to determine the optimal sequence of steps or trajectory that should be followed in starting up and shutting down a continuous production line. Typically, the determination of the start-up and shutdown trajectory is part of the overall control strategy of a continuous manufacturing line (Ierapetritou et al. 2016).
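
As a schematic of the RTO calculation, the sketch below uses a toy steady-state model of a line and scipy.optimize.minimize to select new set points that hold a desired production rate while penalizing a simple quality-deviation term; the model, variables, bounds, and objective are entirely illustrative and not drawn from any specific process.

```python
import numpy as np
from scipy.optimize import minimize

target_rate = 20.0   # desired production rate, kg/h (illustrative)

def objective(sp):
    """Deviation from target production rate plus a toy quality-deviation penalty."""
    feed_rate, blender_rpm = sp
    # Toy steady-state model: predicted production rate and quality deviation
    rate = 0.95 * feed_rate
    quality_dev = 0.02 * (blender_rpm - 250.0) ** 2 + 0.5 * (feed_rate - 21.0) ** 2
    return (rate - target_rate) ** 2 + quality_dev

# Bounds reflect the feasible operating window of the (hypothetical) equipment
result = minimize(objective, x0=np.array([20.0, 200.0]),
                  bounds=[(15.0, 25.0), (100.0, 400.0)])
print("new set points (feed rate, blender rpm):", np.round(result.x, 2))
```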

The release of a manufactured lot for distribution to the market constitutes an important decision point, which is designed to assure the customer (via the FDA) that the quality of the drug product is acceptably good and the producer that the manufacturing process is operating with acceptable process capability. It requires the execution of quality control tests to determine that CQAs are met within acceptable limits. In the case of tablets, the tests include standardized measurements, such as API content, weight, hardness, physical dimensions, and dissolution behavior, carried out on a specified number of tablets. Several of the CQAs essential to the release decision are monitored and controlled in the continuous process. However, testing for other CQAs requires longer duration or is destructive, and thus either completed batches must be held for QC action or equivalent tests must be implemented that would allow batches to be released immediately upon leaving the manufacturing line. Real-Time Release Testing (RTRT) is the term adopted to reflect that the objective is to implement the verification of the CQA as part of real-time monitoring (CDER 2019). One of the ways to implement RTRT of a CQA of the finished drug product that is not measurable in real time is to use a soft sensor or surrogate model that relates the CQA of interest to other directly measured properties. In general, the model will take the form of a semiempirical relation or other reduced order model. The soft sensor employed in RTRT has to be systematically validated against reference measurements of the CQA of interest using at-line measurements or off-line tests. Although at present lot release of continuously manufactured products follows the traditional off-line QC methodology, the development and evaluation of alternative RTRT methodology for continuous pharmaceutical products is the subject of ongoing research. A related issue is the development of release criteria that take into account statistically sound measures of process capability, based on previous batches of that product produced on that line along with real-time data from the current batch, to arrive at estimates of the probability that all dosages of the batch to be released meet CQA requirements.
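
A minimal illustration of a data-driven soft sensor of the kind that might be used in RTRT: a linear surrogate is fitted off-line between in-process measurements and a slow or destructive CQA measurement and then applied in real time. The variables and data below are synthetic, and the linear form is only one of many possible reduced order models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: in-process measurements (NIR-predicted API content,
# tablet weight, compaction force) versus an off-line CQA (e.g., dissolution at 30 min)
n = 200
X = np.column_stack([
    10.0 + 0.1 * rng.standard_normal(n),   # API content, %
    200.0 + 2.0 * rng.standard_normal(n),  # tablet weight, mg
    12.0 + 0.5 * rng.standard_normal(n),   # compaction force, kN
])
y = 80.0 + 5.0 * (X[:, 0] - 10.0) - 1.5 * (X[:, 2] - 12.0) + 0.5 * rng.standard_normal(n)

# Fit the surrogate (ordinary least squares with an intercept term)
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Real-time use: predict the CQA from a newly produced tablet's measurements
x_new = np.array([1.0, 10.05, 201.0, 12.3])
print(f"predicted dissolution ~ {x_new @ coef:.1f} %")
```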

1.5 Comparative Assessment of Batch and Continuous Operating Modes

Traditional batch processing has served the industry for many years because of the flexibility it offers by virtue of using well-known multipurpose equipment that can accommodate a range of products and product recipes. A batch recipe can be developed and implemented empirically, supported by design of experiments, with relatively limited detailed understanding of the chemical and physical phenomena. Once a recipe is found to be satisfactory, it can be repeated to produce batch after batch, using learning by doing, again with limited detailed fundamental understanding. Since processing occurs in discrete amounts of material (the batch size), material can be tracked readily from unit operation to unit operation to finished product, and batches can be released based on well-established procedures. Moreover, traditional batch unit operations lend themselves to processing steps that require long residence times and involve multiphase phenomena.

However, batch processing has significant disadvantages. By their very nature, batch processes are not only dynamic but also cover a wide range of dynamic conditions from the start of the operation to its termination. The batch of material produced is the result of this entire range of conditions and thus the end state is very much path dependent. This makes the result of the operation very much affected by process, environmental and human factors. The human contribution to batch variability includes variation in the initial amount of material charged and variations in the timing of particular actions, including termination of the batch. Furthermore, by virtue of the discrete nature of the processing with starts and stops to fill, empty and transfer materials, equipment does spend quite a bit of time in nonproductive use. This translates to poor utilization of capital resources. Moreover, the scale-up from small batch size bench equipment to pilot plant size and eventually to manufacturing-scale equipment can be problematic, often requiring modification of the recipe to ensure that the same critical quality attribute targets are met. Similar scale-up challenges can arise when transferring recipes from one manufacturing facility to another.

By contrast, continuous manufacturing by its very nature offers high equipment utilization due to the elimination of idle times between operations as well as the time associated with transferring material between successive operations. High equipment utilization translates to smaller equipment capacity or footprint to achieve the same output per unit time. The smaller equipment size, and the possibility of increasing production by simply running longer, significantly reduces concerns with scale-up. The elimination of the need to transfer and hold material between operations also reduces work-in-process inventory as well as material losses. Another important advantage is reduced product variability as a result of less material handling and other intervention by humans. While continuous manufacturing does require the use of on-line measurement and process control, the on-line instruments see a more limited range of conditions, essentially only variations around nominal operating conditions; thus the required instrument range is reduced and calibration is simplified. Process control likewise involves control around a set point rather than tracking and controlling over an entire dynamic profile. Finally, for specific operations such as exothermic reactions, flow reactors provide much higher heat transfer surface per unit volume and thus much better temperature control and, as a result, safer operation. Flow through continuous units also generally provides improved micro-mixing.

However, continuous operations do require more detailed process understanding, especially of reaction rates and transport processes, in order to properly design equipment. Of course, continuous operation requires real-time measurement and control systems and, as we saw in the previous sections, methodologies for deviation management and material tracking that add complexity. In addition, because of longer operating runs, continuously operated equipment can encounter gradual fouling or degradation of performance over time, which may require special real-time strategies to avoid the need for unplanned shutdown. In general, continuous processes have less flexibility in accommodating different products because different rate phenomena directly impact equipment design capacity. However, in recent years there have been significant efforts to develop small-scale modularized designs of continuous processing equipment that can facilitate flexible assembly of continuous lines from an inventory of standardized components. While the limitations of dealing with long residence times and multiphase systems remain, significant progress is being made to address them.

Given the relative strengths and weaknesses of these two operating modes, it can be expected that they will not only continue to coexist within the industry, and even within the same manufacturing organization, but will also be combined to create hybrid configurations of process subtrains that effectively exploit their relative strengths. However, there remains much to be done to increase the penetration of continuous manufacturing into the pharmaceutical sector (Ierapetritou et al. 2016). This book aims to promote and accelerate this trend.