
2.1 Introduction and Overview

Operations research (OR) is the world’s most important invisible profession! OR is everywhere and yet not seen. It is in logistics, health care, urban systems, retailing, energy, mining, entertainment, and much more. OR is concerned with conceptualizing and implementing mathematical models to help solve planning and operating problems arising in the public and private sectors. These models are used to provide decision makers with better insights into their problems and to assist them in selecting the most effective courses of action. In a nutshell, OR uses science and mathematics to provide insight to decision makers, hopefully leading to better decisions. Here, a decision is an allocation of resources.

Homeland security gave birth to OR as we know it today, and thus the OR–homeland security connection is a most appropriate focus for this chapter. OR was born in Great Britain and in the USA at the beginning of World War II, pioneered by the physicists P.M.S. Blackett in the UK and Philip M. Morse in the USA. The resources to be allocated using OR methods were the men and materiel of the war effort (Morse and Kimball 2003). There were numerous successes, ranging from the optimal deployment of the then new, scarce, and expensive radar stations in England to optimal search strategies for destroyers hunting enemy submarines in the North Atlantic. B.O. Koopman's work on search theory was so important that it was classified Top Secret and published in the open literature only in the late 1950s, fully 15 years after its creation (Koopman 1957).

In this chapter we first review briefly some aspects of OR as applied to current definitions of homeland security. In our view, homeland security requires diligent planning for and responding to low probability, high consequence (LPHC) events—such as severe acts of Mother Nature, damaging human-caused accidents and terrorist attacks. Much has been written about OR as applied to homeland security, and we provide some guidance to the literature and some illustrative takeaways. Second, we go into detail applying OR to the planning for and responding to a major LPHC event—pandemic influenza. This work has been carried out over the last 5 years by the authors and our associates at MIT and at the Harvard School of Public Health.

The initial motivation was the threat that the highly lethal H5N1 flu virus, often called "bird flu," would mutate and become highly transmissible from human to human. Leaders in the international health community feared a repeat of the infamous 1918 "Spanish Flu" pandemic, which killed about 50,000,000 people worldwide in a matter of months. The arrival of the H1N1 pandemic virus in 2009 provided a "dress rehearsal" for such a future killer contagion. The matter is even more urgent now, as it is feared that the H5N1 virus could be used as a bioterrorism weapon of mass destruction. This possibility has been brought more sharply into focus now that scientists have discovered sequences of mutations that make H5N1 highly transmissible while retaining its high lethality (Greenfieldboyce 2011).

2.2 OR-Informed Decisions for Homeland Security

We focus on LPHC events and decisions to be made. Decisions come in three flavors: strategic, tactical, and operational. The words are taken from the military roots of OR. They basically relate to the time frames involved: long-range (strategic), medium range (tactical), and real time (operational). But the issues are generic to all types of decision situations. The coach of a football team shares the same three decision flavors:

  • Strategic: How should I build my team by player drafts and trades to have the most potent team possible, subject to all sorts of constraints such as budgets?

  • Tactical: On a week-to-week basis, which players should I field on game day and which ones should I place off-roster?

  • Operational: It is fourth down and one yard to go on our opponent’s 48-yard line. Do I punt the ball to the other team or try to make a first down, recognizing that if we fail, we give the opposing team terrific field position?

OR professionals have a rich set of tools in their toolbox to deal with these types of decisions for football coaches such as Bill Belichick of the New England Patriots (Leonhardt 2004) and for the professionals who have to plan for and respond to LPHC events.

Here is one possible set of decisions related to planning for and responding to LPHC events:

  • Strategic: How many resources should I have that can be devoted to a given type of LPHC event in my state? Think hurricane, earthquake, blizzard, flood, major industrial accident, or terrorist attack. And what additional resources can I call on if my own resources become overwhelmed?

  • Tactical: How frequently should I plan for disaster drills, using not only my own resources but neighboring resources under a mutual aid agreement?

  • Operational: A magnitude 7.4 earthquake centered a few miles south of San Francisco has just devastated parts of the city. Right now, where should I place my limited rescue resources, how do I triage for injuries, and what do I communicate to the public?

Not every aspect of these decisions is guided by OR modeling and analysis. But a significant fraction of them can be. Others are guided by experience, craft knowledge, and common sense.

As one illustrative application of OR problem-framing tools, let us examine queueing theory. A queue is a waiting line, and in a disaster, queues will exist in many places. A queue appears whenever arriving "customers" require more service than there is capacity to provide. In supermarkets and coffee shops, queues may be annoying and delay you for a few minutes of your life, but in LPHC disasters they may be long and life threatening. Almost invariably, an LPHC disaster will generate far more service requirements than there is locally available capacity to handle. Managing these queues effectively is vitally important and requires intelligent triage: prioritizing the queue, separating people requiring emergency service into different priority levels, and treating them accordingly. Sometimes the decisions are difficult and tragic, as they were at Pearl Harbor on December 7, 1941, when, to ease pain, nurses provided morphine to critically injured servicemen, whose foreheads were then marked in lipstick with an "M," "C," or "D," for "morphine," "critical," or "deceased," respectively. Improvisation is often required. Rapid assessment of the situation is vital to selecting and implementing an intelligent triage policy and then administering the best possible service to the sick and wounded. Queueing theory, studied and internalized by planners ahead of time, can lay bare the essentials of service capacity, anticipated delays in queue as a function of the number of "customers," the consequences of alternative prioritization or triage strategies, the number of required servers, and more. Walt Disney World employs 15–20 OR analysts to study and "optimize" the queues in its theme parks. Planners for LPHC events must do the same, for situations far more critical than amusement park ride delays.

Queue delays grow highly nonlinearly with the number of "customers" and in fact can explode, resulting in intolerably long queues. Even queues operating in non-emergency situations can become large due to variability in the system, i.e., unscheduled customer arrival times and widely varying service requirements (service times). Anyone planning for disasters should have a staff member study the essentials of queueing theory to help create a feasible and effective response plan that treats the most critical "customers" first; the sketch below illustrates how sharply delays grow as capacity saturates. The plan must also include procedures for bringing in additional queue "servers," often via local and regional mutual aid agreements. As teaching guides, accessible applications of queueing theory to day-to-day public safety systems are readily available (Larson 1972).
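Even the simplest queueing model makes this nonlinearity vivid. Here is a minimal sketch (Python), assuming the textbook M/M/1 queue, i.e., Poisson arrivals and a single exponential server; the rates are illustrative, not drawn from any specific disaster plan:

```python
# Minimal sketch: nonlinear growth of delay in an M/M/1 queue.
# Mean time in system: W = 1 / (mu - lam), for arrival rate lam < service
# rate mu. All rates below are illustrative.

def mean_time_in_system(lam: float, mu: float) -> float:
    """Mean time a customer spends in an M/M/1 system (requires lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (mu - lam)

mu = 10.0                      # capacity: 10 "customers" served per hour
for lam in (5.0, 8.0, 9.0, 9.5, 9.9):
    w = mean_time_in_system(lam, mu)
    print(f"utilization {lam / mu:4.0%}: mean time in system {w:5.2f} h")
# Delay grows from 0.20 h at 50% utilization to 10 h at 99%: a modest rise
# in demand, or loss of servers in a disaster, can make delays explode.
```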

Queueing theory is but one tool in the OR toolbox. There are many others, including optimization methods such as linear programming and dynamic programming, transportation network analysis, and decision theory. Most of these tools have relevance to planning for and responding to LPHC events.

For those who wish to explore further the general application of OR to LPHC events, we suggest some publications previously written by one or more of us (usually with other coauthors); each contains many further references relevant to LPHC events. The first is "Disasters: Lessons from the Past 105 Years" (Eshghi and Larson 2008). This overview paper shows an alarming reported increase in the frequency of disasters over the most recent century-plus, some of the increase due to improved monitoring technologies (e.g., for earthquakes) and some due to population growth in disaster-prone places. The second is "Responding to Emergencies: Lessons Learned and the Need for Analysis" (Larson et al. 2006). This paper contains historical reviews of five major emergencies: the Oklahoma City bombing (1995), the crash of United Airlines Flight 232 (1989), the sarin attack in the Tokyo subway (1995), Hurricane Floyd (1999), and Hurricane Charley (2004). The paper draws lessons from these five LPHC events and outlines the additional OR research needed to cope better with similar disasters in the future. The third is "Decision Models for Emergency Response Planning" (Larson 2005). This book chapter covers OR as it applies not only to response to major disasters, including terrorism, but also to routine public safety operations (police, fire, and ambulance) and to dangerous operations such as transporting hazardous materials.

2.3 Pandemic Flu: Background

Those who have heard of OR but have not studied it may think that the only applications of OR to pandemic flu are strictly operational. A popular idea, for instance, is to apply queueing theory to vaccine dispensing in a public clinic, with the goal of dispensing the vaccine quickly and fairly. While this certainly can and should be done, it represents only a tiny fraction of the myriad decision problems that arise when planning for and responding to pandemic influenza. And many, perhaps most, of these decision problems can be better informed by OR problem framing and analysis.

In considering vaccines to immunize people against a particular flu strain, we expand our OR problem framing beyond the public clinic offering the vaccine all the way back to the origins of the new flu virus, the creation of a new vaccine, and its distribution and administration throughout the country. This process involves designing a new product, manufacturing it in sufficient quantities to satisfy customer demand, transporting it to regional distribution centers, deploying it to local centers (such as flu clinics), and finally delivering it to customers (susceptible members of the population). This is a classic OR supply chain problem, and it requires analysis of the entire system, not just the queueing at the end of the process. Queueing delays may be driven nearly to zero in the clinics, but that is of no consequence to citizens susceptible to the flu if the vaccine does not arrive in time to help them avoid infection.

Vaccines traditionally have been considered the most effective societal intervention to mitigate the impact of an epidemic of infectious disease such as influenza. After the initial cases are reported, and when the specific causative strain has not been anticipated, a period of up to 6 months will elapse before immunization can be expected to protect members of the population from contracting the disease. During this time, the infectious agent must be characterized, and the vaccine must be formulated, tested, manufactured, and distributed. Additional days are needed before the persons to whom the vaccine is administered develop immunity. Scientific and technological advances will someday reduce these time lags. Until then, or unless an efficacious "universal" flu vaccine is developed, the elapsed time between first cases and vaccine availability is, unfortunately, unavoidable.

Our flu case study contains two principal themes. The first is to examine the experience of vaccine allocation and distribution in the USA during the 2009 H1N1 influenza pandemic: we examine whether vaccine was administered in time to materially affect the course of the disease outbreak. The second is to use OR approaches to frame the question of whether there is a better way to allocate and distribute vaccine on the basis of the "dynamics" of the disease, which might allow substantial quantities of vaccine to be sent to geographical regions where they could be expected to offer the greatest benefit to the population.

2.4 Vaccine Allocation and Distribution During the 2009 H1N1 Pandemic

First, let us look at how vaccine was actually distributed in the recent influenza outbreak, in relation to the outbreak of cases of illness. We compared the statewide case incidence of influenza over time to two sources of vaccine distribution data. In the USA, all pandemic vaccine was allocated by one governing body, the CDC. Our first set of data is the vaccine shipment data, which track the number of vaccine doses shipped to each state over time. This information was provided on a weekly basis by the CDC during the initial vaccination period (Centers for Disease Control and Prevention 2010); we obtained these data for all 50 states. The second source provided data on vaccine actually administered: each health-care provider was required to report quantities administered to state health authorities before being given additional vaccine. These federally mandated vaccine administration tracking data were aggregated on a weekly basis and forwarded to the CDC (Centers for Disease Control and Prevention 2012). We obtained this information from individual state health departments for 11 states (Finkelstein et al. 2011a).

Accounting for an 8–10 day delay between receiving the vaccine and developing immunity (Washington State Department of Health 2010), and observing in our data a delay of at least a week between vaccine shipment and vaccinations into people's arms, some 2 weeks likely would elapse from the first week of vaccine shipment (week of October 10, MMWR Week 40) before the first vaccinated members of a state's population would be protected from contracting the illness. We observed that in 24 of the 50 states the epidemic had already begun to decline before any individuals receiving the vaccine would likely have been protected. Further, among the 11 states for which we were able to obtain data on vaccines actually administered, no more than 2% of the population received vaccinations before the outbreak had peaked. Consider Fig. 2.1, where we illustrate the epidemic curve of the USA as a whole together with those of two states with very different timing of their H1N1 epidemics.

Fig. 2.1 Comparison of USA, Georgia, and Maine epidemic curves

Note that while Maine received its vaccines in time to prepare for the oncoming epidemic, Georgia received its vaccines well after the peak. Figure 2.2 summarizes the “early/late” situation for all 50 US states.

Fig. 2.2 Timing of shipment of first vaccine with respect to the peak of the infection in US states

After an outbreak of contagious illness has peaked, it is, of course, still possible to contract it, but much less likely, so vaccinating population members would be expected to continue to offer some, albeit declining, benefit. The decline in the number of new cases signals that "herd immunity" has been achieved, after which the risk of disease transmission declines rapidly.

Both the disease occurrence data and the vaccine availability data are subject to limitations. In the USA, the data collection channel from individual health-care providers and institutions through state health departments to the CDC suffers from chronic underreporting. On the other hand, some speculate that the "worried well," prompted by media coverage, report to hospital emergency rooms, leading to overestimates of true influenza-like illness (ILI) incidence. Despite these possible complications, we believe the available data support the broader observation that relatively few individuals had the opportunity to be vaccine-protected until after the risks of acquiring the flu from others had dramatically declined.

2.5 Steps to Take Prior to Arrival of the Vaccine

The data we present characterize a situation in which vaccine was not available until well after influenza incidence peaked and began to decline in half of the states. In the absence of faster and more reliable vaccine production methods, it is likely that such a situation will occur again. Our results reinforce the question: what can public health officials do to reduce the risk of contracting flu before vaccine is available? What can individuals and families do?

A sensible approach, prior to arrival of vaccine, would be to focus on Non-pharmaceutical Interventions (NPIs), especially diligent hygienic behavior and reduction of human-to-human contacts. Until scientific and technological advances serve to reduce significantly the time needed to develop, test, produce, and administer vaccine, NPIs may be the only viable recourse to protect the millions of at-risk individuals.

To understand the potentially very positive effects of NPIs, we need to discuss briefly an equation that relates human behavior to disease spread. As anyone who has seen the film Contagion (Contagion 2011) knows, the fundamental parameter of disease progression is R₀, where

R₀ = the mean number of new infections caused by a newly infected person at the start of the epidemic, when nearly everyone is susceptible to the disease.

For an epidemic to occur, we must have R₀ > 1.0. That is, each newly infected person will, on average, infect more than one additional person at the start of the epidemic, thereby creating exponential growth in the number infected. If R₀ < 1, there is no exponential growth and no epidemic; the disease simply dies away.

Here is the good news: while R₀ does depend on characteristics of the particular flu virus, it also depends strongly on the behavior of humans. Close proximity and poor hygienic behavior can dramatically increase R₀. Likewise, reducing proximity to others by "social distancing" and being diligent about hygiene (such as vigorous hand washing several times a day) can dramatically reduce R₀. Here is the OR equation that summarizes these relationships:

$$ R_0 = \lambda p, $$

where

λ = average number of daily face-to-face human contacts, and

p = "transmission probability" = conditional probability that an infectious person will pass on the infection during a random face-to-face contact with a susceptible person (Larson and Nigmatulina 2010).

We can control λ by reducing our number of daily face-to-face human contacts. And we can reduce p by washing our hands; by not touching our mouth, nose, and eyes; by not shaking hands with people (perhaps substituting the "elbow bump"!); and more. In a typical flu season, R₀ is usually in the range of 1.2–1.8. A mere 30% reduction in each of λ and p brings R₀ down below 1.0! If everyone did that, we could, in theory, stop the spread of the flu by NPI behavior alone. Even if we cannot accomplish that goal, we can dramatically reduce R₀, thereby reducing the chances that we and our loved ones will become infected. This is the best we can do prior to the arrival of the vaccine. Additional NPI strategies are discussed in two recently published papers (Finkelstein et al. 2009, 2011b).
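As a worked example (the numbers are illustrative, chosen at the top of the typical range): suppose λ = 12 contacts per day and p = 0.15, so that R₀ = 12 × 0.15 = 1.8. Reducing each of λ and p by 30% gives

$$ R_0' = (0.7\lambda)(0.7p) = 0.49\,\lambda p = 0.49 \times 1.8 \approx 0.88 < 1, $$

which, in principle, is enough to halt the epidemic through behavior alone.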

2.6 Using Operations Research Approaches to Find a Better Way to Allocate Vaccine

We now use OR approaches to examine what went wrong with vaccine allocation and distribution during the 2009 H1N1 pandemic. We fit a model to the observed disease occurrence, generate a companion model assuming no vaccine, and then estimate the number of averted cases of illness. This information then informs our decisions about deploying available vaccine in the future.

To estimate the number of infections averted by various vaccine programs, we first use available data to estimate the epidemic curve during the period in the fall of 2009 when H1N1 was most prevalent. Once we obtain an estimated epidemic curve, we use it, in conjunction with reported vaccine administration data, to calibrate a mathematical model based on difference equations. We use a discrete-time version of the standard Kermack–McKendrick model to estimate infection spread within each state (McKendrick and Kermack 1927). In the model calibration process, we estimate the relevant parameters, such as R₀, within each state. We estimate R₀ for each state individually, since different states have different demographic, geographic, and cultural attributes. Moreover, states experienced the H1N1 outbreak at different times and implemented their vaccination programs in different ways (Hopkins 2009), so the extent of infection varied markedly. We then estimate a different, non-observable flu wave curve for the scenario in which no vaccine is available. How do we do that? We use the same model, as calibrated to the reported data, but with the vaccine component removed. That is, we obtain an estimate of what would have happened if no vaccine had been delivered. This multi-step process provides a data-informed, model-supported basis for estimating the positive effects, if any, of the vaccine as administered in each of the states.

We refer the reader to Larson and Teytelman (2011) for the detailed discrete-time equations.
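For readers who want a concrete feel for the kind of model being calibrated, here is a minimal sketch of a discrete-time SIR-style simulation with a vaccination term (Python). The single-compartment structure, parameter values, and vaccination schedule are our illustrative assumptions, not the equations of Larson and Teytelman (2011):

```python
# Minimal sketch: discrete-time SIR-style dynamics with weekly vaccination,
# in the spirit of the Kermack-McKendrick model described above.
# Structure, parameters, and schedule are illustrative assumptions.

def simulate(n: float, r0: float, weekly_vaccinations: list[float],
             weeks: int, i0: float = 100.0) -> list[float]:
    """Return estimated new infections per week; vaccination removes susceptibles."""
    recovery = 1.0                     # fraction of infectious recovering weekly
    beta = r0 * recovery               # weekly transmission parameter
    s, i = n - i0, i0
    new_cases = []
    for t in range(weeks):
        new = min(beta * i * s / n, s)            # new infections this week
        scheduled = weekly_vaccinations[t] if t < len(weekly_vaccinations) else 0.0
        vax = min(scheduled, s - new)             # only susceptibles are protected
        s -= new + vax
        i += new - recovery * i
        new_cases.append(new)
    return new_cases

# Example: a state of 4 million, R0 = 1.4, vaccinating 50,000 people per week
# beginning in week 10.
curve = simulate(4e6, 1.4, [0.0] * 10 + [50_000.0] * 30, weeks=40)
print(f"peak week: {curve.index(max(curve))}")
```

Calibration then amounts to choosing R₀ (and related parameters) so that the model's weekly curve best matches the observed one; removing the vaccination schedule from the fitted model yields the no-vaccine counterfactual.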

2.7 Results

Consider Oklahoma as an illustrative example of the modeling process. Figure 2.3 contains (1) the epidemic curve for the US state of Oklahoma and (2) the time-sequenced vaccine administration data reported by the state.

Fig. 2.3 Estimated Oklahoma epidemic curve compared to vaccines as administered in the state

In Fig. 2.4 we again show the empirical epidemic curve, together with three model-generated epidemic curves:

Fig. 2.4 The estimated epidemic curve along with the model-generated curves with and without vaccines

  • The curve generated using the Oklahoma-reported vaccine administration data, fitted to correspond best to the empirical epidemic curve.

  • The curve generated in the hypothetical case where vaccines were not administered at all.

  • The curve generated in the hypothetical case where vaccines were administered 2 weeks earlier than had actually occurred.

The total number of infections in Oklahoma is calculated by finding the area under an epidemic curve. The estimated effect of the vaccines administered in Oklahoma can therefore be determined by calculating the area between the "actual" model-generated curve and the "no-vaccine" model-generated curve (Fig. 2.5).

Fig. 2.5 A closer look at Fig. 2.4. The shaded region represents the difference between the estimated number of H1N1 infections that would have occurred without the vaccine intervention and the estimated number that actually occurred with the vaccine
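In discrete (weekly) time, the area calculation reduces to a sum. A minimal sketch, with placeholder curves standing in for the model output:

```python
# Minimal sketch: infections averted = area between the no-vaccine curve and
# the with-vaccine curve. In discrete (weekly) time the area is just a sum.
# The two curves below are placeholders standing in for model output.

with_vaccine = [120, 400, 900, 1500, 1700, 1300, 800, 400, 150]  # weekly cases
no_vaccine   = [120, 410, 980, 1800, 2300, 1900, 1200, 600, 250]

averted = sum(nv - wv for nv, wv in zip(no_vaccine, with_vaccine))
print(f"estimated infections averted: {averted:,} "
      f"({averted / sum(no_vaccine):.0%} of the no-vaccine total)")
```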

We analyzed 11 states in the same manner and inferred the total number of infections prevented as a result of their respective immunization programs. Officials in Illinois, Indiana, Massachusetts, Mississippi, Montana, New Jersey, New York, North Dakota, Oklahoma, South Carolina, and Virginia graciously provided us with precise data on vaccines as they were dispensed throughout the outbreak. In Table 2.1 we display two cases for each state:

Table 2.1 Summary of best-fitted values for R₀ and model-determined effects of vaccines as they were distributed
  1. The optimistic case, in which all vaccines are effective immediately and are 100% effective.

  2. The pessimistic case, in which vaccines take effect 2 weeks after administration and are effective for only 75% of the individuals receiving them.

The actual effect of the vaccine should lie within the range specified by these two cases; the sketch below shows how each case can be expressed as a transformation of the administration schedule.
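In model terms, the two cases amount to simple pre-processing of the weekly vaccine administration schedule before it enters the simulation. A minimal sketch (Python; the schedule itself is illustrative):

```python
# Minimal sketch: the optimistic and pessimistic cases as transformations of
# the weekly vaccine administration schedule (delay and efficacy values are
# taken from the two cases described above; the schedule is illustrative).

def effective_schedule(schedule: list[float], delay_weeks: int,
                       efficacy: float) -> list[float]:
    """Shift doses by the immunity delay and scale by vaccine efficacy."""
    return [0.0] * delay_weeks + [efficacy * doses for doses in schedule]

schedule = [0.0] * 10 + [50_000.0] * 30            # doses administered per week
optimistic  = effective_schedule(schedule, delay_weeks=0, efficacy=1.00)
pessimistic = effective_schedule(schedule, delay_weeks=2, efficacy=0.75)
```

Running the simulation once with each transformed schedule produces the two bounding estimates between which the true effect should lie.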

2.8 Discussion

In examining the estimated number of infections averted, we can identify two major contributing factors. The first is the total number of vaccine doses administered to the general population. The second is the timing of vaccine administration with respect to the peak of the infections. Once the H1N1 virus had been identified in the spring of 2009 as potentially causing a devastating pandemic, the CDC worked to develop and distribute H1N1 vaccines. These vaccines were sent out to the individual states at the same time, and the first doses were administered on October 5, 2009. The vaccines, however, had varying effects because the peak of the outbreak occurred at markedly different times in different states.

The peak of infection usually occurs when "herd immunity" is reached, i.e., when each infectious person infects, on average, just one other person. At this point, R₀ no longer applies, as many people have recovered from the disease and are now immune to further infection. The "real-time" reproduction number at the time of herd immunity, written R(t_HI) with t_HI the time of herd immunity, is equal to 1.0. At the time of herd immunity, the number of infections in the next generation is approximately the same as in the previous one; we say "approximately" due to statistical fluctuations in the actual number of susceptible people whom any one newly infectious person will infect. Soon afterwards, infectious people no longer replace themselves with newly infected individuals, and the number of infected and infectious people in each generation decreases. Earlier administration of vaccines decreases the number of people who still "need to be infected" for the population to reach herd immunity, and it decreases the height of the peak. Late administration of the vaccine has almost no effect on the dynamics of the outbreak and little benefit to society beyond immunizing the individuals who received the vaccine. Such late immunizations may, however, be important if the flu returns later in a new wave.
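This relationship can be written compactly in the standard SIR framework (a textbook identity, not specific to our model): the effective reproduction number at time t scales with the susceptible fraction,

$$ R(t) = R_0\,\frac{S(t)}{N}, \qquad R(t_{HI}) = 1 \;\Longleftrightarrow\; \frac{S(t_{HI})}{N} = \frac{1}{R_0}. $$

With R₀ = 1.4, for example, the peak arrives once 1 − 1/1.4 ≈ 29% of the population is immune, whether through prior infection or through vaccination; every early vaccination brings that moment closer and lowers the peak.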

Consider again the southeastern states of the USA, the first region to report infection peaks. As early victims, they received vaccines after the worst of the infection had already passed. Louisiana, Indiana, and South Carolina did not start administering vaccines until after the peak of the outbreak, and these states were the least successful in averting infections. On the other hand, Massachusetts and Virginia started administering their vaccines 5 and 3 weeks, respectively, before their respective peaks, and these two states enjoyed particularly good results from their vaccination programs. In addition to having 5 weeks of vaccinations prior to its H1N1 peak, Massachusetts vaccinated 29% of its population, the most of any state in our sample. As a result, as much as 7–14% of its population may have been spared infection and possible complications from influenza. While Massachusetts and Virginia had effective experiences with their vaccinations, most states did not: on average for our sample, vaccines were delivered just before the peak of the states' outbreaks.

To quantify the effect of timing in averting infection, consider Indiana, one of the states that vaccinated almost 20% of its population. In the hypothetical case where the same number of vaccine doses is delivered just 2 weeks earlier, more than twice as many infections are averted as with the vaccines as actually administered. Timing is everything!

Taking a more granular approach, we considered the marginal benefit of administering just one vaccination at a given time. We mapped the total projected number of infections that could be averted if a single vaccination were administered to a random susceptible person at different times during the outbreak. That is, we calculated the total number of infections that would occur in Indiana if exactly one vaccine dose were administered at different points in time, and compared that number to the total number of infections that would occur if no vaccine were administered at all. The differences are presented in Fig. 2.6. As expected, the benefit of an administered vaccination decreases monotonically with time. A striking feature of Fig. 2.6 is that one vaccination given to a susceptible person well before the flu wave starts averts almost two infections in the population, even with a low value of R₀ (1.15), and even though that person would have had a substantial chance of never becoming infected in the absence of vaccination. Clearly, vaccines administered well before the peak carry the added benefit of diluting the susceptible population with immune people and are particularly useful in mitigating the spread of infection.

Fig. 2.6 The number of infections averted by administering exactly one vaccination to a susceptible person at different points throughout the outbreak

Another insightful feature of Fig. 2.6 is the slope of the marginal-benefit curve, which captures how time-sensitive vaccination is. While starting vaccine administration in Indiana in July would have been most effective, the effect of those vaccines would not change significantly until the beginning of September. That is, had vaccine been available in July, Indiana could have waited until September to receive its share with minimal losses. Similarly, vaccines received after December would have the same (minimal) effect whether administered in December or February. The effectiveness of vaccination, however, is extremely time-sensitive from the end of September to mid-November, when each week of delay results in a significant loss of effectiveness. Vaccines that become available during this critical period need to be administered as soon as possible. These results encourage us to recommend a more detailed cost–benefit analysis of trying to get some vaccine, even in much smaller quantities, to the states at the beginning of this "critical period," when the population is particularly sensitive to the timing of vaccination. A small amount of vaccine delivered early should have a more significant effect on the total number of infections averted than a larger batch delivered just a few weeks later.
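A hedged sketch of this timing experiment (Python): the simulator repeats the illustrative discrete-time model from Sect. 2.6 so the snippet stands alone, and every parameter value, including R₀ = 1.15 as in the Indiana discussion, is an assumption chosen for illustration:

```python
# Minimal sketch: total infections as a function of the start week of a
# fixed-rate vaccination campaign, illustrating the "critical period".
# The simulator and every parameter value are illustrative assumptions.

def total_infections(n: float, r0: float, start_week: int,
                     weekly_vax: float, weeks: int = 150) -> float:
    recovery = 1.0                     # one infectious week per generation
    beta = r0 * recovery
    s, i, total = n - 100.0, 100.0, 0.0
    for t in range(weeks):
        new = min(beta * i * s / n, s)             # new infections this week
        vax = min(weekly_vax if t >= start_week else 0.0, s - new)
        s -= new + vax
        i += new - recovery * i
        total += new
    return total

n, r0 = 4e6, 1.15                      # low R0, as in the Indiana discussion
baseline = total_infections(n, r0, start_week=0, weekly_vax=0.0)
for start in (10, 30, 40, 45, 50, 60):
    averted = baseline - total_infections(n, r0, start, weekly_vax=40_000.0)
    print(f"campaign starting week {start:2d}: ~{averted:,.0f} infections averted")
# Expected pattern: early starts avert similarly large numbers; during the
# critical period each week of delay is costly; late starts avert little.
```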

The timeliness of the vaccines also appears to be closely related to the total amount of vaccine accepted by the population. Largely dictated by political considerations, the CDC distributes vaccine proportionally to the population of each region. This allocation process does not minimize the total number of infections incurred nationally. Once the regional peak of the outbreak had passed and H1N1 had been determined to be less dangerous than originally feared, the populations of the "early victim" states were less likely to obtain a flu vaccination. Their decisions could relate to the time spent getting the flu shot and the perceived risks of possible side effects. States varied considerably in the amount of vaccine actually used; Mississippi used less than 40% of its allocated vaccine, most likely due to "flu fatigue." While the media are particularly helpful in warning the public of an ongoing pandemic and in encouraging diligent hygienic behavior and social distancing through school closures and cancellation of public events, they can also give the impression that the outbreak is over or has been blown out of proportion, thereby creating flu fatigue.

2.9 Thought Experiment

As shown in Fig. 2.7, in the first few weeks of vaccine distribution, when demand for vaccine clearly exceeded supply, the CDC allocated vaccine to states in proportion to their populations (Centers for Disease Control and Prevention 2010). Particularly in early October, this simple distribution scheme ensured that all states received amounts sufficient to immunize the same proportion of their populations. Come November, the states that saw little demand started placing fewer orders for vaccine, while those with later epidemics, like Massachusetts and Virginia, were still experiencing high demand and were shipped larger quantities, confirming the intuition from Fig. 2.6.

Fig. 2.7 CDC distribution of vaccines, initially approximately proportional to state populations

Thus we see that vaccines administered in states that had already experienced the peak of the infection when vaccine started arriving were much less effective than those administered in states that had not yet peaked. States past the peak saw less demand for vaccine and thus used only a fraction of their allocations. Consider a side-by-side analysis of Mississippi and North Dakota, as shown in Table 2.2.

Table 2.2 Comparison of vaccination programs in North Dakota and Mississippi

It is clear that vaccines administered in North Dakota were significantly more effective than those in Mississippi. In fact, the Mississippi vaccines had almost no effect, because the infection was barely spreading by the time vaccine became available. Consistent with this, North Dakota was experiencing more demand for vaccine at the beginning of its program. Motivated by this comparison, we believe there is a need for more effective procedures for allocating vaccines to US states.

Allocating all of Mississippi's vaccine to North Dakota would be not only unethical but also politically infeasible. Instead, as a thought experiment, suppose that just 20% of Mississippi's unused vaccine had been transferred to North Dakota during the first 4 weeks of vaccine distribution, and that, with this addition, 60% of the new vaccines were actually administered. This additional vaccine would have decreased the total number of infections in North Dakota by 5%: a significant improvement for a relatively small cost. An adaptive decision like this can be made during the allocation process, since even approximate predictions can be formed about how much vaccine a state will actually demand and how effective the extra vaccine would be in reducing infections. For example, using data collected from our 11 states, we can roughly estimate that a state that experiences its peak of infection 6 weeks before receiving the first shipment of vaccine can be expected to vaccinate no more than 4% of its population in the first 4 weeks of its vaccination program. In the first 4 weeks of the 2009 H1N1 pandemic, the CDC allocated to Mississippi enough vaccine to cover 7% of its population. With accurate data, some portion of that allocation could have been redirected to states more likely to use and benefit from the vaccine.
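The adaptive rule behind this thought experiment can be sketched in a few lines (Python). All numbers are illustrative, including the hypothetical population figure; the 7%/4%/20% values echo the example above:

```python
# Minimal sketch of the adaptive reallocation idea in the thought experiment.
# All numbers are illustrative; the population figure is hypothetical.

def redirectable_doses(allocated_coverage: float, predicted_uptake: float,
                       population: int, transfer_fraction: float) -> int:
    """Doses a past-peak state could release without curbing its own demand."""
    surplus = max(allocated_coverage - predicted_uptake, 0.0) * population
    return int(transfer_fraction * surplus)

# A past-peak state allocated 7% coverage but predicted to use only 4%:
doses = redirectable_doses(allocated_coverage=0.07, predicted_uptake=0.04,
                           population=3_000_000, transfer_fraction=0.20)
print(f"doses available for redirection: {doses:,}")   # 18,000
```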

Flu-specific takeaways:

  • Early vaccines are significantly more effective than late vaccines.

  • Our data show that vaccines administered early in an epidemic tend to have a higher acceptance rate in the population.

  • There exists a time-sensitive "critical period" within a few weeks of the peak of an epidemic, during which the effectiveness of vaccination is extremely time-dependent. This is when it is important to vaccinate as much of the population as possible.

  • Practitioners who take these points into account and combine them with OR resource allocation methods should see significant improvements in the benefit derived from available vaccine.

2.10 Conclusion

The use of OR methods to address homeland security issues, such as an influenza pandemic, forces a logical and systematic examination of the whole problem, rather than just certain parts of it. Commonly, the result is to uncover relationships that are non-intuitive, or even counter-intuitive. An improved understanding of the whole problem can benefit planning at strategic, tactical, and operational levels.

Let us again consider the allocation of vaccine by the CDC to the individual states once a pandemic is already underway. If and when a southern state, for example, experiences a wave of cases, a natural tendency would be to make a tactical decision to ship, via express courier, the state's proportional share of the vaccine to address what appears to be an imminent, growing problem. But the total system situation is far more complicated than a one-state-at-a-time decision analysis. Consider Scenario 1: the southern states are just now experiencing a rapidly rising flu wave, and the northern states do not yet have the flu. In this situation, the southern states should receive priority and be shipped vaccine above their proportional population share, because the northern states can wait without significant penalty. Now consider Scenario 2: the vaccines are delayed, the southern states have all passed their peaks, herd immunity has been achieved, and their respective flu waves have started to decline. By the time the vaccine reaches the southern state in question, many who were sick will have recovered and become immune, and the risk to the remaining healthy persons will have become much smaller. A better decision would be to increase the allocation to the more vulnerable northern regions, which had so far seen few cases and must have many more susceptible individuals.

The results of one OR analysis might suggest the need for others. For example, it is highly likely that a time interval will exist after cases of illness are recognized but before vaccine can be produced and distributed. The value of diligent hygiene and social distancing, especially under these circumstances, is already widely recognized. In planning for future outbreaks, it would be useful to quantify the benefits of devoting resources to systematic campaigns to modify certain patterns of personal behavior, in relation to the campaigns' costs.

A great benefit of operations research is that it forces a logical and systematic consideration of all aspects of a problem. Pandemics and other LPHC events gravely threaten lives and the security of our homeland. We are hopeful that the approaches we have described offer the prospect of mitigating the future impact of these kinds of adverse events.