Inflammatory Diseases: A Pox on All Our Houses

We are currently faced with a barrage of complex diseases that often coexist in the same patient [1]. In the developing world, the modern disease landscape is a constellation of acute and chronic infections, traumatic injuries, and nonhealing wounds; diseases that are made even more complex by the impact of malnutrition, war, and displacement [2, 3]. In the industrialized world, we face some of the same challenges with regard to infections, trauma, and wounds, but these diseases are complicated by lifestyles of excess and the attendant metabolic irregularities (diabetes and obesity) [4]. In addition, the generally longer lifespans now being experienced around the world have paradoxically resulted in the rise of aging-related diseases, such as cancer and various neurodegenerative diseases [5]. Given the degree and extent of medical care in the first world, it is virtually guaranteed that a common pathway for patients with this range of diseases is to spend at least some time in an intensive care unit, with critical illness manifesting as multicompartment pathophysiological derangements and organ failure [6]. Critical illness can result directly from trauma, hemorrhagic shock, and bacterial infection (sepsis). On its own, trauma/hemorrhage is a leading cause of death worldwide, often giving rise to inflammation-related late complications that include sepsis and multiple organ dysfunction syndrome (MODS) [7,8,9]. Sepsis alone is responsible for more than a million annual hospital admissions, more than 215,000 deaths in the United States per year, and an annual healthcare cost of over $20 billion [6, 10, 11], while traumatic injury remains the leading cause of mortality and morbidity for individuals under 55 years of age and accounts for 30% of all life-years lost, with over 190,000 lives lost annually in the United States [12, 13]. There is currently not a single approved pharmacological therapy, other than antibiotics, that targets the pathophysiology of critical illness [14].

It is now clear that the acute inflammatory response, with its manifold manifestations at the molecular, cellular, tissue, organ, and whole-organism levels, drives outcomes in all the aforementioned diseases and is central to the pathophysiology of critical illness. Properly regulated inflammation allows for timely recognition and effective reaction to threats to an individual, be it tissue damage resulting from injury or infection from pathogenic microbes. However, when the insult is too great, or repetitive in nature (as seen in chronic inflammatory and autoimmune diseases), inflammation can become disordered and result in ongoing tissue damage and organ dysfunction [15]. We assert that critical illness is the most dramatic manifestation of disordered, dysregulated, and mis-compartmentalized inflammation [6, 16,17,18,19]. Thus, the presence of a robust, evolutionarily conserved network of inflammation [20,21,22], able to respond to heterogeneous insults and tuned for effective containment, yet paradoxically capable of driving and propagating host tissue damage, results in disease states that are fundamentally resistant to reductionist characterization. This property of critical illness is the basis for the lack of effective mechanism-based pharmacologic therapies, and accounts for the fact that even life-saving/perpetuating measures, such as mechanical ventilation or hemodialysis, may have detrimental effects through the induction of additional inflammation [23,24,25].

Insufficiencies in the Current Process of Drug/Device Design and Clinical Trial Execution

For a therapeutic drug or device to reach its ultimate end user—the patient—a multistep process must be carried out, culminating in approval by regulatory agencies. This process generally consists of years or decades of basic research to identify candidate therapeutic targets, followed by sequential studies to demonstrate safety and some acceptable degree of efficacy (e.g., the dosage or timing that results in the greatest therapeutic benefit with the least harm) in both experimental animals and humans. This process typically concludes with a pivotal (Phase III) clinical trial, which is randomized (i.e., subjects who meet predefined inclusion and exclusion criteria are recruited into either a placebo or treatment arm in a random fashion) and double-blinded (i.e., neither the clinician nor the patient knows a priori the study arm in which the patient is enrolled) [21, 26,27,28,29]. Enrollment into this Phase III trial is usually not individualized in any fashion beyond the set inclusion and exclusion criteria (and, of course, the withdrawal of a patient from the study if certain predefined adverse events occur). This process is considered the sine qua non of the scientific method, and it has indeed resulted in numerous drugs and devices available to physicians to treat diseases, though there has been a recent focus on novel, “adaptive” clinical trial designs [30]. However, it is important to recognize the difference between other domains in which adaptive trials have been used and that of critical illness: other disease processes, such as cancer and cardiovascular disease, have known and proven therapies, for which adaptive trials serve to aid in subgroup selection, dose optimization, and assessment of multimodal treatment efficacy. Therefore, while adaptive trial design can be of potential use in critical illness, the seeming goal of finding some subgroup in which an already failed compound can “possibly” be effective is asking the method to answer a question it was not designed to answer [31].

There is a fundamental gap between preclinical studies and clinical trials. To begin with, the disease being targeted is usually thought of in a reductionist, static way, as a series of discrete “stages” or “syndromes,” rather than as a dynamic, stochastic progression of biological events driven by initial conditions and genetically determined parameters that, upon reaching certain multidimensional thresholds, leads to multiple possible outcomes. This discrepancy leads to the design of drugs that are targeted at ostensibly diagnostic symptoms rather than at the underlying causes of the disease as a whole. Next, a highly linear (cause–effect) view of the biological pathways is presumed to underlie the various discrete symptoms, leading to the generation of drugs absent any consideration (at this initial stage of drug development) of their impact on other pathways, cells, tissues, and organs. Finally, the statistical approaches commonly used to structure and analyze clinical trials typically make a number of questionable assumptions: for example, that variables are normally distributed, that a marker of patient state is equivalent to a mechanistic driver of that state, and that such a marker of patient state will be altered in a statistically significant fashion as a function of therapeutic efficacy [19, 32]. Below, we discuss how these general features of the healthcare delivery process manifest in therapies for acute inflammatory diseases, with a focus on critical illness.

Inflammation in Critical Illness: Rational Systems Approaches for a Complex Therapeutic Target

The flaws in, and fragmented nature of, the current healthcare delivery paradigm have led to recognition of the need to address the complex interplay between inflammation and physiology in critical illness, an interplay that manifests in divergent group outcomes and heterogeneous individual trajectories [6, 19, 33, 34]. Initially, there was hope for some improvement in this situation through the adoption of “omics” methodologies, with their theoretical capability of interrogating the complete responses of cells and tissues in individuals (and thereby both improving the mechanistic understanding of critical illness in general and enhancing diagnostic and treatment capacities in individuals) [35,36,37,38,39,40,41,42]. While this approach has resulted in key contributions to the understanding of molecular pathways induced by injury and infection in humans [43,44,45,46,47], as these techniques have become more commonplace there has been a growing recognition that more data do not necessarily lead to better—or any—explanations for the phenomena from which those data are derived. Thus, these “omics” methods have not proven to be the panacea for the design of drugs, clinical trials, and diagnostics that they were projected to become. Consequently, despite extensive interest in the use of data-driven modeling (colloquially referred to as “artificial intelligence”) in clinical trial design [48], from a practical standpoint there are multiple challenges to the implementation of these purely data-driven, descriptive approaches in the healthcare delivery chain [9, 16, 17].

In contrast to data-driven, descriptive modeling, mechanistic computational simulations depict the behavior of biological interactions (e.g., among cells, their products, and the outcomes that result under a given set of conditions) dynamically. Such dynamic computational models and simulations may be used as “knowledge stores” that can be queried as to the emergent behavior of the sum total of known or hypothesized reductionist biological interactions [49,50,51,52,53]; to suggest novel interactions not yet described by experimental data [54]; and to address controversies based on diverse experimental/clinical conditions or other experimental differences among groups studying any given complex biological system [55]. Unlike data-oriented, descriptive models, dynamic mechanistic models offer the possibility of prediction outside of and beyond the data on which they were developed [9, 16, 17, 56, 57]. As systems and computational biology methods have matured and begun to take on the characteristics, features, and operating principles of engineering, we have extended the classical systems biology approach to that of Translational (i.e., clinically applied) Systems Biology [18, 19, 29, 56, 58, 59].

Indeed, the computational modeling toolset now available for integration into the healthcare delivery pipeline is rich and suited to diverse tasks. Translational dynamic mechanistic modeling used to date in acute inflammation and other phenomena related to critical illness can be divided into two general types: continuous methods, generally employing differential equations (either ordinary or partial) and particularly useful in settings involving data that reflect mean-field approximations of the behavior of a biological system, for example, the concentrations of molecules in a biofluid [57, 60,61,62,63,64,65,66,67,68]; and discrete methods, most notably agent-based modeling, for settings in which spatial pattern/image data are involved, or for prototyping initial computational models of a complex system [54, 69,70,71,72,73]. These various methods have their respective strengths and weaknesses [29, 58, 74,75,76], and all have been used in the setting of critical illness [20, 21, 29, 53, 56, 58, 74].
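
To make the continuous flavor of model concrete, the following minimal Python sketch simulates a three-variable system (pathogen, inflammation, tissue damage) loosely patterned on published reduced models of acute inflammation. Every equation, variable name, and parameter value here is an illustrative assumption, not any of the calibrated models cited above.

```python
# Minimal sketch of a "continuous" (ODE) model of acute inflammation.
# All equations and parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def acute_inflammation(t, y, k_grow, k_kill, k_act, mu_n, k_dam, mu_d):
    """Toy mean-field model: pathogen P drives inflammation N, which
    clears P but also inflicts tissue damage D; D feeds back to
    sustain N (the self-propagating loop discussed in the text)."""
    P, N, D = y
    dP = k_grow * P * (1.0 - P) - k_kill * N * P    # pathogen growth vs. clearance
    dN = k_act * (P + D) * (1.0 - N) - mu_n * N     # activation vs. resolution
    dD = k_dam * N**4 / (0.5**4 + N**4) - mu_d * D  # nonlinear collateral damage
    return [dP, dN, dD]

params = (1.0, 2.0, 1.5, 0.5, 0.4, 0.1)  # assumed values, not calibrated
sol = solve_ivp(acute_inflammation, (0.0, 50.0), [0.1, 0.0, 0.0],
                args=params, dense_output=True)

t = np.linspace(0.0, 50.0, 200)
P, N, D = sol.sol(t)
print(f"peak inflammation = {N.max():.3f}, final damage = {D[-1]:.3f}")
```

A discrete (agent-based) treatment of the same biology would instead represent individual cells on a grid with stochastic rules, which is why that approach suits spatial pattern data and rapid prototyping.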

Dynamic computational modeling has improved our knowledge of the basic biology of inflammation and, directly or indirectly, led to translational applications in critical illness [6, 9, 16,17,18,19,20,21, 29, 56, 76, 77]. One key translational application, namely the in silico clinical trial, was pioneered in the arena of critical illness [57, 61, 71, 78]. The potential use of mechanistic computational modeling in the diagnostic arena is evidenced by studies showing the potential to predict the inflammatory and pathophysiologic outcomes of individual human subjects [57, 79] and large, outbred animals [80]. Thus, it is now theoretically possible to predict and impact the outcomes of individual critically ill patients using patient-specific computational simulations, likely informed by genetic data and assessment of circulating inflammation biomarkers [53, 56, 57].

Given the multiscale complexity of the disease processes, we suggest that it is imperative not merely to identify candidate molecules, but also to determine whether the higher-order, system-level consequences of attempting to intervene in a particular pathway will lead to an ultimately beneficial or detrimental outcome [19, 81, 82]. We have pointed out the need for a computational approach to dynamic knowledge representation as a means of hypothesis instantiation and testing [19, 53, 83]. In the context of translating molecular-level mechanistic hypotheses up through the various steps of the healthcare delivery continuum, this process is envisioned as allowing one to determine whether manipulating a given biological interaction at a given scale of organization (typically the molecular/cellular scale) is likely to behave as expected at another, typically higher, scale (e.g., tissue, organ, or the entire organism) [19]. In this way, one may identify effects that would otherwise be considered “unanticipated.” Dynamic knowledge representation may be augmented with insights derived from high-throughput/high-content data [53], along with appropriate data analysis and data-driven modeling [22, 56, 59], in order to generate and parameterize mechanistic computational models of disease, patient [56, 57], or population [21, 29, 57, 73].

Dynamic Knowledge Representation in the Context of In Silico Clinical Trials

A key example of the in silico clinical trial as a form of dynamic knowledge representation can be seen in the simulated clinical trials of existing and hypothetical antimediator interventions for sepsis [61, 71, 78], trauma [57], and wound healing [73, 84]. Importantly, these simulated trials were based on the knowledge available at the time the actual clinical trials were performed. Highlighting the power of computational modeling as a high-throughput test bed for novel therapies, the early in silico clinical trials simulated a series of existing [61, 71] and hypothetical [71] therapies targeting inflammatory mediators. In one case, a simulation of neutralizing antibodies to proinflammatory cytokines was implemented in an agent-based model (ABM) [70, 71]. This dynamic computational model reproduced the general disease dynamics of sepsis and multiple organ failure and was used to generate a simulated population corresponding to the control group in a sepsis clinical trial. A similar approach was used in a contemporaneous study focusing on replicating the failed anti-tumor necrosis factor-α (TNF-α) clinical trials in sepsis, demonstrating the presence of patient subgroups that were harmed by this drug as well as others that were helped (culminating in no net benefit); this study also suggested means by which biomarkers could identify these subgroups [61].

Importantly, these clinical trials were simulated in such a way as to assume that the proposed interventions behaved mechanistically exactly as had been hypothesized. Therefore, these in silico trials are a form of verification of the underlying hypotheses—either explicit or implicit—that formed the basis for such trials. The way in which these computational simulations were structured avoided the need to invoke factors such as heterogeneity of adjunctive therapy, different pharmacodynamics/kinetics, faulty randomization, or other potentially confounding practical issues commonly used to explain negative outcomes of clinical trials. In line with actual outcomes, and not surprisingly for those studies that were purely hypothetical, none of the simulated interventions demonstrated a beneficial effect [61, 71]. The conclusion drawn from these findings is that, most likely, the underlying conceptual models that informed the development of these therapeutic strategies targeted at blocking individual mediators were flawed, precisely because the hypotheses underlying their selection as therapeutic modalities assumed a high likelihood of universal success. That is not to say that these mediator-directed therapies were bound to fail in every patient; the flaw lay in the assumption of universal therapeutic efficacy. As noted above, one of the studies, an in silico trial of anti-TNF-α therapy using an equation-based model of systemic inflammation, suggested that this type of therapy would work on defined subsets of sepsis patients [61]. Thus, we suggest that the flaws in the original hypotheses and assumptions underlying these failed clinical trials would have been exposed had computational dynamic knowledge representation been available and used early and throughout the process of drug development.

As touched upon above, in silico clinical trials offer an unprecedented possibility to transcend the long list of practical limitations—including relatively small cohort sizes, limited availability of measurements, finite study durations, and the presence of confounding factors—that affect real-world clinical trials. However, the interdisciplinary team of clinicians, biologists, and computational modelers that carries out these in silico clinical trials must ensure that the base models and the implementation of simulated populations faithfully represent both the biology and the clinical setting.

In addition to providing a check of the plausibility of the underlying scientific basis of a proposed intervention, in silico trials can augment the current process of performing clinical trials in three significant ways [85]:

  1. Enhancement of study group substratification: The study by Clermont et al. [61] demonstrates the use of an in silico trial to enhance subgroup stratification and candidate patient identification. The finer-grained representation of each simulated patient, in terms of cytokine response trajectories with and without a proposed intervention, allows the identification of potential biomarker-defined inclusion criteria for a clinical trial. In essence, this allows each simulated patient to act as his or her own control with respect to the proposed intervention. This type of analysis is functionally impossible to obtain in clinical trial cohorts that reflect the range of response that would arise in the general population. Furthermore, social or ethical factors may limit the representation of specific groups (e.g., African-Americans, known to be generally under-represented in many clinical trials, or women of child-bearing age, often excluded for potential teratogenic risk). As a result, trials are very likely to miss important (positive or negative) effects in subgroups that are sampled inadequately. This mis-sampling can lead to the later discovery of adverse events following a promising clinical trial, or to the failure of truly useful treatments in clinical trials that were not properly targeted to the patients who would most benefit from them. By simulating massive virtual cohorts sampled from the space of potential patients, in silico clinical trials can achieve much more thorough sampling of possible patients. The acquisition and analysis of these simulation-generated data can in turn reveal clinical patient subgroups that merit particular attention, and lead to better-informed patient selection criteria and more effective clinical trials.

  2. Augmentation and optimization of protocol design: Protocols for modern interventions depend on multiple complex and often interacting parameters (e.g., dosage levels, timing, and frequency of administration). Attempting to determine these parameters experimentally over a wide range of individuals is functionally impossible, and therefore the optimal intervention strategy for an individual patient cannot pragmatically be determined. The inability to anticipate and account for this degree of interindividual heterogeneity will doom a clinical trial to failure at the outset. In silico trials allow a more rigorous computational optimization of these parameters, both on massive populations and for individual patients, and will increase the precision with which protocols can be designed and therapeutic endpoints defined.

  3. Enhanced characterization of the control group: Clinical trials rely on control groups against which the effect of a proposed intervention is compared. However, given the vagaries of clinical practice, many control groups may actually compare poorly to the intervention group. Interindividual variability in both underlying biology and clinical practice leads to a situation in which the definition of “similarity” between control and intervention patients is often quite crude and imprecise. This situation confounds the ability to truly define the effect of the proposed intervention. In silico trials, however, offer the ideal control group: each simulated patient can be simulated with and without the intervention. Comparison of results against these “perfect” controls thus removes a source of uncertainty that is unavoidable in real trials (illustrated in the sketch following this list).

An example of the potential insights obtained from carrying out in silico trials can be seen in the aforementioned in silico trial based on an anti-TNF-α therapy [61]. These simulations recapitulated the general lack of efficacy of the intervention; however, the researchers used the power of computational modeling to evaluate what would have happened in the absence of intervention or in the setting of different doses of the drug. In essence, each simulated placebo patient was “cloned” into the multiple treatment arms as well as the placebo arm. Consequently, this in silico analysis suggested specific characteristics of the simulated patients who were helped by the intervention, were harmed by the intervention, or were unaffected by the drug, thereby suggesting the possibility of using this in silico approach for deciding on inclusion and exclusion criteria for eventual clinical trials. Thus, the key take-home lesson of this study was that a failed randomized, placebo-controlled clinical trial could possibly have succeeded had in silico modeling been used to guide its design.

Despite the tangible benefits of in silico trials in gaining insight into the potential efficacy of therapeutic interventions, and into why such proposed interventions might not be effective, only recently have methods been developed that can help determine what actually might be effective. Investigation into this challenge led to the perspective of addressing the treatment of sepsis as a control problem, in which the goal of therapy is to “steer” a patient’s disordered inflammatory state back into a state of health. ABMs have been proposed as proxy systems to aid in the development of control strategies [86], and this has led to the use of both genetic algorithms/evolutionary computing to define the scope of the task of controlling sepsis [87], and the application of model-based Deep Reinforcement Learning to train an artificial intelligence (AI) agent to control sepsis [88]. The details of this work are presented in Chap. 5 of this book.
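
As a hedged illustration of this control framing (and emphatically not a reimplementation of the cited genetic-algorithm or deep reinforcement learning studies), the Python sketch below applies a simple (1+1) evolutionary search to a piecewise-constant “dosing schedule” acting on the same style of toy model used above, seeking a schedule that steers simulated damage back toward health at minimal total dose. All dynamics, the cost function, and the search settings are assumptions.

```python
# Minimal sketch: sepsis therapy as a control problem, solved with a
# (1+1) evolutionary search over a piecewise-constant dose schedule.
import numpy as np

rng = np.random.default_rng(1)
N_WINDOWS, STEPS, DT = 10, 500, 0.1

def cost(schedule):
    """Simulate a toy inflammation model under a schedule of
    mediator-neutralizing doses (one level per time window); return
    final damage plus a penalty on total drug given."""
    P, N, D = 0.1, 0.0, 0.0
    for step in range(STEPS):
        dose = schedule[step * N_WINDOWS // STEPS]
        dP = P * (1 - P) - 2.0 * N * P
        dN = 1.5 * (P + D) * (1 - N) - 0.5 * N
        dD = 0.4 * (1 - dose) * N - 0.1 * D
        P = max(P + DT * dP, 0.0)
        N = max(N + DT * dN, 0.0)
        D = max(D + DT * dD, 0.0)
    return D + 0.05 * schedule.mean()

# Evolutionary loop: mutate the schedule, keep any improvement.
best = rng.uniform(0.0, 1.0, N_WINDOWS)
best_cost = cost(best)
for _ in range(300):
    candidate = np.clip(best + rng.normal(0.0, 0.1, N_WINDOWS), 0.0, 1.0)
    c = cost(candidate)
    if c < best_cost:
        best, best_cost = candidate, c

print(f"steered cost: {best_cost:.3f} "
      f"(untreated: {cost(np.zeros(N_WINDOWS)):.3f})")
print("dose schedule:", np.round(best, 2))
```

The published work replaces this naive hill-climbing with genetic algorithms and model-based reinforcement learning, but the framing is the same: a mechanistic model serves as the proxy environment in which control policies are learned before any patient is exposed.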

Dynamic Knowledge Representation at the Individual Level: Optimization of Diagnosis and Therapy

It may be argued that the ultimate test of dynamic knowledge representation is that of characterizing the drivers of dynamic patient state to a degree sufficient to identify and treat the individual patient [6, 56, 81, 82]. To do so, a robust, mechanistic computational model (presumably the same one used for an in silico clinical trial) must be adapted to reflect the temporal dynamics of inflammation and organ damage/dysfunction in the individual patient [57]. From a practical standpoint, model parameters that alter the patient’s dynamics (e.g., comorbidities, prior health history, and relevant genetic traits) are modified over known or presumed ranges in accordance with known biology [56, 57]. The applications of this approach are myriad. Of most direct connection to the in silico clinical trial, individual-specific models could be used to generate much larger cohorts of virtual patients, which in turn could be used to make in silico clinical trials more realistic.

As an example of this approach, we constructed a multicompartment, equation-based model, consisting of the “tissue” (in which physical injury could take place), the “lungs” (which can experience dysfunction), and the “blood” (as a surrogate for the rest of the body), in order to simulate traumatic injury and subsequent inflammation and organ dysfunction [57]. This model was calibrated initially with data on approximately 30 individual trauma patients, all survivors of moderate blunt trauma. Based on these individual trajectories of both inflammatory and physiological variables, normal and uniform distributions were created. These distributions were sampled repeatedly to create a population of 10,000 virtual trauma patients, in which each patient is defined by his/her parameter values in the mathematical model. Each patient was then subjected to simulated low, moderate, and severe trauma. These virtual populations of trauma patients exhibited realistic and partially overlapping distributions of “damage” recovery times [which we equated with intensive care unit (ICU) lengths of stay] and total “damage” (which we equated with degree of multiple organ dysfunction). These virtual patients were then queried as to the parameters driving the above distributions; this analysis found that for patients with a low Injury Severity Score (ISS), parameters related to IL-1β were the predominant drivers, while IL-6 was the main driver of outcome in patients with moderate or severe ISS. However, while real patients could be segregated based on an IL-6 single-nucleotide polymorphism into high- versus low-IL-6-producing subcohorts, and while tuning up IL-6 production in silico could turn virtual survivors into virtual nonsurvivors, the net effect in both virtual and real patients was negligible (demonstrating the difficulty of extrapolating linearly in complex diseases such as critical illness). Moreover, while only data from trauma survivors were used to calibrate the in silico trauma model, simulations of virtual populations predicted the appearance of approximately 4% nonsurvivors [57]. These predictions were in line with the actual mortality in this population [57, 89]. These results demonstrate the utility of mechanistic models with regard to predicting emergent phenomena, and suggest the possibility of determining novel basic mechanisms in trauma, of individualized outcome prediction for trauma patients, and of virtual clinical trials based on a small number of actual patients [57].
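
A minimal sketch of that virtual-population workflow appears below. Because the fitted parameter values from the actual study are not reproduced in this chapter, synthetic stand-ins are generated for the ~30 calibrated patients; distributions estimated from them are then sampled to build a 10,000-patient virtual population, each member of which is subjected to low, moderate, and severe simulated insults, with nonsurvival read out as damage that fails to resolve. All model equations, thresholds, and numbers are assumptions.

```python
# Minimal sketch: small calibrated cohort -> parameter distributions ->
# large virtual population -> emergent nonsurvivor fraction by severity.
import numpy as np

rng = np.random.default_rng(2)

# Step 1: stand-ins for ~30 patient-specific fitted parameter sets
# (the real fitted values from the cited study are not reproduced here).
fitted_k_act = rng.normal(1.5, 0.3, 30)   # inflammation activation rate
fitted_mu_d = rng.normal(0.10, 0.02, 30)  # damage resolution rate

# Step 2: estimate simple distributions from the small cohort and
# sample a 10,000-patient virtual population from them.
n = 10_000
k_act = rng.normal(fitted_k_act.mean(), fitted_k_act.std(), n)
mu_d = np.clip(rng.normal(fitted_mu_d.mean(), fitted_mu_d.std(), n),
               0.02, None)

def damage_after_insult(d0, dt=0.1, steps=1000):
    """Vectorized Euler run of a toy inflammation/damage model;
    activation occurs only above a damage threshold, giving a basin
    of health from which larger insults can escape."""
    N = np.zeros(n)
    D = np.full(n, d0)
    for _ in range(steps):
        drive = k_act * np.maximum(D - 0.25, 0.0)
        dN = drive * (1 - N) - 0.5 * N
        dD = 0.4 * N / (1 + D) - mu_d * D
        N = np.clip(N + dt * dN, 0, 1)
        D = np.clip(D + dt * dD, 0, None)
    return D

# Step 3: subject every virtual patient to three insult severities and
# read out the emergent fraction with unresolved ("lethal") damage.
for label, d0 in [("low", 0.1), ("moderate", 0.3), ("severe", 0.6)]:
    frac = (damage_after_insult(d0) > 1.5).mean()
    print(f"{label}: nonsurvivor fraction = {frac:.1%}")
```

The point of the sketch is the emergent readout: nonsurvival is never built into any individual parameter set, yet a tail of the sampled population fails to resolve its damage, mirroring the way the calibrated trauma model predicted nonsurvivors from survivor-only data.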

The aforementioned studies highlight some of the particular advantages that mechanistic models afford: virtual cohorts of any required size can be generated, and each individual patient’s disease state can be tracked at an extremely high level of resolution (limited only by the resolution of the model) for as long as required [57, 87, 90]. When information is available about the approximate distribution of these characteristics in real populations, it can be used in the generation of a virtual patient population to ensure that the composition of simulated cohorts mirrors reality.

Another application of this approach involves in silico “testing” of multiple therapeutic modalities on individuals. As an example of this application of dynamic mechanistic modeling, an ABM of vocal fold inflammation and healing was calibrated to the early levels of inflammatory mediators present in the laryngeal secretions of individual humans subjected to experimental phonotrauma, and could predict the later levels of these mediators in an individual-specific fashion [79]. Importantly, these individualized ABMs were utilized to predict the likely efficacy of a “rehabilitative” treatment, namely resonant voice exercises, both in patients who had in fact received this treatment and in patients who did not [79]. A similar process could be employed to evaluate the specific efficacy of a drug modulating an aspect of inflammation or healing [57, 73], thereby forming the basis of a much more realistic in silico clinical trial.
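
The individual-calibration step can be sketched as follows. A simple closed-form surge-and-resolution curve stands in for the patient-specific mediator trajectory (the cited study used a full ABM rather than a closed-form curve), its parameters are fitted to the early time points only, and the fitted model is then used to predict the withheld later time points. The functional form, sampling times, and synthetic “measurements” are all assumptions.

```python
# Minimal sketch: calibrate a per-patient mediator model on early
# measurements, then predict the later (withheld) time points.
import numpy as np
from scipy.optimize import curve_fit

def mediator(t, amplitude, decay):
    """Assumed surge-and-resolve shape for one inflammatory mediator."""
    return amplitude * t * np.exp(-decay * t)

# Synthetic individual: noisy samples from a "true" trajectory.
rng = np.random.default_rng(3)
t_all = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 24.0])  # hours post-insult
true = mediator(t_all, 12.0, 0.25)
observed = true * rng.normal(1.0, 0.1, t_all.size)

# Calibrate on the early time points only (first four samples)...
popt, _ = curve_fit(mediator, t_all[:4], observed[:4], p0=(1.0, 0.1))

# ...then predict the withheld late time points.
predicted = mediator(t_all[4:], *popt)
for t, pred, obs in zip(t_all[4:], predicted, observed[4:]):
    print(f"t={t:4.0f} h: predicted {pred:6.2f} vs observed {obs:6.2f}")
```

In the individualized-treatment setting, the fitted parameters would then be perturbed to represent a candidate intervention, and the re-simulated trajectory compared against the patient’s own untreated prediction.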

Conclusions and Perspectives

What is clear now is that the biocomplexity of the pathophysiological processes underlying the systems-level diseases that represent the greatest health risk today, such as cancer, diabetes, atherosclerosis, Alzheimer’s disease, sepsis, and disordered wound healing, confounds the use of traditional experimental methods. Reductionist experiments and data-oriented descriptive methods are unable to evaluate and test multiscale causality, an essential and critical step in the design and development of therapeutic interventions for systems-level diseases. The complexity and dimensionality (in terms of multiple factors and variables) of these biomedical problems, particularly with respect to translating mechanisms across scales of organization, essentially preclude such approaches. Reliance on only these traditional methods can produce, at best, “one-off” products based on fortuitous discovery, but does not provide a robust and sustainable strategy. The crux of the translational dilemma is the ability to evaluate mechanisms and causality sufficiently in a multidimensional, high-throughput world, as is potentially possible with dynamic computational modeling and the application of principles from Translational Systems Biology [19]. The use of dynamic computational modeling can provide a framework that allows the introduction of “theories” into biomedicine, in order to facilitate the translation of robust conceptual structures and architectures across experimental platforms as well as across the differences among individual patients [19, 79, 91]. Specifically, we assert that the computational approaches described in this chapter, with the explicit goal of addressing the challenges of implementing the final stage of bringing a therapy to the bedside, represent a necessary step toward obtaining and implementing effective therapeutics for the complex diseases that challenge us today and will challenge us in the future.