Defining Our Terms

The term “personalized medicine” first came into use following the completion of the Human Genome Project in 2003, when Dr. Leroy Hood coined the phrase to describe the use of individual genetic signatures to risk-stratify individuals and thereby enact targeted preventive strategies for disease (Hood 2003). Since that time, other terms have emerged, including P4 medicine (predictive, personalized, preventative, and participatory) (Hood and Friend 2011), precision medicine, and individualized medicine. All of these prompt clinicians and researchers to acknowledge the heterogeneity of patients and of the diseases themselves, and to approach treatment accordingly (Agusti et al. 2015).

The terminology surrounding this concept has further nuance and variation that warrants clarification. Ideally, a “personalized” approach should allow for targeted treatments for groups with defined biological features, but how these groups are defined and studied varies in the literature. For instance, instead of “personalized” medicine, some advocate the use of the term “strata” to refer to groups of patients with similar features, thus resulting in the term “stratified” medicine (Hingorani et al. 2013). Others use the term “phenotype” to define populations based on a condition’s clinical presentation that can therefore be used as criteria for study enrollment.

When these groups have similar presentations but have subclinical differences, they can be further broken down into “endotypes,” a term that implies an understanding of biological mechanism for a presentation (Lotvall et al. 2011). In practice, however, the mechanism may not be known, so these subgroups may be referred to as subphenotypes (Bhatraju et al. 2016), but this term, too, has variable connotation, so the term “subtype” is occasionally used (Thompson et al. 2017). The purpose of these groupings is to help guide therapy and improve outcome, but the terms defining outcome also vary. The word “prognostic” implies that a given quality portends or is associated with a specific outcome. The term “predictive” is similar, but the outcome in question is more specifically related to drug or device responsiveness.

How Does Personalized Medicine Influence Critical Care?

The impact of personalized medicine on critical care can be seen in the following five areas: risk stratification, diagnosis, treatment plans and response, prognostication, and research.

Stratifying clinical risk has significant implications for triage and timing of intervention. For example, predicting which of our patients are at highest risk of developing Acute Respiratory Distress Syndrome (ARDS) or Acute Kidney Injury (AKI) may allow us to intervene upon those individuals sooner and in a more informed manner. Tools to make more accurate diagnoses (e.g. does an individual have sterile inflammation or a bacterial infection?) have immense implications for patient outcome and resource utilization. Tailoring treatment plans to those most likely to respond to a given intervention increases the impact of what we do and decreases unnecessary off-target effects; for example, we know that there are different responses to the commonly used sedative dexmedetomidine (Holliday et al. 2014) and variations in adverse events during use of vasopressin (Anantasit et al. 2014). Personalized medicine may help us prognosticate for critically ill patients, which is vital for care planning on a systems and individual level. Finally, Seymour et al. (2017) outlined three ways in which personalized medicine could impact research: (1) retrospective studies can uncover associations that are predictive or prognostic of a given outcome, (2) treatment response characteristics (e.g. determining patient subsets who might benefit from a particular drug) can guide trial enrollment strategies in order to enrich groups (Meurer et al. 2012), and (3) heterogeneity of treatment effect can be identified post-trial by determining which treatment strategies work better in some patients versus others (Iwashyna et al. 2015).

The promise of this approach seems substantial, and many areas of medicine have begun to harness these tools, particularly oncology. But critical care medicine has become increasingly protocol-driven, especially since the positive outcomes associated with early goal-directed therapy were seen in sepsis (Rivers et al. 2001). Indeed, there has been a push toward checklists and bundles to standardize care for a population, whereas personalized medicine would seek to individualize care for a single patient or small group of patients. And while the standardized approach has improved outcomes in many arenas, it has limitations, and on a practical level a patient’s clinical course often requires clinicians to practice in an individualized fashion. Currently, this individualization may or may not be based on evidence and may vary greatly between clinicians. Advances in personalized medicine will allow it to be implemented based on strong evidence and in a much higher-fidelity manner.

What Tools Do We Have to Bring Personalized Medicine to the Intensive Care Unit?

While the current utilization of personalized medicine in the intensive care unit (ICU) may be limited, tools abound in the research space to improve this going forward. These include biomarkers, the use of large-scale data (with neural networks and artificial intelligence), and the various omics approaches. Applications in specific disease states will be discussed later in the chapter.

A biomarker is a physiologic or molecular characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to therapeutic intervention (Biomarkers Definitions Working Group 2001). Biomarkers play a role in diagnosis, treatment monitoring, and outcome stratification, and they can serve as surrogate endpoints in clinical research (Sandquist and Wong 2014). They have been studied in numerous critical illness disease states, including ARDS, AKI, sepsis, and pulmonary embolism.

The sheer volume of genetic data that exists for analysis provides untold opportunity to further our understanding of critical illness disease states. In fact, some estimates suggest that since the Human Genome Project was completed, the volume of data has grown tenfold each year (Berger et al. 2013). The term “omics” describes the identification and use of specific molecular markers to improve the fidelity of diagnosis and treatment. These markers include genes, single nucleotide polymorphisms (SNPs), proteins (cell signaling molecules, those that influence DNA expression, etc.), messenger RNA, and metabolites. Again, the role of this type of inquiry in furthering our understanding of specific disease states will be discussed later in the chapter, but this approach is also frequently used to further our understanding of drug therapy (both response and adverse effects), an application known as pharmacogenomics.

As more ICUs begin to use electronic health records, physiologic and lab data is accruing automatically. Processing and analyzing this mass of data may require leveraging neural networks and artificial intelligence systems that can appropriately mine and process this information. While these methods may not provide mechanistic information, they can help provide real-time feedback about data as it is generated such that study models can adapt continuously.
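Where such adaptive, data-driven models are adopted, one simple pattern is incremental (online) learning, in which a model is updated as each new batch of EHR data arrives. The sketch below is purely illustrative: the vital-sign and laboratory features and the deterioration label are hypothetical, and it uses scikit-learn's SGDClassifier only as a convenient example of an incrementally updatable model.

```python
# Illustrative sketch (not a validated clinical model): an online classifier that
# updates incrementally as new ICU observations stream in from the EHR.
# Assumes scikit-learn >= 1.1 (where the logistic loss is named "log_loss").
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = no deterioration, 1 = deterioration (hypothetical label)

def on_new_batch(features, labels):
    """Update the model with the latest batch of vital-sign/lab features."""
    model.partial_fit(features, labels, classes=classes)

# Example: two hypothetical patients with [heart_rate, MAP, lactate, creatinine]
X = np.array([[112.0, 55.0, 4.2, 2.1],
              [ 84.0, 78.0, 1.1, 0.9]])
y = np.array([1, 0])
on_new_batch(X, y)
print(model.predict_proba(X)[:, 1])  # current risk estimates for the same patients
```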

Current Status of Personalized Medicine in Critical Care

Though admission to the intensive care unit occurs for a variety of disease processes, morbidity and mortality in the ICU are largely related to organ system failure. Despite various process improvement strategies geared toward improving ICU outcomes, mortality remains high. The leading cause of death in the ICU is multi-organ failure, the final common pathway of the various etiologies of critical illness (Waydhas et al. 1992; Orban et al. 2017). Among critically ill patients, mortality is 15–28% when more than one organ system fails (Elias et al. 2015).

The underlying pathophysiological heterogeneity of various clinically similar diagnoses in the ICU calls for personalized medicine to improve outcomes. With current technological advancements we are able to leverage the whole spectrum of data, from various omics-based approaches to real-time artificial intelligence-based decision making, to institute targeted therapy in the ICU. Below we describe the current status of various precision medicine approaches to different clinical questions and common scenarios encountered in the ICU.

Pharmacogenomics in the ICU

Pharmacogenomics is actively used in the oncology space, and approximately 10% of all drugs have pharmacogenetic information and/or recommendations included in their product labeling (Hamburg and Collins 2010; Allen and Gelot 2014). Much of the existing work in this space has focused on elucidating the SNPs of the cytochrome P-450 (CYP) enzymes, which are central to the metabolism, absorption, distribution, and elimination of many drugs. Applications in the ICU setting remain limited, likely because of the impact of critical illness on basic pharmacokinetic and pharmacodynamic variables—like volume status and organ function—as well as significant polypharmacy in this population. There are, however, a number of medications commonly utilized in the ICU for which there are known SNPs that affect drug function. These include opioids (genetic variations impact pharmacokinetics/pharmacodynamics), clopidogrel (CYP2C19 polymorphisms affect levels of platelet inhibition), warfarin (CYP2C9 and VKORC1, a vitamin K-related gene) (Bodin et al. 2005), succinylcholine (butyrylcholinesterase) (McGuire et al. 1989), and procainamide (N-acetyltransferase-2) (Hamdy et al. 2003).

Two commonly utilized drugs in the ICU that have been examined in this manner are the sedative dexmedetomidine and vasopressin. Dexmedetomidine is an alpha-2 agonist commonly used for sedation, but it can also be associated with hypo- and hypertension depending on the dose and speed of administration. Clinically, there is wide variability both in susceptibility to its sedative effects and in blood pressure response. Holliday et al. (2014) reviewed the existing studies examining the effect on sedation and blood pressure of polymorphisms in the alpha-2 adrenoreceptor genes, CYP2A6 (which encodes the enzyme responsible for the metabolism of dexmedetomidine), and the uridine diphosphate glucuronosyltransferase genes responsible for non-CYP metabolism. Only one study demonstrated a positive result, showing that a specific allele of the ADRA2A alpha-2 adrenoreceptor gene reduced efficacy and increased time to effect after receiving dexmedetomidine; however, this result has not yet been replicated (Esteller 2008).

Vasopressin has also been studied: SNPs in genes related to its site of action were investigated. Although the result has not been replicated, the authors found that serious adverse events were associated with the presence of an SNP near AVPR1a, a vasopressin receptor gene (Anantasit et al. 2014).

Can We Predict Individuals Who Are More Susceptible to ARDS or Are at Higher Risk of Adverse Outcomes?

Risk factors associated with ARDS development and its severity have been studied using various omics approaches, including candidate gene association studies, genome-wide association studies, and whole-exome sequencing (Reilly et al. 2017).

Candidate gene association studies investigate the association of ARDS risk with a set of a priori selected genes known to be linked to the biological mechanisms of ARDS. These studies have identified several genes associated with ARDS susceptibility and outcome. For example, a functional variant of the gene encoding angiotensin-converting enzyme, the ACE I/D polymorphism, has been associated with a higher mortality risk (Matsuda et al. 2012). An SNP in the advanced glycosylation end-product specific receptor (AGER) gene (which encodes a marker of pulmonary epithelial injury) confers a higher risk of developing ARDS as well as higher mortality among patients with ARDS; susceptible patients also have a higher plasma concentration of sRAGE (soluble receptor for advanced glycosylation end-products), a measurable biomarker (Jabaudon et al. 2018). Two functional polymorphisms of IL17 have been associated with significant risk and prognosis in ARDS (Xie et al. 2019). On the other hand, some candidate genes have been found to be protective: a genetic variant of the leucine-rich repeat-containing 16A (LRRC16A) gene, which has a role in platelet formation, reduces ARDS risk (Wei et al. 2015). Studies have revealed at least 90 candidate genes associated with ARDS, but their relevance has been questioned due to lack of reproducibility and difficulties in interpretation (Hernandez-Beeftink et al. 2019).

Only two genome-wide association studies (GWAS) have evaluated ARDS susceptibility, one evaluating trauma-related ARDS and the other all-cause ARDS (Christie et al. 2012; Bime et al. 2018). Although no locus achieved genome-wide significance, marginal significance was seen in more than 150 loci, and both studies revealed previously unknown ARDS susceptibility genes. Other investigators have used Mendelian randomization analysis, which explores genetic variability in intermediate features of ARDS that might have a causal relationship with the syndrome. In addition to risk stratification, identification of such causal relationships may also help develop individual-specific therapeutic targets. For instance, Mendelian randomization analysis showed that plasma levels of angiopoietin-2, a biomarker of endothelial permeability and activation, were strongly associated with ARDS (Wada et al. 2013). Additionally, genetic variation in the angiopoietin-2 gene (ANGPT2) has been linked to risk of ARDS (Su et al. 2009). Pharmacological modification of angiopoietin-2 levels or its signaling may bring us closer to using precision medicine for the prevention or treatment of ARDS (Reilly et al. 2018).

In addition to omics methods, clinical prediction scores exist for quantifying ARDS risk. The most widely studied is the Lung Injury Prediction Score, which was derived from a cohort of more than 5000 patients at risk of acute lung injury at hospital admission, 7% of whom developed acute lung injury (Gajic et al. 2011). The score takes as input a variety of variables, including comorbidities and presenting diagnosis. Prediction scores can also be combined with biomarkers to improve predictive ability: after studying various plasma biomarkers of ARDS, Xu and colleagues concluded that angiopoietin-2 plasma levels markedly enhanced the ability of the Lung Injury Prediction Score to predict ARDS (Xu et al. 2018).

Can Ventilation Strategies Be Individualized to Patients?

Though key advances in lung-protective ventilation and resuscitation have improved mortality from ARDS, the associated morbidity and mortality remain substantial (Phua et al. 2009). The landmark ARDSNet trial demonstrated that ventilating patients with a low tidal volume (4–6 ml/kg of predicted body weight) improved mortality compared to a higher tidal volume strategy (Acute Respiratory Distress Syndrome Network et al. 2000). No subsequent ventilation strategy has consistently reduced mortality in ARDS.

Mechanical ventilation can worsen lung injury in ARDS as well as in patients with normal lungs. A heterogeneous portion of the lung remains collapsed or atelectatic in ARDS and contributes to secondary inflammation and lung injury. Atelectasis during mechanical ventilation causes lung injury in at least two ways: (1) dynamic recruitment and de-recruitment with each breath, causing dynamic strain, and (2) stress concentration between open and collapsed alveoli (Nieman et al. 2017). It is well known that application of positive end-expiratory pressure (PEEP) reduces atelectasis, improves lung compliance, and potentially reduces lung injury; however, the amount of PEEP that is most beneficial is unclear. Various studies have evaluated low versus high PEEP strategies and found neither to be superior (Briel et al. 2010). The emerging concept is that, rather than one strategy fitting all, the amount of PEEP applied needs to be personalized based on patient characteristics and type of lung injury.

Various strategies are currently being explored to personalize PEEP. Traditionally, optimal PEEP has been chosen as the level that produces the best oxygenation or the best compliance. Setting PEEP based on lung compliance (i.e., the PEEP that results in the highest compliance) makes physiological sense, especially since compliance is predictive of mortality (Amato et al. 2015). Another strategy is to choose the minimum PEEP needed to keep all recruitable portions of the lung open and hence minimize atelectasis. This can be achieved by various methods. Volumetric capnography allows calculation of dead space and hence allows PEEP to be personalized in each patient to minimize atelectasis (Suarez-Sipmann et al. 2014); this is valuable, since increased dead space is independently associated with mortality in ARDS (Cepkova et al. 2007). More recently, investigators have used bedside imaging modalities such as lung ultrasound to monitor atelectasis and thereby titrate PEEP. A novel bedside device, electrical impedance tomography, has become available that allows measurement of regional variations in lung ventilation at the bedside (Bikker et al. 2010). Animal studies have shown that electrical impedance tomography-guided ventilation improved regional and global lung compliance and reduced lung injury on histopathology (Wolf et al. 2013). Another approach to minimizing atelectasis is to set PEEP so that the end-expiratory transpulmonary pressure (airway opening pressure minus pleural pressure) remains slightly positive, with pleural pressure estimated via esophageal manometry (Talmor et al. 2008), as sketched below. Although various strategies are available to personalize PEEP, the best strategy remains unknown and needs further evaluation.
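As an illustration of the esophageal-manometry strategy just described, the short sketch below computes end-expiratory transpulmonary pressure (airway opening pressure minus pleural pressure, with esophageal pressure used as the pleural surrogate) and flags when PEEP might be raised. The numbers and the "slightly positive" target are illustrative only, not a clinical protocol.

```python
# Minimal sketch of the esophageal-manometry approach described above: estimate
# end-expiratory transpulmonary pressure and flag whether PEEP should be raised.
# Values are illustrative, not clinical recommendations.
def transpulmonary_pressure(airway_pressure_cmH2O, esophageal_pressure_cmH2O):
    """Transpulmonary pressure = airway (opening) pressure - pleural pressure,
    with esophageal pressure used as a surrogate for pleural pressure."""
    return airway_pressure_cmH2O - esophageal_pressure_cmH2O

peep = 10            # set PEEP, cmH2O
pes_end_exp = 12     # measured end-expiratory esophageal pressure, cmH2O
ptp = transpulmonary_pressure(peep, pes_end_exp)
if ptp < 0:
    print(f"End-expiratory transpulmonary pressure {ptp} cmH2O is negative; "
          "consider increasing PEEP to keep it slightly positive.")
else:
    print(f"End-expiratory transpulmonary pressure is {ptp} cmH2O.")
```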

Is There Evidence for Different Endotypes in ARDS and Can Treatment Be Personalized Based on These Endotypes?

Very few strategies other than low tidal volume ventilation have been shown to improve outcomes in patients with ARDS. A likely explanation is that most ARDS trials have applied strategies to the whole ARDS cohort, whereas some of these strategies might be more effective in certain subsets of ARDS patients. This is supported by the fact that ARDS is a heterogeneous disease, and tailoring therapy to endotypes may facilitate therapeutic discovery.

Calfee et al. have used latent class analysis to identify subgroups among patients with ARDS. Latent class analysis uses mixture modelling to identify the best-fitting model based on a set of variables, assuming that the sample contains unknown groups; it explores the presence of subgroups within a cohort defined by a similar combination of baseline variables. Using clinical and biological variables from previously published ARDS studies, Calfee et al. (2014) identified two subgroups: a hyperinflammatory phenotype characterized by higher levels of inflammatory biomarkers, shock, metabolic acidosis, and mortality, and a second, non-hyperinflammatory phenotype. These clinical phenotypes were also found to be stable over time (Delucchi et al. 2018). A minimal sketch of this style of mixture modelling follows.
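The sketch below illustrates the general idea of mixture-model-based subgroup discovery on synthetic data. It uses a Gaussian mixture with BIC-based model selection as a stand-in for the latent class analyses cited above; the variables, values, and class structure are invented for illustration and do not reproduce the published analyses.

```python
# Illustrative sketch of mixture modelling for subgroup discovery, in the spirit of
# latent class analysis (here a Gaussian mixture over synthetic continuous variables).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic cohort: columns might represent, e.g., an inflammatory marker,
# bicarbonate, and vasopressor dose (hypothetical units).
group1 = rng.normal(loc=[2.0, 24.0, 0.0], scale=[0.5, 2.0, 0.1], size=(300, 3))
group2 = rng.normal(loc=[6.0, 16.0, 0.3], scale=[1.0, 2.5, 0.2], size=(150, 3))
X = np.vstack([group1, group2])

# Fit 1-4 class models and pick the number of classes with the lowest BIC.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 5)}
best_k = min(models, key=lambda k: models[k].bic(X))
labels = models[best_k].predict(X)
print(f"Best-fitting model has {best_k} classes; class sizes: {np.bincount(labels)}")
```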

As expected, these two phenotypes respond differently to therapeutic strategies. In a cohort of 1000 patients with ARDS, Famous et al. (2017) found that a restrictive fluid management strategy reduced mortality in the hyperinflammatory group while increasing mortality in the non-hyperinflammatory group. Similarly, statins improved survival in the hyperinflammatory phenotype (Calfee et al. 2018). To facilitate identification in a clinical setting, Famous et al. (2017) found that measuring serum concentrations of interleukin-8, bicarbonate, and tumor necrosis factor receptor-1 can identify the hyperinflammatory sub-phenotype with excellent accuracy.

Can We Predict Susceptibility and Survival of Sepsis?

Sepsis is a heterogeneous disease influenced by various factors including immune status, genetic predisposition, pathogen type, and extent of infection. Genetic variation may influence the risk of disease and its clinical evolution, and such variations have been studied in an attempt to develop novel personalized therapeutic strategies. Candidate gene association studies of sepsis susceptibility and outcome have examined a variety of target genes; for instance, certain toll-like receptor 1 polymorphisms have been associated with organ dysfunction, proinflammatory responses, and sepsis outcome (Wurfel et al. 2008; Pino-Yanes et al. 2010; Thompson et al. 2014). However, the results of these studies have often been inconsistent and not reproducible in different populations, potentially because the populations studied have been small and heterogeneous (Clark and Baudouin 2006).

GWAS could circumvent some of these limitations by performing a relatively unbiased evaluation of genomic risk. To date, three genome-wide association studies have been published in sepsis (Man et al. 2013; Rautanen et al. 2015; Scherag et al. 2016). Rautanen and colleagues used GWAS to explore the association of 6 million SNPs with 28-day mortality after sepsis from community-acquired pneumonia. Among the 11 loci identified, an SNP in the intronic region of the FER gene on chromosome 5 was consistently associated with mortality in all examined cohorts (Rautanen et al. 2015). Scherag and colleagues also evaluated 28-day mortality in adult patients with sepsis; they found 14 loci, none of which overlapped with the loci found by Rautanen or with the FER gene (Scherag et al. 2016). This lack of reproducibility reduces clinical utility; however, studying more homogeneous populations and larger sample sizes may yield clinically useful associations.

Another promising approach to predicting the development of sepsis uses high-resolution vital sign data and electronic medical record data. This has been used to develop clinician decision support tools that can identify patients at highest risk of future sepsis (Nemati et al. 2018). Development of these tools harnesses complex artificial intelligence-based machine learning techniques, including deep learning with long short-term memory neural networks (Saqib et al. 2018); a minimal sketch of such an architecture follows. A recent meta-analysis found that machine learning models outperform traditional sepsis scoring systems such as the sequential organ failure assessment score (Islam et al. 2019).
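To make the cited architecture concrete, the sketch below shows a minimal long short-term memory (LSTM) classifier over hourly physiologic sequences. The features, layer sizes, and synthetic data are placeholders; this is not a reconstruction of any published sepsis model.

```python
# Minimal sketch of an LSTM classifier over hourly vital-sign/lab sequences.
# Layer sizes, features, and labels are placeholders for illustration only.
import numpy as np
import tensorflow as tf

n_hours, n_features = 24, 6   # e.g. HR, RR, MAP, temperature, WBC, lactate (hypothetical)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_hours, n_features)),
    tf.keras.layers.Masking(mask_value=0.0),         # ignore padded/missing hours
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid")   # probability of impending sepsis
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Synthetic example: 128 patients, each a 24-hour sequence, with binary sepsis labels.
X = np.random.rand(128, n_hours, n_features).astype("float32")
y = np.random.randint(0, 2, size=(128, 1))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```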

What Is the Role of Metagenomics in Sepsis?

Metagenomics is the study of genetic material recovered directly from an environment. It has been used to study the microbiome, i.e., the genetic material of all microbes living in or on the human body. The intestinal microbiome forms a complex ecosystem and is involved in a wide array of functions, including production of hormones and host immunity. Changes in the diversity and quantity of gut microbiota, called dysbiosis, alter host immunity to pathogens and may increase susceptibility to infections and sepsis. In addition to the intestinal microbiome, the respiratory microbiome is also markedly altered in patients with sepsis (Lee and Banerjee 2020). A better understanding of the relationship between the human microbiome and host immunity will help develop strategies to favorably modulate microbiota and to develop personalized therapy for individuals with dysbiosis.

Modulation of gut microbiota can be achieved by administration of a pool of microbes normally found in the gut (probiotics) and/or by improving the intestinal microenvironment using agents that favor growth of normal gut microbiota (prebiotics) (Haak et al. 2018). A meta-analysis of 30 studies evaluating the effect of probiotics in critically ill patients showed a significant reduction in infections, including ventilator-associated pneumonia, but no effect on mortality or length of stay (Manzanares et al. 2016). A randomized controlled trial of 4500 infants in rural India found that administering a combination of Lactobacillus plantarum (a probiotic) and fructooligosaccharide (a prebiotic) reduced mortality and the incidence of sepsis (Panigrahi et al. 2017). Fecal microbiota transplantation is another strategy to treat dysbiosis and is now commonly used to treat recurrent Clostridium difficile infection. Regulating the human microbiota in individuals with dysbiosis is an emerging area of research and may have the potential to reduce the incidence of sepsis, improve immediate outcomes, and reduce long-term mortality after sepsis.

Can We Differentiate Sepsis from Non-infectious Inflammation?

Sepsis is defined as life-threatening organ dysfunction caused by a dysregulated host response to infection. Hence, a diagnosis of sepsis has two components: the presence of organ dysfunction and the suspicion or presence of infection. Organ dysfunction is easily quantified with the numeric sequential organ failure assessment score. However, tools for differentiating infection from non-infectious inflammation are limited, and the distinction is largely based on clinical suspicion. The current gold standard for identification of infection is growth of a pathogen in culture media, but culture-based techniques suffer from various limitations, including long time to diagnosis and poor sensitivity. Because each hour of delay in administration of effective antimicrobial therapy has been associated with a measurable increase in mortality in various studies, antimicrobials are often given empirically and indiscriminately, causing avoidable toxicity, increased cost, and growth of antimicrobial resistance.

Development of a biomarker to diagnose sepsis with precision would help target delivery of antimicrobial agents only to individuals with infection. Over 150 protein and cytokine biomarkers have been studied in the context of sepsis. Procalcitonin, by far the most commonly studied, has been shown to be relatively specific for bacterial infection in various patient populations; a recent meta-analysis reported a pooled sensitivity of 0.77 and pooled specificity of 0.79 for procalcitonin to differentiate infection from non-infectious inflammation (Wacker et al. 2013). Other biomarkers such as C-reactive protein and interleukin-6 are equally elevated in infection and non-infectious inflammation, limiting their utility.
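To illustrate what pooled test characteristics of this kind imply at the bedside, the short example below converts the reported sensitivity and specificity into likelihood ratios and a post-test probability; the pre-test probability of bacterial infection used here is an assumed value for illustration only.

```python
# Worked example (illustrative arithmetic only): converting the pooled sensitivity
# and specificity quoted above into likelihood ratios and post-test probabilities.
sensitivity, specificity = 0.77, 0.79
lr_positive = sensitivity / (1 - specificity)   # ~3.7
lr_negative = (1 - sensitivity) / specificity   # ~0.29

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

pre_test = 0.40  # assumed (hypothetical) pre-test probability of bacterial infection
print(round(post_test_probability(pre_test, lr_positive), 2))  # ~0.71 if procalcitonin elevated
print(round(post_test_probability(pre_test, lr_negative), 2))  # ~0.16 if procalcitonin low
```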

Recently, there has been interest in utilizing systems biology-based approaches to identify differences in the host transcriptional response between infection and non-infectious inflammation (van Engelen et al. 2018). RNA biomarkers have the advantage that they can be incorporated into polymerase chain reaction-based bedside testing, making them attractive for integration into rapid clinical decision making. Three diagnostic RNA marker panels have been studied in this context with promising results: the sepsis meta score (Sweeney et al. 2015), SeptiCyte Lab (McHugh et al. 2015), and the FAIM3:PLAC8 ratio (Scicluna et al. 2015). The sepsis meta score and SeptiCyte Lab consist of panels of 11 and 4 gene transcription products, respectively, and were developed to diagnose sepsis, while the FAIM3:PLAC8 gene expression ratio was developed to diagnose community-acquired pneumonia. These tools await further trials before they can be incorporated into sepsis diagnostic algorithms.

Can We Personalize Antibiotic Regimens in Sepsis?

Choosing an effective antibiotic regimen and administering it early in the course of disease markedly improves sepsis survival (Rhodes et al. 2017). Culture-based methods remain the gold standard and are most widely used to identify the type of microbial infection (viral versus bacterial) and the presence of antimicrobial resistance. However, they take days to report and have a high rate of false negative results.

Identifying the pathogen early favors a more effective and personalized approach to choosing antimicrobial agents. Based on the concept that bacterial and viral agents generate different host immune responses, Tsalik and colleagues identified transcriptomic biomarkers to differentiate between bacterial and viral causes of acute respiratory illness (Tsalik et al. 2016); this was validated in publicly available genomic databases, showing high sensitivity and specificity (AUC > 0.9). Several rapid molecular pathogen-specific diagnostic tools have also been developed for early identification of the causative microbial agent. Pathogen-specific assays are not sufficient by themselves because of the large number of pathogens that can cause sepsis, but they can be used to rule out (or confirm) certain infections such as malaria and dengue. Various multiplex polymerase chain reaction-based tests are available that can detect a predetermined array of bacteria and fungi, and some allow detection of certain resistance genes as well. For instance, Staphylococcus aureus acquires methicillin resistance by insertion of the mecA gene into its chromosome, which can be detected by polymerase chain reaction-based techniques (Wang et al. 2013). Polymerase chain reaction is also being used to identify certain genotypes of vancomycin-resistant enterococci, specifically VanA and VanB (Seo et al. 2011). The majority of these tests provide no information on antimicrobial susceptibility. They can be used directly on clinical specimens or after enrichment in a culture medium; however, direct detection has many disadvantages, including false positives, contamination, and interference from human DNA. Hence, rapid molecular diagnostic tests have some role in early detection and pathogen identification in sepsis, but in their current state they complement rather than replace microbial culture data (Rello et al. 2018).

Is There Evidence for Endotypes in Sepsis and Instituting Endotype Specific Treatment Strategies?

Various investigators have used transcriptomics to identify sepsis endotypes, which might have therapeutic and prognostic implications. Wong and colleagues identified two subgroups (A and B) among pediatric septic shock patients based on a 100-gene expression panel representing adaptive immunity and glucocorticoid receptor signaling pathways. Subgroup A, in which these genes are downregulated, has worse clinical outcomes (Wong et al. 2015). Using predictive enrichment strategies, they were able to show that subgroup B is more likely to benefit from corticosteroids (Wong et al. 2016). This could pave the way for future trials evaluating the effect of corticosteroids in sepsis in an enriched cohort.

Davenport and colleagues used a similar approach to identify two transcriptomic signatures, sepsis response signatures 1 and 2, among critically ill patients with community-acquired pneumonia. Surprisingly, there was no difference in expression of proinflammatory cytokine genes between the two groups, yet sepsis response signature 1 was associated with a much higher mortality (Davenport et al. 2016). The sepsis endotypes represented by sepsis response signatures 1 and 2 have also been replicated in patients who developed sepsis from peritonitis (Burnham et al. 2017). Predictive enrichment based on these endotypes may be useful in developing treatment strategies that work in a specific subgroup.

Can Fluid Management Be Personalized Among Patients with Septic Shock?

The surviving sepsis guidelines suggest initial resuscitation with 30 ml/kg of crystalloids within 3 hours of presentation (Rhodes et al. 2017). In the past, guidelines advocated early goal-directed fluid therapy based on the trial by Rivers et al. (2001). Subsequently, three randomized trials showed no improvement in outcomes with early goal-directed fluid therapy (ARISE Investigators et al. 2014; Pro et al. 2014; Mouncey et al. 2015). In fact, a meta-analysis of these three trials showed that broad protocolized approaches like early goal-directed therapy lead to greater fluid administration, a higher ICU admission rate, and increased ICU resource utilization (Angus et al. 2015). Accordingly, the latest surviving sepsis guidelines, from 2016, recommend a more personalized approach to fluid therapy: fluid resuscitation after the initial bolus should be based on dynamic indices of volume responsiveness measured in each patient (Rhodes et al. 2017). These include stroke volume variation and pulse pressure variation (a minimal calculation is sketched below), but such indices are not reliable in patients who breathe spontaneously or have arrhythmias. Further clinical trials are needed to identify the optimal strategy to personalize fluid therapy in septic or distributive shock.
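As a concrete example of one such dynamic index, the sketch below computes pulse pressure variation (PPV) from hypothetical beat-to-beat pressures over a respiratory cycle. The commonly quoted ~13% threshold is included only for illustration and is not a recommendation from this chapter.

```python
# Minimal sketch of pulse pressure variation (PPV) from beat-to-beat systolic and
# diastolic pressures over one respiratory cycle:
#   PPV(%) = 100 * (PPmax - PPmin) / mean(PPmax, PPmin)
# The ~13% cutoff is a commonly cited threshold, used here only for illustration.
def pulse_pressure_variation(systolic, diastolic):
    pulse_pressures = [s - d for s, d in zip(systolic, diastolic)]
    pp_max, pp_min = max(pulse_pressures), min(pulse_pressures)
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

# Hypothetical beats across one mechanical breath (mmHg)
systolic  = [118, 112, 105, 102, 108, 115]
diastolic = [ 62,  60,  58,  57,  59,  61]
ppv = pulse_pressure_variation(systolic, diastolic)
print(f"PPV = {ppv:.1f}% -> {'likely' if ppv > 13 else 'unlikely'} fluid responsive")
```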

Can We Predict Who Will Develop Acute Kidney Injury?

AKI refers to a spectrum of renal dysfunction ranging from minor dysfunction to the need for renal replacement therapy. AKI is immensely important and common in the critical care setting. Variability in the definition of AKI has made its exact incidence difficult to determine, but it has been reported as affecting between 22% and two-thirds of all ICU patients (Hoste and Kellum 2006), and numerous studies have demonstrated an association between AKI and adverse outcomes, including mortality.

At present, our ability to predict the development, severity, and impact of AKI for each patient is limited. Serum creatinine and urine output are the most commonly used metrics for assessing renal function. The limitation of creatinine, however, is that it is a surrogate for glomerular function and does not provide information about tubular function; its rise may also lag days behind the actual insult, so intervention may begin only after significant tubular damage (Koyner and Parikh 2013). Given these limitations, biomarkers that can detect injury early can help institute treatment strategies personalized to the individual.

Neutrophil gelatinase-associated lipocalin (NGAL)—also known as siderocalin or lipocalin-2—is a molecule that scavenges pericellular labile iron released from organelles after an ischemic or toxic insult, which may help attenuate oxidative stress during injury. It is expressed in multiple types of epithelia throughout the body, including renal tubular cells, and appears to be substantially upregulated in AKI. It has been shown to detect subclinical AKI both prior to a rise in creatinine and without a rise in creatinine (Haase et al. 2011). NGAL is available for use in routine care in many institutions, allowing for earlier identification of injury. Other molecular biomarkers include enzymes, proinflammatory mediators, structural proteins that are released during tubular damage, markers of glomerular filtration that are reabsorbed by functional tubular epithelium, hormonal markers, and proteins involved in cell cycle regulation (Malhotra and Siew 2017). While studies have identified scores of potential target molecules, which has in turn furthered our understanding of the pathophysiology of kidney injury, their use has not yet impacted clinical outcomes.

Another approach is to identify tests that can give clinicians information about functional organ reserve, similar to stress testing in cardiology (Ronco and Chawla 2016). Ronco’s group proposed the use of fixed protein loads to investigate a kidney’s ability to increase glomerular filtration rate when faced with stress (Sharma et al. 2016). In addition, Chawla et al. developed a “furosemide stress test” that, when used in patients with early AKI, was able to identify progression to AKI Network stage III injury with an area under the curve of 0.87 (Chawla et al. 2013).

Two research groups have recently harnessed machine learning on electronic health record data to develop models that predict the development of AKI from data points collected routinely during care. From a discovery cohort of 70,000 inpatients, Koyner et al. developed an algorithm for predicting kidney injury; their model had a sensitivity of 84% and a specificity of 85% for stage 2 AKI and predicted it a median of 41 h before patients met diagnostic criteria (Koyner et al. 2018). Tomasev et al. utilized a deep learning approach to analyze over 700,000 adult patients across 172 inpatient and 1062 outpatient sites to create a predictive model for AKI. They were able to predict 55.8% of all inpatient episodes of AKI and 90.2% of those injuries that would require renal replacement therapy, with up to 48 h of lead time over the clinical manifestation of AKI, and they were also able to generate relevant clinical features to support alerts and to predict laboratory test trajectories (Tomasev et al. 2019). A minimal sketch of this general modelling approach follows.
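The sketch below illustrates the general modelling approach on synthetic data, using a gradient-boosted classifier over a few hypothetical EHR features. It is not a reconstruction of either published model, and the printed AUC is meaningful only within this toy example.

```python
# Illustrative sketch: a gradient-boosted classifier trained on routinely collected
# features to predict later AKI. Features, data, and performance are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: age, baseline creatinine, MAP, vasopressor use, nephrotoxin exposure
X = np.column_stack([
    rng.normal(65, 15, n), rng.normal(1.0, 0.3, n), rng.normal(70, 10, n),
    rng.integers(0, 2, n), rng.integers(0, 2, n),
])
# Synthetic outcome loosely tied to creatinine, hypotension, and nephrotoxins
risk = 0.8 * X[:, 1] - 0.02 * X[:, 2] + 0.5 * X[:, 4]
y = (risk + rng.normal(0, 0.5, n) > np.median(risk)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 2))
```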

Can We Predict Who Will Develop Delirium?

ICU delirium is challenging for patients and caregivers alike, and it has been linked to prolonged admission (McCusker et al. 2003; Aitken et al. 2017), higher rates of readmission (Bokeriia et al. 2009), lower quality of life, increased mortality (Aitken et al. 2017), and worse long-term cognitive function (Marcantonio et al. 2000). Estimates of incidence range from 10 to 90% depending on the factors leading to admission (Maldonado 2008; Devlin et al. 2018). This range may be partially explained by the fact that many patients experience hypoactive symptoms and therefore may not be identified.

The CAM-ICU (Confusion Assessment Method for the Intensive Care Unit) is the most ubiquitous tool for diagnosing delirium in current clinical practice; it has been well validated in large studies and carries a sensitivity and specificity approaching 100% (Ely et al. 2001). Early identification of patients at risk for delirium will help direct delirium prevention strategies to patients who might benefit from them. Several delirium prediction tools, including the PRE-DELIRIC (PREdiction of DELIRium in ICu patients) and Lanzhou models, have been studied in large cohorts, with AUCs of 0.78 and 0.77, respectively, for predicting delirium (Green et al. 2019). PRE-DELIRIC calculates risk based on age, APACHE-II score, coma status, admission type (surgical/medical/trauma/neurology/neurosurgical), infection, metabolic acidosis, degree of opioid consumption, sedative use, serum urea, and urgent admission (Linkaite et al. 2018). The Lanzhou model incorporates age, APACHE-II score, coma, emergency operation, mechanical ventilation, multiple trauma, metabolic acidosis, history of hypertension, delirium and dementia, and use of dexmedetomidine (Chen et al. 2017).

However, these tools do not provide significant mechanistic insight into delirium. Various investigators have studied biomarkers in delirium to help guide risk assessment, diagnosis, monitoring, and treatment. An exhaustive review by Toft et al. lists twenty biomarkers that are associated with or can detect delirium, including IL-6, cortisol, prolactin, amyloid, neopterin, metalloproteinase-9 (MMP-9), neutrophil-lymphocyte ratio (NLR), phenylalanine-tyrosine ratio, thioredoxin, serpin family A member 3 (SERPINA3), and 8-iso-prostaglandin F2-alpha, among many others. Despite these associations, the authors concluded that these biomarkers are not useful in current clinical practice, though they propose that the commonly utilized inflammatory and metabolic biomarkers could be evaluated for screening and diagnosis of hypoactive delirium, which can be more difficult to recognize (Toft et al. 2019).

Next Steps for Research

As we identify biomarkers and gene targets involved in critical illness, we will next need to determine which markers are clinically useful—both alone and in combination with other markers—to help with risk stratification, aid in diagnosis, and inform prognostication. Once that is known, high-quality tests that result rapidly and can therefore be used in a clinical setting will need to be developed.

This work requires obtaining, manipulating, and interpreting vast amounts of information on a scale and pace that traditional methods of inquiry and experimentation may not be equipped to handle. Accordingly, study and trial design may well need to innovate in both approach and structure to make this science happen. This era of research will necessarily be multidisciplinary and will require significant collaboration across the preclinical, clinical, and implementation science realms.

In the preclinical space, large cohort studies will be required to find targets for further investigation and testing; this will necessitate large patient samples as well as substantial processing power. Innovation in the clinical trial space may need to be more substantial. Trials in a critically ill population are difficult at baseline, as the population is rarely homogeneous and tends to have many comorbid conditions, making individual interventions difficult to isolate or randomize. Traditionally structured clinical trials are addressing this challenge by recruiting enriched trial populations, which will ideally compensate for decreased statistical power with increased treatment effect size. One study that used this approach was the MONARCS trial, which examined the TNF-alpha monoclonal antibody fragment afelimomab in septic shock patients. The investigators specifically targeted patients with high baseline IL-6 levels, whom they postulated would benefit most from the intervention, and they were able to show an outcome difference within this subset, although the effect size was not significant in the overall study population (Panacek et al. 2004).

We will discuss two novel trial designs that may help address some of these issues: adaptive platform trials and registry-based randomized controlled trials. Adaptive platform trials use study protocols that allow the simultaneous evaluation of multiple treatments within a study population; they harness Bayesian analysis to preferentially “randomize” patients to treatments with a higher likelihood of effect, thus automatically enriching the treatment group (a minimal sketch of such response-adaptive randomization follows). The “platform” denotes the structural framework that allows this type of study; it includes a single control arm against which all other treatment arms are compared simultaneously. These trials also allow progression between traditional trial phases based on preset rules within the study design, meaning there need not be arbitrary stretches of time between phase 2 and phase 3 study of a given target (Berry et al. 2015). The I-SPY2 trial is a prime example of this design and has allowed the rapid identification and advancement of targeted biologic agents for breast cancer treatment. Since its inception in 2010, the trial has investigated twelve therapies in eight biomarker-defined subtypes, and it was able to send neratinib (a tyrosine kinase inhibitor) to phase 3 evaluation with fewer than 200 patients enrolled thanks to the adaptive trial design (Park et al. 2016).
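To show how Bayesian response-adaptive randomization can enrich the better-performing arms, the sketch below implements simple Thompson sampling with Beta posteriors over simulated response rates. The arms, response rates, and allocation rule are invented and far simpler than a real platform trial protocol.

```python
# Minimal sketch of Bayesian response-adaptive randomization: Thompson sampling with
# Beta posteriors over each arm's response rate, so arms that look better are
# allocated more patients over time. Purely illustrative.
import random

arms = {"control": [1, 1], "drug_A": [1, 1], "drug_B": [1, 1]}   # Beta(alpha, beta) counts
true_response = {"control": 0.30, "drug_A": 0.45, "drug_B": 0.28}  # unknown in practice

random.seed(0)
allocations = {arm: 0 for arm in arms}
for patient in range(500):
    # Sample a plausible response rate for each arm from its posterior, pick the best.
    sampled = {arm: random.betavariate(a, b) for arm, (a, b) in arms.items()}
    chosen = max(sampled, key=sampled.get)
    allocations[chosen] += 1
    # Observe the (simulated) outcome and update that arm's posterior counts.
    responded = random.random() < true_response[chosen]
    arms[chosen][0 if responded else 1] += 1

print(allocations)  # most patients drift toward the better-performing arm
```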

Another subgroup of platform trials, which utilizes electronic health records to evaluate existing therapies, is known as REMAP (randomized, embedded, multifactorial, adaptive, platform) studies (Angus 2015). These studies leverage electronic health records to screen for potentially eligible patients and then randomize them to candidate therapies; they can also harness adaptive trial design and therefore enrich study populations based on what the trial learns. This strategy is currently being used to investigate treatment approaches for severe pneumonia (REMAP-CAP), with funding support from the European Union Platform for European Preparedness Against Re-emerging Epidemics (PREPARE network) as well as the governments of Australia and New Zealand. These collaborative networks are also investigating antibiotics, ventilator strategies, and immunomodulation across over 100 ICUs in Europe (Seymour et al. 2017).

Another approach that harnesses existing data is the registry-based randomized controlled trial, which analyzes data collected for other purposes to answer novel questions (Lauer and D’Agostino 2013). In many parts of the world, large-scale data are routinely collected. For instance, nearly all of the 90 hospitals with ICUs in the Netherlands send data to a central database known as the Dutch National Intensive Care Evaluation (NICE) registry; consequently, patient-level information on over half a million ICU admissions can be accessed and analyzed (van de Klundert et al. 2015). The Australian and New Zealand Intensive Care Society Adult Patient Database similarly collects data from over two million admissions across 90% of the ICUs in both countries (Stow et al. 2006). Such efforts exist in nearly twenty countries at present, and the International Forum for Acute Care Trialists (InFACT) reflects the possibility of collaborative registries in the future (InFACT Global H1N1 Collaboration 2010). From this large-scale data, electronic surveillance systems (also known as ESSs or “sniffers”) can be implemented to identify appropriate patients for trials in real time. This has been actualized in the METRIC Data Mart at the Mayo Clinic, a system that automatically receives electronic health record data and has been used to screen patients with AKI (Kashani and Herasevich 2013) and sepsis (Herasevich et al. 2011) and to notify trial personnel to approach them for study enrollment in real time.

These processes are ambitious, and for them to be effective in the clinical space, implementation science must keep pace as well. Perhaps by leveraging existing information frameworks (such as electronic health records), systems that can identify appropriate patients, prompt clinicians to consider targeted therapies, and measure relevant markers of progress and outcome in real time can be developed and then become part of routine clinical practice (Seymour et al. 2017).

Challenges for Implementation

The future is bright for precision medicine in critical care, but the implementation challenges are real and must be thoroughly understood so that they can be addressed appropriately. First and foremost, translating the findings from basic science research into usable tools that can be implemented with fidelity in critically ill patients with a host of comorbidities is a challenge at present and is likely to continue to pose difficulties. Once these tools exist, they also need to produce usable results on a short time frame that makes sense in an ICU environment. While the technology associated with sequencing and biomarker identification will get cheaper over time, the development of new diagnostic tests and therapeutics—which may need to be marketed to smaller subsets of patients—could confer additional financial burden.

These efforts will require collaboration across the globe to standardize measurements and reporting so that shared data can be used meaningfully (Dzau and Ginsburg 2016). Data storage and security policies will also be necessary to manage this information, in addition to the costs associated with storing, securing, and processing data. As discussed earlier in the chapter, analysis will likely require leveraging artificial intelligence and neural networks, given that we can ask increasingly complex questions, and the computational power required to mine this information will be immense. Furthermore, should we begin to harness the information being generated in ICUs with computerized monitoring and record systems, these requirements will only grow. Electronic health record data, however, pose additional challenges. Currently, regulatory barriers to utilizing what is generated and recorded in the medical record are high, and the quality of information can be variable, as much of it still relies on human input. In addition, the various electronic health records nationwide and worldwide do not have standardized ways of representing, storing, or searching for information (Maslove et al. 2016).

What Does the ICU of the Future Look Like?

Personalized medicine in a critical care setting is rife with challenges, but a more targeted approach to ICU patients is certainly possible, and it is likely what is best for many of our patients. We envision a world in which well-developed biomarker tools can be utilized upon patient presentation to aid in diagnosis, risk stratification, and initiation of appropriate care. Upon determining the clinical entity that a patient is facing, targeted tools to monitor response to therapy as well as their organ function (e.g. kidneys, cognitive system) can allow clinicians to adjust course where needed. Decision support tools will be integrated into electronic health records to help clinicians keep up-to-date with the latest data and recommendations. These individualized interventions will likely work hand in hand with checklists and care bundles, which will have their role in ensuring that overall quality of care remains at a high standard. All the while, robust data systems will be seamlessly integrated into clinical care, such that researchers in collaborative groups are able to learn in real time from every patient we see, so that we can continuously iterate the care we provide.