I am an expert, and I need help. After 10 years of training and 25 years of practice, seeing thousands of patients, attending and presenting scientific and educational lectures, and conducting research on biological mechanisms and clinical trials of treatments for a range of neurocritical care conditions, I am probably doing about as much as I can do, and I am close to being as good as I am going to be. There are perhaps 2,000 of us neurocritical care experts scattered across the globe, albeit concentrated in large urban areas with universities and large hospital systems. Contrast this with the estimated 15 million cases of stroke, 27 million cases of traumatic brain injury, and 1.1 million cases of status epilepticus annually worldwide; although not all of these patients require neurocritical care, you can see the problem. I also have a problem in the way I care for patients. I aspire to treat proactively, anticipating the problems that my neurocritically ill patients may develop and considering diagnostic tests and interventions that will prevent worsening. But inevitably, I treat reactively, waiting until the neurological examination has worsened to proceed with surgery, allowing the intracranial pressure to become elevated before treating brain edema, or picking a group-based threshold for a physiological parameter, such as blood pressure or partial pressure of arterial carbon dioxide, and hoping that it is right for my specific patient. I have a suspicion that the data are already there, but I am not making full use of all the information. AI usually stands for “artificial intelligence,” but in this context I am changing it to “augmented intelligence,” because the real purpose of big data analytics and AI in our setting is not to replace medical providers but to recapitulate and enhance expertise.

This is not the first time this has happened. I remember being a resident walking into the angiography suite at our county hospital, decorated long ago, and admiring the remarkable giant posters and wallpaper that adorned every wall. From floor to ceiling, there were diagrams of arteries and veins showing how their displacement gave clues to the location of a brain tumor or a traumatic hemorrhage. It was months before a very senior neuroradiologist explained to me that this was how these lesions were diagnosed until the early 1970s, requiring a special angiodiagnostic skill set, complemented by pneumoencephalography expertise, that only a very few possessed. Assuming that was as good as it gets, many were satisfied with the status quo. And then came computed tomography (CT). In his 1979 Nobel Prize lecture describing his development of the CT scanner, Godfrey Hounsfield stated, “when I investigated the advantages over conventional X-ray techniques however, it became apparent that the conventional methods were not making full use of all the information the X-rays could give” [1]. Neuroradiologists did not go out of style but rather dramatically enhanced their expertise; we can now show understandable images to our patients and families, even those without deep medical knowledge; new ways to apply machine learning to neuroimaging are being actively developed [2]; and the wallpaper has changed.

There is absolutely no doubt that in the last three decades we have made tremendous advances in the organization and delivery of neurocritical care and inroads into identifying and testing interventions to help our patients. Guidelines for the management of numerous neurocritical care conditions exist, and the most rigorous of these critically evaluate the existing peer-reviewed published medical evidence and provide recommendations and levels of evidence based on this evaluation, often withholding recommendations when the medical evidence does not support them. The highest, but not the only, level of evidence comes from randomized clinical trials, and therein lies the problem. Absent a clinical trial for a specific question, some guidelines refrain from making a treatment recommendation, and most refrain from making one at the highest level. Clinical trials that test “one size fits all” approaches may trade benefit (or harm) in individual patients for the goal of testing generalizability in a large heterogeneous population of patients with a common overall condition, such as severe traumatic brain injury or spontaneous intracerebral hemorrhage. I admit to finding the idolatry of clinical trials peculiar when they really are just a scientific tool, like a polymerase chain reaction machine or a mass spectrometer, to help us understand the biology of disease. We now find ourselves in a situation where expertise is even more needed and precious in interpreting how guidelines and clinical trial results apply to the patient in front of us. This is not surprising. David Sackett, considered by many the founder of evidence-based medicine (EBM), stated that EBM “means integrating individual clinical expertise with the best available external clinical evidence from systematic research” [3]. Expertise is a necessity in the practice of EBM. Yet systematic research has largely focused on generating external clinical evidence while leaving training and experience to generate clinical expertise.

How do I practice now? What I do not do is step away from the bedside and simply provide a set of patient care orders for others (usually nurses and respiratory therapists) to follow like a cookbook recipe. Rather, I generally start from a standardized order set derived as much as possible from existing evidence-based guidelines (and from consensus when none exist) and then reassess, tweak, test, and adjust. A computer science colleague from Berkeley rounded with us for several days in the neurocritical care unit and concluded that we practice dynamic Bayesian network state transition theory in managing our patients [4]. I responded that I did not think so. I told him that what I actually do is the following: know what my patient looked like yesterday, know how similar patients have fared previously based on my experience, know the medical literature, assess my patient clinically on rounds, look at large amounts of physiological, neuroimaging, and text-based data from numerous sources (such as monitors, scans, and notes), decide what to keep and what to ignore, decide how likely it is that my patient is going to get sicker over the coming hours to days, and implement new diagnostic tests or interventions in an attempt to avert that deterioration and limit ongoing injury. Exactly. We already practice big data analytics; it is just that we are aggregating, integrating, and analyzing the data in the heads of clinicians with varying degrees of expertise. This is the problem and the reason we need guidelines and have performed neurocritical care clinical trials in the way we have so far. We need guardrails because we do not know how to individualize therapy rationally and successfully to consistently improve the outcome of the patient in front of us, as opposed to a group of patients with seemingly similar general characteristics. In other words, guidelines and “generalizable” clinical trials lead us to (presumably) maximize positive outcomes in a group of patients but not necessarily for our specific patient.
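My colleague’s framing can be made concrete. What follows is a minimal, purely illustrative Python sketch of the predict-then-update cycle that a dynamic Bayesian network formalizes: carry yesterday’s impression of the patient forward through an assumed state-transition model, then reweight it by how well today’s data fit each state. Every probability, name, and number here is a hypothetical placeholder of my own, not clinical guidance and not the model my colleague had in mind.

    # Purely illustrative sketch: a two-state belief update ("stable" vs.
    # "deteriorating") of the kind a dynamic Bayesian network formalizes.
    # All names and probabilities are hypothetical, not clinical guidance.

    def update_belief(prior_deteriorating: float,
                      transition_to_deteriorating: float,
                      likelihood_if_deteriorating: float,
                      likelihood_if_stable: float) -> float:
        """One predict-then-correct step for P(patient is deteriorating)."""
        # Predict: carry yesterday's belief through the state-transition
        # model (assuming, for simplicity, deterioration persists once begun).
        predicted = (prior_deteriorating
                     + (1.0 - prior_deteriorating) * transition_to_deteriorating)
        # Correct: reweight by how well today's observations (exam, monitors,
        # imaging, notes) fit each state, then renormalize (Bayes' rule).
        evidence = (likelihood_if_deteriorating * predicted
                    + likelihood_if_stable * (1.0 - predicted))
        return likelihood_if_deteriorating * predicted / evidence

    # Yesterday I judged a 10% chance of deterioration; today's data are
    # twice as likely under "deteriorating" as under "stable".
    belief = update_belief(prior_deteriorating=0.10,
                           transition_to_deteriorating=0.05,
                           likelihood_if_deteriorating=0.6,
                           likelihood_if_stable=0.3)
    print(f"Updated probability of deterioration: {belief:.2f}")  # ~0.25

The point is not the arithmetic but the structure: this is, in caricature, the reasoning loop I described on rounds, made explicit and therefore exportable.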

Moving to a precision medicine approach for individual patients will require a change in technology and a change in culture [5]. The idea that the “dose” of a physiological event, its depth and duration, such as elevated intracranial pressure or low blood pressure, bears a dose–response relationship to outcome is reasonably well accepted, but this dose is not reported in most current, purportedly advanced, standard electronic medical records. Assessing it, along with its impact on patient outcome, will require technological integration that is probably available today but not yet implemented [6]. Predictive modeling that integrates large amounts of data to provide a “forecast” of the potential for future events, such as elevated intracranial pressure, hematoma expansion, or clinical neurological deterioration, is, of course, even more complex. In addition, our inherent desire for accuracy in diagnosis may work against us. Every day, we rely on the weather forecast and accept uncertainty, especially when predicting events that are more temporally remote. Shifting our culture and expectations for acute medical events to allow predictions that forecast probabilities rather than promise perfection will be necessary.
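To make the dose idea concrete, here is a minimal sketch, again with hypothetical numbers, of how a pressure-time burden might be computed from intermittently charted intracranial pressure values; a real system would work from continuous waveform data, and the 20 mm Hg threshold is simply an illustrative convention, not a recommendation.

    # Purely illustrative sketch: a "dose" of intracranial hypertension as
    # the area of the pressure curve above a threshold. The readings,
    # interval, and 20 mmHg threshold are hypothetical, not recommendations.

    def pressure_time_dose(icp_mmHg, interval_min, threshold=20.0):
        """Cumulative (ICP - threshold) * time above threshold, in mmHg*min."""
        return sum((p - threshold) * interval_min
                   for p in icp_mmHg
                   if p > threshold)

    # Six hourly intracranial pressure readings (mmHg).
    readings = [14, 18, 23, 27, 22, 16]
    dose = pressure_time_dose(readings, interval_min=60)
    print(f"Intracranial hypertension dose: {dose:.0f} mmHg*min")  # 720

A single number like this, trended over time and linked to outcomes, is exactly what most electronic medical records do not surface.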

Barriers exist. One conundrum is that we cannot even test whether these advanced analytics are useful without implementing advanced data systems to capture, align, integrate, and analyze the information we wish to study. Incentives will be necessary, and although the most ethically sound and altruistic incentive is improving patient care, someone is probably going to have to either make or save money for neurocritical care big data analytics and AI to move forward in a meaningful way. I am an expert, but I am starting to get old, and maybe a bit tired. It is time that we take lessons from the world around us regarding smartphones, wayfinding, and weather forecasting, as well as lessons from past medical successes, such as the CT scan, and take the plunge into big data and AI. This is likely the only viable way to truly harness and export expertise to the extent necessary to treat the volume of patients worldwide with neurocritical care conditions deserving of our attention.