
Shortcomings and Challenges in Drug Discovery

The pharmaceutical industry is currently facing unparalleled challenges in developing innovative new drugs. Although the yearly number of new drugs has remained substantially unchanged, research and development (R&D) investment per drug is escalating at a marked rate. The estimated cost of developing a new drug is approximately $1 billion [1]. This phenomenon, an increase in R&D investment without a corresponding increase in the number of new drug approvals, is known as the “innovation gap” [2]. After the Thalidomide [3] and Vioxx [4] incidents, regulatory bodies throughout the world have been demanding more safety data, which in turn increases development costs. However, this is only the tip of the iceberg of an even worse situation.

Response to Health Problems

While impressive results – chiefly in terms of reduced mortality – have been witnessed in the last six decades for cardiovascular, cerebrovascular and infectious diseases, no proportional benefits have been recorded in cancer cure rates (Fig. 1) [5,6,7]. Moreover, the burden of drug-insensitive infectious diseases is increasing [8], whereas diabetes, metabolic and degenerative diseases, and autoimmune and allergic pathologies are reaching epidemic proportions [9, 10]. Overall, these data strongly suggest that many health problems, despite the astonishing progress achieved in several fields, are still waiting for a satisfactory answer [11]. Focusing on cancer, for instance, it is nowadays widely believed that “the $105 billion spent on cancer research since President Richard Nixon declared the war on cancer in 1971 has been poorly spent. At the same time, considering that age standardized death rates from adult cancers in America today are only very modestly lower now than they were in the early 1970s, this is no great success story” [12]. A telling and threatening example has been provided in recent years by antibiotics [13]. Indeed, the flow of new classes of antibiotics has substantially declined at a time when resistance rates [14] and new problems have increased significantly [13].

Fig. 1 Mortality rates (years 1950–2008) for infectious, heart and cerebrovascular diseases (From American Cancer Society (ACS) 2010 Cancer Facts & Figures; Atlanta, USA, 2014, modified)

The aforementioned considerations should be correlated with the increasingly negative attitude of a growing number of people who perceive modern medicine as a substantial failure. Indeed, since the 1990s the use of natural/alternative drugs has steadily increased in Western countries, while American patients have paid more visits to alternative health practitioners (425 million) than to primary-care physicians (388 million visits) [15]. In the US, conventional medicine has lost over half of the market share for primary health services to so-called snake-oil vendors. The report revealed that patients were visiting alternative doctors for ten top health problems: back pain, anxiety, headache, sprains or strains, insomnia, depression, arthritis, digestive problems, high blood pressure and allergies [16]. Regardless of any other consideration, we should ask ourselves why, in Western countries, so many people rely on (frequently unproven) “alternative” medical supports [17]. Moreover, it is a matter of concern that patients resorting to non-conventional medicaments have a higher level of education than those who do not use them [15]. Thereby, we are entitled to ask, “if the scientific message that alternative therapies don’t work is so ‘loud and clear,’ why do so many people, physicians included, use them?” [18].

Industry Productivity

Contrary to expectations, pharmaceutical R&D productivity has declined, a phenomenon noticed since the early eighties [19]. Despite the industry’s increased investment in R&D, the number of new molecular entities (NMEs) achieving marketing authorization is not increasing. Over the past 20 years, the number of investigational new drugs approved by regulatory agencies has not grown as predicted, nor have quality control, safety assessment, or the identification of new molecular targets improved. Therefore, high investment and the development of new technologies and conceptual approaches – like the “-omics” methods, including transcriptomics, proteomics and genomics – have neither reduced R&D risk nor enhanced efficiency [20].

By now it is widely recognized that Big Pharma’s challenges include: (1) R&D spending growing faster than sales, (2) drug discovery lagging relative to the industry’s growth needs, (3) an increased presence of large molecules in big pharma’s pipelines, (4) an increasing need for in-licensing products and technologies, and (5) blockbuster drugs going off patent (approximately 40% of patents owned by the top 20 pharmaceutical companies were set to expire during 2009–2013). Efficacy and safety issues are actually the main causes of failure at phase III, given that two out of three newly proposed drugs are currently discarded because of their side effects [17]. Moreover, sixty-six of the 100 largest companies studied by Herper [21] will launch only one drug this decade. The costs absorbed by these companies can be taken as a rough estimate of what it takes to develop a single drug. The median cost per drug for these singletons was $350 million, but for companies with more drugs approved, the cost per drug went up – until it reached $5.5 billion for companies that brought between eight and 13 drugs to market over a decade [21].

As a result, since the main driver of its growth is innovation, biomedical R&D is becoming increasingly challenged by lower productivity, and pharmaceutical companies have therefore opened their R&D organizations to external innovation [22]. In this respect, it is noticeable that the dynamics of NME release onto the market diverge markedly between large and medium-sized companies, with high efficiency reached almost exclusively by small and medium-sized companies. The decline in the output of large companies has been mostly driven by the diminishing number of large pharmaceutical companies, which has decreased by 50% over the past 20 years [20].

The Reason Behind the Failure

Three drug-discovery fads have driven the industry’s R&D programs in the past 20 years: computer-aided drug design, combinatorial chemistry linked to high-throughput screening, and genomics. As a result, current trends in biomedical research have led to a decreased emphasis on the physiology-driven approach, supporting “a trend toward qualitative rather than quantitative science, with the implicit assumption that all targets represent a viable starting point for drug discovery efforts” [23]. There is little doubt that this approach will tend to focus on non-essential targets, thus producing more failures through lack of efficacy. Furthermore, analyses of structure–activity relationship pattern evolution, drug–target network topology and literature mining studies all indicate that more than 80% of new drugs rely on previously discovered targets and strive to affect – perhaps in a different or subtler way – the same network [24]. Sadly, “there is no evidence that any of these is or will be capable of replacing the old techniques” [25]. Consequently, the most fruitful basis for the discovery of a new drug – still today – is to start with an old drug [26], while a renewed interest in natural compounds and complex herbal mixtures has been noticed in the last 20 years [27, 28].

The fundamental problem may not be technological, environmental or even scientific but rather “philosophical”. There are pivotal issues with the core assumptions that frame our approach to drug discovery. In fact, the increase in the rate of drugs failing in late-stage clinical development has been concurrent with the dominance of the assumption that the goal of drug discovery is to design exquisitely selective ligands that act on a single target. This philosophy – the ‘one gene, one drug, one disease’ paradigm – arose from the congruence between genetic reductionism and new molecular biology technologies that enabled the isolation and characterization of individual ‘disease-causing’ genes. “Genetic determinism and reductionism emerge as significant research traps and a chasm-like separation might arise between molecular medicine and the sick patient. Furthermore, the newly added ‘translational research’ and ‘functional genomics’ cannot remedy this dichotomy” [29]. In fact, we should look at genes from a very different perspective, i.e. by embracing the global dynamics of networks, which reveal the higher-order, collective behavior of interacting genes [30].

These basic premises have shaped the way in which pharmacological strategies have been designed and developed in very recent times.

However, contrary to expectations, the pharmacological treatments established in the last 40 years have not achieved the expected success [31]. This state of affairs stands in stark contrast to prior decades of achievement, in which a classical physiological approach led to the identification of effective treatments for several ‘simple’ diseases – like infections – for which at least a well-recognizable, dominant causative factor had previously been identified. Complex diseases – cancer, degenerative illnesses – have proven much less tractable, and the combined interplay between epidemiological associations and gene-based investigations has led to contradictory outcomes. Findings have proven hard to replicate, when not contradicted by new studies, yielding risk estimates that are highly variable or inconsistent, or that converge on little or no effect upon meta-analysis, and so forth [32].

Basic Paradigms in Pharmacodynamics

It is widely accepted that, fundamentally, drugs act by (a) mimicking or inhibiting normal biochemical processes, or inhibiting pathological processes in animals, or (b) inhibiting vital processes of microbial organisms. These effects are usually believed to be mediated by specific chemical interactions (i.e., covalent interactions) involving the pharmacological molecule and a “receptive” biological structure, to which we generally refer as a “receptor” or, broadly speaking, a “target” molecule [33]. This theoretical framework can be traced back to the seminal work of Paul Ehrlich, who demonstrated that the selective and active uptake of pharmacological compounds must depend on the chemical binding of drugs to intracellular target molecules [34]. Currently, we recognize four main types of target molecule for drug binding: receptors, such as those classically involved in the transduction of endocrine effects (including both membrane and intracellular receptors); enzymes; ion channels (on the membranes of both cells and organelles); and transport proteins. The interaction between the drug (the “ligand”) and the target triggers an intertwined series of events, which eventually leads to the (desired) biological effect. Depending on the characteristics of the transduction machinery, a different time lag (spanning from milliseconds to hours) is required for activating the complex cascade of molecular events supporting the biological response (enzyme inhibition/activation, opening of ion channels, release of intracellular messengers, modification of gene expression, among others). Ligands fall into two main classes: agonists and antagonists. The former bind to the target molecule and promote its activity through conformational changes, while an antagonist occupies the binding site of the receptor without producing any conformational modification, thus preventing the activation of the target.

Hence, pharmacodynamics is usually considered a matter of ligand/receptor interactions following the equation, L + R ⇌ LR, basically dependent on the law of mass action at equilibrium.
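As a concrete illustration of this classical picture, the Python sketch below (an assumption-laden toy example, not taken from the cited literature) computes the fractional receptor occupancy predicted by the law of mass action (the Hill–Langmuir relationship), together with a hypothetical sigmoidal Emax response, so that the conventional parameters of affinity (Kd), potency (EC50) and efficacy (Emax) discussed later in this chapter can be read directly off the model; all numerical values are arbitrary.

```python
# Minimal sketch (not from the source): fractional receptor occupancy from the
# law of mass action at equilibrium, L + R <=> LR, plus a hypothetical sigmoidal
# Emax response to illustrate affinity (Kd), potency (EC50) and efficacy (Emax).
import numpy as np

def occupancy(ligand_conc, kd):
    """Hill-Langmuir occupancy: [LR]/[Rtot] = [L] / ([L] + Kd)."""
    return ligand_conc / (ligand_conc + kd)

def response(ligand_conc, emax=1.0, ec50=1e-8, hill=1.0):
    """Sigmoidal Emax model; parameters are illustrative, not measured values."""
    return emax * ligand_conc**hill / (ec50**hill + ligand_conc**hill)

if __name__ == "__main__":
    concentrations = np.logspace(-11, -5, 7)   # mol/L, arbitrary range
    kd = 1e-8                                  # assumed dissociation constant
    for c in concentrations:
        print(f"[L] = {c:.1e} M  occupancy = {occupancy(c, kd):.2f}  "
              f"response = {response(c):.2f}")
```

In this idealized setting occupancy and response are smooth, monotonic functions of concentration; the paragraphs that follow examine why real systems depart from this picture.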

This picture is, however, a very simplistic one and fails to explain a number of well-established findings. Indeed, although the ligand/receptor model of pharmacodynamics introduced by Fischer [35] has been experimentally vindicated and extensively studied by mathematical modelling [36], several issues need to be addressed in order to account for some controversial results [37].

First, when the drug–receptor interaction involves feedback, the system becomes more complex, displaying emergent properties, non-linearity and even chaotic behavior. These features depend on many factors, including drug bioavailability and environmental constraints, thus inextricably linking pharmacodynamics with pharmacokinetics [38].
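A minimal sketch of this point is given below: a turnover (indirect response) model in which the drug inhibits the production of a response variable while a slow feedback moderator counteracts the change. All parameter values and the pulse timing are assumptions chosen only for illustration; the point is merely that adding a single feedback loop already produces time-dependent, non-linear behavior (tolerance and post-treatment rebound) that the equilibrium relation L + R ⇌ LR cannot capture.

```python
# Minimal sketch (illustrative, not from the source): a turnover model with a
# negative-feedback moderator M. Drug inhibits production of the response R;
# M slowly tracks R and counteracts the change, producing tolerance during
# treatment and a rebound above baseline after the drug is withdrawn.
def simulate(drug_effect=0.8, k_in=1.0, k_out=1.0, k_mod=0.1,
             t_end=100.0, dt=0.01, t_on=20.0, t_off=60.0):
    n = int(t_end / dt)
    R, M = 1.0, 1.0                      # baseline response and moderator
    history = []
    for i in range(n):
        t = i * dt
        inhibition = drug_effect if t_on <= t < t_off else 0.0
        dR = k_in * (1.0 - inhibition) / M - k_out * R   # feedback enters via M
        dM = k_mod * (R - M)                             # moderator tracks R slowly
        R += dR * dt
        M += dM * dt
        history.append((t, R))
    return history

if __name__ == "__main__":
    for t, R in simulate()[::1000]:      # print roughly every 10 time units
        print(f"t = {t:6.1f}  R = {R:.3f}")
```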

Second, drug dissolution, transport and uptake are heterogeneous processes, since they take place at the interfaces of different phases, i.e., liquid–solid and liquid–membrane boundaries, where diffusion is regulated by physical and topological constraints [39]. Overall, these factors belong to the biological (morphogenetic) field and contribute to the fractal-like structure of the medium in which drug activity occurs. This aspect is usually overlooked, notwithstanding that it can significantly influence drug efficacy by modulating the kinetics of the reaction. Indeed, kinetic orders in some cases reflect the fractal dimension of the physical surface on which the reaction occurs, and any factor that modifies the fractal topology of the medium can eventually inhibit or, alternatively, foster the pharmacological result [40].
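The sketch below (an illustration under stated assumptions, not a model drawn from ref. [40]) contrasts classical first-order decay with fractal-like kinetics of the Kopelman type, in which the effective rate coefficient decays with time as k(t) = k0·t⁻ʰ; the exponent h is an assumed, purely illustrative measure of how strongly a heterogeneous, fractal-like medium constrains the reaction (h = 0 recovers classical kinetics).

```python
# Minimal sketch (assumed parameters): fractal-like ("Kopelman") kinetics,
# where the effective rate coefficient decays with time, k(t) = k0 * t**(-h),
# 0 <= h < 1, reflecting reactions confined to heterogeneous, fractal-like
# media (membranes, crowded cytoplasm). h = 0 gives classical first-order decay.
import numpy as np

def concentration_decay(c0=1.0, k0=0.5, h=0.4, t_end=50.0, dt=0.01):
    """Integrate dC/dt = -k(t) * C with a time-dependent rate coefficient."""
    ts = np.arange(dt, t_end, dt)          # start at dt to avoid the t = 0 singularity
    c = c0
    out = []
    for t in ts:
        k_t = k0 * t**(-h)
        c += -k_t * c * dt
        out.append((t, c))
    return out

if __name__ == "__main__":
    classical = concentration_decay(h=0.0)
    fractal = concentration_decay(h=0.4)
    for (t, c_cl), (_, c_fr) in zip(classical[::1000], fractal[::1000]):
        print(f"t = {t:5.1f}  classical C = {c_cl:.3f}  fractal-like C = {c_fr:.3f}")
```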

Third, the active site of a protein receptor is a rather flexible structure, and it is continuously “reshaped” when it interacts with substrates or drugs. Since the sixties it has therefore been proposed that the reaction between a receptor and its ligand could involve an “induced fit” mechanism [41], in which the active site undergoes “conformational changes”. Conformational selection postulates that all the potential conformations of a given protein preexist and that, once the ligand selects the most favored conformation, induced fit occurs and the conformational change takes place [42, 43]. Overall, these results support the hypothesis that the water molecules filling the active site of a protein, and surrounding the ligand, are as important as the contact interactions between the protein and the ligand for biomolecular recognition and for determining the thermodynamics of binding. Consequently, any solute able to modify the solvation around the receptor could modify – in principle – the kinetics of the drug/receptor complex in a very unpredictable way. It is noteworthy that solutes and low-molecular-weight compounds that can modify the configuration of water around enzymes and their substrates can also significantly influence the mechanical lock-and-key picture [44]. Indeed, in some binding processes occurring in aqueous solutions, the involvement of hydrophilic effects, as well as the biophysical constraints provided by the specific state of the solution [37], might be so profound that the lock-and-key model becomes irrelevant, thus thoroughly modifying the way one approaches the problem of drug design [45]. Moreover, the displacement of free-energetically unfavorable water [46], or the presence of solute molecules, may substantially affect binding processes, given that the major part of the Gibbs energy of binding can be due to interactions mediated through the solvent molecules [47].
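To make the thermodynamic point concrete, the short sketch below applies the standard relation ΔG° = RT·ln(Kd) (1 M standard state, 298 K assumed) to a hypothetical 1 nM ligand and shows that a solvent-mediated contribution of only ±5 kJ/mol shifts Kd by nearly an order of magnitude; the numbers are illustrative, not measured values.

```python
# Minimal sketch (standard thermodynamics, illustrative values): how the Gibbs
# energy of binding maps onto the dissociation constant, and how a modest,
# solvent-mediated shift of a few kJ/mol changes Kd substantially.
import math

R = 8.314  # J/(mol*K)
T = 298.0  # K (assumed)

def delta_g_from_kd(kd_molar):
    """Binding free energy (J/mol) from Kd: dG = R*T*ln(Kd / 1 M)."""
    return R * T * math.log(kd_molar)

def kd_from_delta_g(dg_joule_per_mol):
    """Inverse relation: Kd (M) from the binding free energy."""
    return math.exp(dg_joule_per_mol / (R * T))

if __name__ == "__main__":
    kd = 1e-9                                   # assume a 1 nM ligand
    dg = delta_g_from_kd(kd)
    print(f"Kd = {kd:.1e} M  ->  dG = {dg/1000:.1f} kJ/mol")
    for shift_kj in (-5.0, +5.0):               # hypothetical solvent contribution
        new_kd = kd_from_delta_g(dg + shift_kj * 1000)
        print(f"dG shifted by {shift_kj:+.0f} kJ/mol  ->  Kd = {new_kd:.1e} M")
```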

Fourth, several compounds that display medical/biological effects do not act through conventional pharmacodynamics, i.e. by establishing covalent bonds with their putative targets. This means that these substances exert non-canonical chemical interactions (belonging to so-called supramolecular chemistry) [48], and a number of physically mediated effects, including enhanced solubilization/absorption of other active factors [49, 50], physical disruption/distortion of cell membranes [51, 52], osmolarity effects [53], physical modulation of microtubule aggregation [54], physical distortion of biological fibers [55], modification of protein surface binding [56], physical sequestering/chelating effects on calcium/hydroxyapatite ions [57], and changes in ionic strength, pH and surface tension [58].

Furthermore, it can be surmised that some active compounds can modify the biological field [59] by acting through subtle modulation of its physical strength, i.e. via quantum effects on enzyme dynamics and on protein structure. Quantum tunneling effects on enzyme activity [60] and quantum-dependent coherent remodeling of cytoskeletal proteins [61, 62] have been noticed, and there is some evidence suggesting that this happens in vitro following treatment with natural compounds [63]. Finally, a number of drug mechanisms are still unknown (or only barely known), notwithstanding that the drugs still “function”. It is noteworthy that this class of compounds includes relevant drugs like Methocarbamol, Paracetamol, Phenytoin, PRL-8-53, Metformin, Thalidomide, Acamprosate, Armodafinil, Cyclobenzaprine, Demeclocycline, Fabomotizole, Lithium, and Meprobamate [64]. Overall, the relevance of these still unexplored mechanisms is underestimated, and grossly misunderstood, especially when mixtures of active compounds (like herbal formulas) are considered.

Limits of the Current Reductionist Paradigm

Besides the relevant achievements of the last 50 years, the above-sketched model of pharmacological activity has not only left aside some non-negligible mechanisms of action, but has ultimately shaped the philosophical underpinnings in which current pharmacological research is rooted. In this way, the entire scientific inquiry, including its methodological models, became focused mainly on a few well-known mechanisms, chiefly confined to the molecular level of interaction, considered as the privileged level of causality, i.e. the place where the pharmacological activity starts.

Focusing on the drug–receptor interaction has consequently driven the search for efficacy parameters by mostly considering factors that can influence the dynamics of this association: affinity (i.e., the reciprocal of the dissociation constant of the drug/receptor complex), efficacy (the ability to induce a molecular response) and potency (the lowest concentration at which the ligand elicits a response). This approach has proven useful in explaining a relevant body of data – especially when dealing with hormone-receptor-mediated effects – but it underestimates non-classical pharmacodynamic effects and restricts the recognition of response parameters to those directly related to the ligand/receptor complex. Pleiotropic drug effects – i.e., those exerted on different molecular targets or at levels higher than the molecular one – generally go unnoticed. In other words, drug efficacy became a matter of “molecular effect”, thus disregarding the physiological effect(s) that for centuries had been the cornerstone for estimating the effectiveness of a cure. This approach has led to embarrassing outcomes, given that molecular effects are sometimes “translated” in an unpredictable way to the higher levels (cells/organs). Furthermore, assuming a reductionist stance in appreciating drug effectiveness has ultimately contributed to forgetting the true aim of every treatment: the well-being of the patients and the improvement in life expectancy we expect from the therapy. This conundrum has been specifically outlined in the clinical management of tumors, where the classical approach was focused almost exclusively on establishing the “objective tumor response rate” (ORR), i.e. the observable changes in tumor size/number.

This happened despite the fact that, since the eighties, it had already become apparent that a proper estimation of drug efficacy should take into consideration parameters belonging to the overall system, i.e. the patient. Indeed, the Food and Drug Administration (FDA) has recommended that cancer drug evaluation should be based on more direct evidence of clinical benefit, such as improvement in survival, improvement in a patient’s quality of life, improved physical functioning, or improved tumor-related symptoms. These benefits may not always be predicted by, or correlate with, ORR [65]. This example raises several key questions that deserve to be discussed in detail: what is the disease? How do we identify the target(s) for planning a proper treatment? And, finally, how do we set the endpoints of a treatment?

Disease as a Controversial Concept

Constructivism Versus Naturalism

Shortcomings of current therapies, especially in some well-defined areas of medical inquiry (degenerative and neoplastic diseases), call into question the concept of human disease on which curative attempts rely for establishing a treatment strategy. During the last two centuries, the debate on this subject has mostly polarized between the two alternative approaches represented by Constructivism and Naturalism [66]. The former, albeit hard to define, essentially denies the naturalist thesis that disease necessarily encompasses bodily malfunction. Indeed, the principal Constructivist claim is that “disease judgments appeal to biological processes that are to be understood in terms of human practices rather than membership in some biologically definable class of abnormalities or malfunctions” [67]. Ultimately, “malfunction is not a necessary condition for disease”, as even bodily malfunctions cannot be identified independently of human values and thus fall into the “normative” class of judgements [68]. Constructivists advocate that medical concepts (not only the definition of disease) should be “reformed” in order to reframe the meaning and the perception of ordinary events – including “disease” – by considering that our life is prominently shaped by our thoughts. This approach would finally end in delivering people from avoidable suffering, mostly ascribable to prejudices or cultural/societal misconceptions [69]. This framework was eagerly adopted in a few domains, such as psychiatry [70]. However, it is quite clear that it can hardly accommodate the daily experience of “true” organic diseases, like cancer. Instead, the Naturalistic approach [71] conceives the human body as constituted by several sub-systems (organs and apparatuses) that have natural functions from which they can depart in many ways, some of which are harmful and therefore deserve to be recognized as “diseases”. This definition implies that recognizing a disease requires identifying both an altered functioning of some of the organism’s apparatuses and the resulting harmful effect on health. This represents a step forward from the definition introduced in the seventeenth century by Sydenham [72], who identified the disease as a collection of symptoms (representing the “disease phenotype”) to be ascribed to a unique “pathogenic” cause. In modern times, the “pathogenic factor” has been chiefly ascribed to external agents (microbes, toxins, iatrogenic factors) or to gene networks [73], whose deregulation can consequently lead to the perturbation of many biochemical/metabolic pathways downstream.

Overall, the controversies between these two different approaches probably also reflect the lack of a compelling theory of the living organism [74]. As suggested by Lemoine [75], philosophical inquiry should focus on establishing a naturalistically based concept of disease, by recognizing the basic theoretical premises of medical science and then looking for perspicuous accounts of different disease types within such a framework. This approach allows considering diseases as putative natural kinds, while leaving open the possibility that some diagnoses represent contingent historical outcomes.

From the naturalistic perspective – by far the only one on which daily medical practice actually relies – a disease necessarily involves a biological malfunction, which may or may not be perceived as such through symptoms (some conditions, like hypertension, are equated with “disease” even if they neither produce symptoms nor are associated with perceived malfunctions).

However, in the last 30 years, models of human diseases have been mostly “reduced” to the “malfunctioning” of a few, critical pathways. Consequently, drug discovery has been dominated by reductionism aiming to identify drugs that activate or inhibit specific molecular targets.

A target is usually a single gene or molecular mechanism that has been identified on the basis of genetic analysis or biological observations. Genetic targets represent genes or gene products that are deregulated or carry mutations. Mechanistic targets constitute enzymes or biochemical pathways whose specific or broad malfunctioning has been causally correlated with the observed medical illness.

However, biology is complex, and it is increasingly clear that even discrete biological functions – or malfunctions in the case of disease – can rarely be attributed to an individual molecular factor like a protein or a gene. As a result, approaches based on such a simplistic reductionist paradigm often show either unforeseen toxicity or lack of efficacy when tested in clinical trials [76]. This breakdown stems, basically, from three (unsubstantiated) theoretical premises: (1) all diseases have a single (dominant) underlying cause; (2) disease features – signs and symptoms, i.e. the disease phenotype – are correlated with the causative factor in a linear fashion; (3) removal/correction of the underlying, putative “cause” will restore the healthy condition.

Unfortunately, evidence exists that all three assumptions are wrong [77]. Furthermore, the assumption that each illness is sustained by a specific “disease”, i.e. mechanistically induced by quantitative changes in the molecular/physiological phenotype of the living system, still has to be demonstrated beyond any doubt in several conditions (especially in psychiatric disorders). This unproven assumption has led to the “medicalization” of a wide range of conditions perceived as anomalous, even before any demonstrable disease process could be ascertained. Yet, the perception of the relevance of symptoms, recognized as such, i.e. as belonging to a “disease state”, is the result of a cognitive process, highly influenced by cultural conditioning and by societal/scientific models of illness [78]. Indeed, notions of health are highly context dependent, as human diseases only exist in relation to people, and people live in varied cultural contexts. Moreover, new clinical “entities”, barely identifiable according to current medical rules, are often welcomed primarily as opportunities for market growth, the lack of compelling evidence notwithstanding [79]. This is especially true when we are dealing with “preventive medicine”. Are presumptive markers of a “future” disease condition reliable enough to call for a “preventive cure”? Namely, is someone with a genetic predisposition to an illness already sick? While no effective guarantee exists that a genetic predisposition or a biochemical anomaly will unavoidably lead to an overt disease, it is instead unquestionable that being aware of the probability of the (future) occurrence of such a threatening disease may be traumatic enough to trigger major psychological distress. As a result, it is quite sad to notice that a number of new diseases have been ‘created’ simply to fit the ability to diagnose them and to open new avenues in the drugs market [80].

The above-sketched examples highlight that the model of disease, i.e. the conceptual meaning of such a widely used category, is only rarely explicitly debated or defined in the scientific literature [81], while still being the subject of extensive philosophical debate [82]. The model that dominated until the first half of the past century mostly originates from Virchow’s conclusion that all diseases result from cellular abnormalities [83]. Since the discovery of the double helix in the 1950s, however, this model was relentlessly superseded by an even more reductionist approach, as provided by the New Genetics. According to this theoretical approach, every disease can be traced back to the malfunctioning of a discrete number of genes and, at least in principle, every change in gene activity can lead to a pathological state [84]. At the same time, the clinical entity (the “disease phenotype”) was recognized by the association between a set of signs/symptoms and circulating/tissue markers, conceived to be “mechanistically” linked to the hypothesized pathogenic mechanism. Identification of disease according to these rules allowed clinicians to establish a disease taxonomy that has been instrumental in medical practice up to the present time. However, this approach has progressively shown shortcomings that reflect a lack of specificity (i.e., the inability to define disease unequivocally), as well as a lack of sensitivity (i.e., the incapacity to recognize the preclinical, truly causative state of disease). Ultimately, this model has proven confounding, as it often posits wrong correlations between the disease-associated biological parameters (usually identified only when the illness reaches a “stable state”) and the alleged causative processes, thereby prejudicing efficient treatment strategies.

These limitations can principally be ascribed to the reductionist approach on which medical research has grounded its agenda. Increased awareness of such inadequacies has prompted a revisiting of the concept of human disease during the last decades, striving to conjugate the advantages offered by the molecular/reductionist stance with the opportunities of a physiological/systems-based framework [85]. Such considerations namely apply to the target-based drug discovery framework that has largely replaced the traditional physiology-based approach since the completion of the Human Genome Project [86]. Conclusively, the main outcome of that program was philosophical, as it prompted the view that every disease can be singled out by identifying a small set of genetic/biochemical targets. The development of the so-called ‘omic’ technologies has led to a more sophisticated reconnaissance of those targets, shifting the focus to their complex (eventually non-linear) interactions [39], however without questioning the basic assumption on which the disease model has been built.

Simple Diseases Are Not Simple, Indeed

Reassessing the meaning and boundaries of human disease should be considered a main task, one that the development of ‘omics’ technologies during the post-genomic era has made even more complicated, rather than solved. In fact, the attempt to bring the definition of a disease back to its genetic determinants, along with the mechanistic pathways triggered downstream of gene modulation, has emptied the classical genetic approach of meaning, highlighting how even simple monogenic disorders are supported by a net of causative factors that are ultimately responsible for the disease phenotype. As an example, sickle cell anemia, a classic monogenic disorder due to a single point mutation, turned out to be a very complex condition characterized by several different clinical features [87]. Indeed, the pathogenesis of this classic Mendelian disease shows a bewildering intricacy, which ultimately ends up in different (almost six) disease phenotypes, each of which requires a different treatment strategy. The mandatory conclusion is that, even in monogenic disease, “the genotype simply cannot invariably predict the phenotype of patients with the disease”, as extensively discussed by Loscalzo et al. [88]. As a second example, let us consider Epstein–Barr virus (EBV)-related diseases. EBV is one of the recognized human herpesviruses, along with herpes simplex virus types 1 and 2, cytomegalovirus, and varicella-zoster virus [89]. Undoubtedly, B cells are the primary targets of EBV infection [90], and infection of B cells leads to the expression of a limited set of viral gene products, which drive the cells into proliferation. Usually, in healthy inhabitants of Western countries, proliferation of infected B cells is limited by CD8+ and CD4+ T cells, leading at most to the onset of infectious mononucleosis. On the contrary, in children living in malaria-endemic regions of the world (i.e., equatorial Africa, Brazil, and Papua New Guinea), EBV infection is more likely to induce Burkitt’s lymphoma, a tumor of the lymphoreticular system. For a while, explaining the ways in which a single agent can evoke such different responses in different hosts represented a challenging task [91]. It is now widely recognized that a critical factor that can switch the outcome from a mild, influenza-like syndrome to an aggressive lymphoma is the host response to EBV infection. Indeed, a wide array of immunocompromised conditions (endemic malaria infection or immune-deficiency syndromes like HIV infection) are recognized as triggering the shift from a transient lymphoproliferative reaction to a true malignant transformation [92]. It is startling that EBV infection is currently known as the main “causative” factor of a number of disparate diseases, including pharyngeal carcinomas [93], gastric cancer [94], leiomyosarcoma, undifferentiated type I nasopharyngeal cancer [95], as well as non-malignant illnesses, such as the childhood disorder of Alice in Wonderland Syndrome [96], systemic lupus erythematosus [97], and acute cerebellar ataxia [98]. These examples highlight how misleading the hypothesized link between a putative causal factor (the viral infection) and the associated disease can be, as the same “causal” factor can sustain very different pathogenic phenotypes. Instead, the lesson learned from EBV-related illness shows that the disease is the unpredictable outcome emerging from the complex interaction between “stressing” (rather than “causal”) conditions and host reactivity [99].

Conversely, as shown by the cases of familial pulmonary arterial hypertension [100] and hypertrophic cardiomyopathy [101], a common disease phenotype may be sustained by many different genotypes. Namely, hypertrophic cardiomyopathy is associated with mutations in several different genes that code for different sarcomeric (myosin heavy chain, myosin light chain, tropomyosin, and troponin C) and non-sarcomeric proteins [100]. In this case, a common pathophenotype is “produced” by very different “genetic” diseases. The aforementioned examples highlight that the molecular determinants – and especially the genetic “defects” – thought to be the cause of a disease simply cannot predict the phenotype of patients with the disease, which ultimately emerges as the result of a complex interplay among different factors. These factors span different levels – from the cell to the organism in its wholeness – and they cannot be “reduced” to the molecular tier alone. Therefore, characterizing disease by establishing a nosology substantially based on putative molecular determinants has shown significant shortcomings.

Mechanistic/Genetic Versus the Physiological Approach

The above-mentioned issues are clearly epitomized by the so-called target-based medicine that constitutes the prerequisite on which personalized medicine relies. However, despite astonishing claims, it has already been noticed that the decline in drug discovery coincided – to a large extent – with the introduction of target-based drug discovery [102].

In oncology, the possibility of finding so-called synthetic-lethal drug targets, which are only essential in cancer cells that carry mutations in specific tumor suppressor genes, is attractive in theory [103]. However, the search for such genes – if they exist, as normal tissues have been proven to carry the same mutated genes as their malignant counterparts [104] – might be frustrated by tumor heterogeneity [105], given that such heterogeneity, arising from a hierarchical pattern of stem-cell divisions, yields a mosaic of different cells and, ultimately, can hamper cancer treatment [106]. This is why seemingly identical cells respond differently to treatments: phenotypic and genotypic differences produce differentiated responses, activating even opposite outcomes in cell behavior and ultimately escaping the drug-induced inhibition of specific targets [107].

In addition, a genetic approach is unlikely to be a solution to common diseases in the near future, for three main reasons. The first is the great importance of environmental circumstances in determining health; the second is the great complexity of gene–gene and gene–environment interactions; and the third is the high individual variability [108]. Conclusively, despite having identified “hundreds of common variants whose allele frequencies are statistically correlated with various illnesses and traits, the vast majority [of those studies] have no established biological relevance to disease or clinical utility for prognosis or treatment” [109].

Contrary to previous expectations, nearly all the genes associated with diseases are non-essential genes [110]. This means that they do not encode hub proteins and are localized in the functional periphery of the network. Mutations involving central, essential genes are likely to induce severe impairment of pivotal physiological functions, hence leading to increased lethality during the early developmental stages or in extrauterine life. As such, from an evolutionary point of view, such genes are under a higher selective pressure that would ultimately end in deleting them from the population. It is a matter of probability that only “less threatening” mutations – such as those affecting “peripheral” genes – can be preserved within a population from such selection. Therefore, even the physiological relevance of such mutations is concurrently reduced.
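The hub/periphery asymmetry invoked here can be made tangible with the brief sketch below, which builds a synthetic scale-free network (a Barabási–Albert graph, standing in for a real interactome only by assumption) and shows that the overwhelming majority of nodes are sparsely connected while a handful of hubs concentrate the connectivity.

```python
# Minimal sketch (synthetic graph, not a real interactome): in a scale-free
# network most nodes sit in the low-degree "periphery" while a few act as
# highly connected hubs; graph size and parameters are arbitrary assumptions.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)     # synthetic "interactome"
degrees = sorted((d for _, d in G.degree()), reverse=True)

hubs = degrees[:10]                                    # ten most connected nodes
peripheral = sum(1 for d in degrees if d <= 3)         # loosely connected nodes

print("top-10 hub degrees:", hubs)
print(f"nodes with degree <= 3: {peripheral} of {G.number_of_nodes()}")
```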

In the early 1980s, the development and broad implementation of molecular methods, as well as the later development of genomics, significantly increased our understanding of the individual actors participating in cellular processes, ultimately providing a prodigious list of molecular factors. That exhaustive collection of data facilitated the reductionist strategy in drug discovery of converging on (presumed) targets and designing compounds to interfere with them. Inevitably, this approach removed the targets from their physiological context in order to study them at a quasi-atomic level and focused on the optimization of the target–compound cross-talk, placing an almost exclusive emphasis upon the drug–target interaction parameters (i.e., binding affinity and target selectivity) [111]. Therefore, “the criteria to evaluate the potential of a novel molecule shifted from a strict physiological observation of the results obtained with the assayed compounds to a molecular one, where the best lead chemicals were those displaying a strong binding with the target protein and a good specificity profile (i.e. binding to only one target)” [112].

A target is usually defined as a single gene, gene product or molecular mechanism that has been identified as a putative causative factor in disease pathogenesis. A target-based drug would be a compound that selectively modulates the activity of the disease-associated gene or mechanism, without directly involving other pathways.

These conditions are fulfilled only in very few cases. First, the ailment must be attributable in a predominant manner to a genetic mutation or to a specific biochemical mechanism. Second, the causative factor must still contribute to the disease process at the time of treatment. The first condition applies only to a few diseases, as the more common illnesses have a multifactorial origin.

The second condition deserves special attention, as the mechanism/gene responsible for the onset of the illness might have exerted its action during the early pathogenic steps and may no longer be active during the steady state of the disease, when diagnosis is usually reached. Some developmental diseases like schizophrenia [113], mental illness, or depression (for which no direct relationship between target and therapeutic effect has so far been evidenced) [114] fall within this category, as well as some cancers that lose their mutated “driver” oncogenes before progressing [115, 116]. Indeed, as far as cancer treatment is concerned, it is widely recognized that ‘inactivation’ or inhibited expression of oncogenes (like BCR-ABL1, c-Myc, c-ras) is not mandatory for achieving tumor inhibition [117]. Moreover, a major obstacle to establishing an effective “precision”-based therapeutic approach in oncology is represented by the genomic heterogeneity of the tumor – even within the same tumor of a single patient [118] – a condition that worsens after chemotherapy, as the treatment is likely to select more aggressive and resistant clones [119]. This picture suggests that it is virtually impossible to portray the tumor “genomic fingerprint” with the aim of identifying key druggable targets, as the targets are disparate and change concurrently with the “evolution” of the gene-expression pattern. Current treatments are unable to cope with such an overwhelming complexity, and their acknowledged failure in curing cancer cannot be viewed as an unannounced surprise [12, 120, 121]. Ultimately, the improvement in cancer survival observed in the last 40 years cannot be ascribed to anticancer drugs [122], and even drugs approved on the basis of better progression-free survival have subsequently been found not to produce better overall survival than the comparator drug [123]. Overall, “we overdiagnose, overtreat, and overpromise, with high costs and without clear benefits” [124].
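The selection effect mentioned above can be caricatured with the toy model below (all numbers are assumptions chosen purely for illustration, not clinical data): a drug-sensitive clone is suppressed during a treatment window while a small, slower-growing resistant clone keeps expanding, so the relapsing tumor is dominated by the resistant phenotype even though the treatment has hit its nominal target.

```python
# Minimal sketch (toy model, all parameters assumed): competitive growth of a
# drug-sensitive and a drug-resistant clone. The drug kills only the sensitive
# clone during the treatment window, so the regrowing tumour is enriched in
# the resistant phenotype.
def simulate(days=200, dt=0.1, treat_start=50, treat_end=150):
    sensitive, resistant = 1e9, 1e4         # initial cell counts (assumed)
    g_s, g_r = 0.05, 0.04                   # per-day growth rates (assumed)
    kill = 0.15                             # drug-induced death rate, sensitive only
    t = 0.0
    while t < days:
        on_drug = treat_start <= t < treat_end
        sensitive += (g_s - (kill if on_drug else 0.0)) * sensitive * dt
        resistant += g_r * resistant * dt
        t += dt
    return sensitive, resistant

if __name__ == "__main__":
    s, r = simulate()
    print(f"sensitive: {s:.2e}  resistant: {r:.2e}  "
          f"resistant fraction: {r / (s + r):.1%}")
```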

This picture becomes even more complex when mechanisms of action are considered. Mechanism-targeting drugs should be highly specific (i.e., acting only by hitting a single enzyme/pathway) and should affect only a fraction of the overall activity of the target, as a complete blockade of an enzymatic pathway would lead to undesirable, potentially lethal effects. However, the story of imatinib mesylate – the “magic bullet” par excellence – taught us that even highly specific target-based drugs are not as successful as initially thought. Indeed, although Imatinib was designed to act on a single aberrant protein (BCR-ABL) expressed in cancerous cells, it was later shown to inhibit other targets (c-KIT and the platelet-derived growth factor receptor), thus leading to unwarranted side effects [125]. In simple terms, target-based drugs rarely bind specifically to one single target, thereby challenging the concept of the magic bullet. Moreover, the use of knockout animals – in which the target has been deleted in order to vindicate its causative role – later proved to be a debatable approach, leading in many circumstances to equivocal results. Indeed, during development, owing to the redundancy characterizing biochemical pathways, the organism may activate compensatory contrivances and adaptive strategies. Thus, the outcome of a gene deletion can validate the hypothesis and the putative target only in a few, more than ideal situations. Undeniably, it is far from infrequent that knockout animals display unexpected and very surprising effects [126], which can confidently be attributed to the intertwined network of genes and pathways. In addition, when an adult animal – in which the target has already contributed to development – is given a suppressive drug, the partial inhibition of that target will have completely different consequences compared to the situation in which the target has been “deleted” since the early developmental steps.

Additional complexity also stems from the fact that the observed changes in the putative causative target associated with the disease rarely disclose genuine causative relations. These targets are usually identified through statistical associations that provide no reliable information about causative links [127]. Indeed, modifications in target levels and/or their activity may arise as part of the pathogenic process or, alternatively, they may represent adaptive or even antagonistic measures deployed by the system.

This is why we are currently dealing with too many targets and not enough target validation. “Target validation, crucial to rational drug design, is a concept often discussed but rarely defined […] to develop innovative drugs, we need smarter and faster target validation, not increasing numbers of new targets” [128]. Yet, demonstrating a causal role of the putative target is an unavoidable task that cannot be bypassed by adopting more or less refined biometric strategies, such as those suggested by the advocates of the so-called Big Data approach.

The Big Data Illusion

The current interest in big data has generated the widespread illusion that complex algorithms and number manipulation can solve problems without pursuing experimental investigations. Data handling, however, does not produce any new information by itself, and correlation is not enough. Moreover, mere computational brute force cannot compensate for the lack of a theory into which information from experiments needs to fit. Computationally intensive tools for the exploitation of huge data sets are not designed to model the structural characteristics of the underlying system; they are only very efficient ‘exploratory statistical tools’ that (at their best) can act as aids for generating hypotheses. It is thus “vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge” [129].

Proper target validation would require a very different model (as close as possible to the physiological one), a suitable modulation of the target and – even more important – an adequate theory for understanding what we are doing and the meaning of the data we are gathering. We need models and theoretical insights to help guide the collection and interpretation of data. The relatively meagre initial returns from the Human Genome Project demonstrate that data do not translate readily into understanding, let alone treatments [130]. Commendable as these efforts are, they are fundamentally flawed, as the relevance of differences in gene-expression patterns – viewed as the driving causal factor of the disease – is grossly overestimated, namely in oncology [131, 132].

Even using a rigorous predictive statistical framework, characterizing average behavior from big genomics data will not deliver ‘personalized medicine’ [133]. This is because correlations observed in different sets of data are not necessarily evidence of dependency. The problem of spurious correlations is familiar when it comes to the use of quantitative structure–activity relationship models and machine learning to predict the biological activity of molecules. Correlations may not tell us precisely why something is happening, but they alert us that it is happening. However, no matter their ‘depth’ and sophistication, machine-learning algorithms merely fit model forms to “selected” data. They may be capable of effective interpolation, but not of extrapolation beyond their training domain. They offer no structural explanations of the correlations they reveal, many of which are likely to be false positives. Furthermore, most correlations are spurious, i.e. very large databases are bound to contain arbitrary correlations [134]. Finally, diverse applications of Big Data theory have so far met with limited success in scientific domains [135].
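A brief numerical sketch makes the point about spurious correlations concrete (synthetic data only; nothing here is drawn from the studies cited above): with 20,000 purely random “features” measured on 100 “patients”, dozens of features correlate with a purely random outcome at a level that would look “significant” if screened naively.

```python
# Minimal sketch (synthetic data): with enough purely random features, some of
# them will correlate "strongly" with a random outcome by chance alone, which
# is why correlation mining in very large data sets, without a causal model,
# inevitably yields false positives.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 100, 20_000

X = rng.standard_normal((n_samples, n_features))   # random "omics" features
y = rng.standard_normal(n_samples)                 # random "phenotype"

# Pearson correlation of every feature with the outcome
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (y - y.mean()) / y.std()
r = Xc.T @ yc / n_samples

strong = np.sum(np.abs(r) > 0.3)
print(f"features with |r| > 0.3 against pure noise: {strong}")
print(f"largest absolute correlation: {np.abs(r).max():.2f}")
```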

What is the illusion behind all this? That correlation can supersede causation, and that science can advance even without coherent models, unified theories, or any mechanistic explanation at all. This claim assumes the primacy of correlations over causal explanation or, even more radically, the replacement of the latter with the former. Therefore, we are entitled to ask whether we have really come to “the end of theory” [136]. Indeed, “Big Data science renews the primacy of inductive reasoning in the form of technology-based empiricism and has inspired a view of the future in which automated data mining will lead directly to new discoveries [but] more data do not necessarily generate more knowledge. Data by themselves are meaningless. The idea that with enough data, the numbers speak for themselves hardly makes sense” [137].

We are reminded of the story of the blind men and the elephant: local data are difficult to interpret without a (previous) mental model of a pachyderm. In a similar way, various big data initiatives are blindly groping about that great beast that we know as biology. We need theory to help envisage it in all its meaning.

Overall, these issues help explain why so many clinical trials ultimately provide scarcely reproducible results or – even worse – false findings [138, 139], thus supporting what is currently known as the “reproducibility crisis” in biology and medicine [140, 141]. Similar considerations apply to so-called “precision medicine”, a disguised avatar of target-based medicine, which has been bluntly portrayed as an “overall failure”, especially in oncology [142].

What Should We Treat?

Disease as a Process Entailing Non-linear Networks and Environmental Influences Spanning Different Levels

As previously stressed, the inadequacies of the theoretical models of human diseases play a major role in explaining the difficulties encountered in pharmacological research. Indeed, the increase in the rate of drugs failing in late-stage clinical development over the past decade “has been concurrent with the dominance of the assumption that the goal of drug discovery is to design exquisitely selective ligands that act on a single disease target” [143]. This approach not only overlooks the relevance of the multifactorial etiology of disease, but also underestimates the robustness and resilience of the (pathologic) phenotype when stressed by a (pharmacological) perturbation. For instance, single-gene knockout or complete silencing has shown little, contradictory or even no effect on phenotype [144].

The robustness of the pathophenotype can be understood in terms of redundancy and alternative (compensatory) pathways, highly “structured” into scale-free networks, which are usually “triggered” in response to perturbations [145]. It has been observed that large transcriptional regulation networks act upon targets via different and alternative regulatory molecules. Indeed, multiple alternative pathways between regulator and target pairs are the rule rather than the exception. The “selective” activation of one pathway among many others should be ascribed to the changing requirements of the context to which the system belongs [146]. Therefore, robustness in complex dynamical systems can be appropriately understood by considering the existence of multiple attracting domains (multistability), to which the system can suddenly switch (performing a transition reminiscent of the first-order phase transitions observed in chemical physics). This behavior allows the system to access previously unexplored attractors in response to environmental stresses, physical/chemical stimulation or small random perturbations. This property convincingly explains how cancer or bacterial infections can easily develop drug resistance by accessing new attractors (new stable states), thus making “precise”, targeted therapies a futile attempt. Indeed, “by looking at the rich history of failures in targeting individual pathways, it is undoubtedly that targeting individual pathways may never be entirely successful” [147].
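The notion of multistability invoked here can be illustrated with the abstract toy model below (a double-well system, not a model of any specific pathway; the pulse amplitude and timing are arbitrary assumptions): a transient perturbation pushes the state over the barrier separating two attractors, and the system then remains in the new stable state even after the perturbation is withdrawn.

```python
# Minimal sketch (abstract toy model): a system with two attractors,
# dx/dt = x - x**3, with stable states at x = -1 and x = +1. A transient
# forcing pulse (e.g. a drug or environmental stress) pushes the state across
# the barrier; once the pulse ends, the system settles into the other
# attractor and stays there - a cartoon of attractor switching.
def simulate(pulse_start=30.0, pulse_end=40.0, pulse=1.5,
             t_end=100.0, dt=0.01):
    x, t = -1.0, 0.0            # start in the "left" attractor (x = -1)
    trace = []
    while t < t_end:
        forcing = pulse if pulse_start <= t < pulse_end else 0.0
        x += (x - x**3 + forcing) * dt
        t += dt
        trace.append((t, x))
    return trace

if __name__ == "__main__":
    for t, x in simulate()[::2000]:
        print(f"t = {t:6.1f}  x = {x:+.2f}")
```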

Intrinsic robustness has relevant implications for drug discovery, given that it puts a special emphasis on the perturbations that can lead to several changes in the network activity/configuration associated with each disease [148]. However, it should be noted that when tested in non-ideal conditions (i.e., in the presence of an “unfavorable” environmental milieu or when a small molecule/drug is added to the culture), the system displays an unexpected sensitivity. Indeed, nearly all genes (97%) are needed to ensure proper functioning in at least one condition when the cell–microenvironment cross-talk is perturbed, namely when a genetic perturbation is combined with a chemical insult to a biochemical pathway [149]. These findings evidence that the relevance of genes and connected hubs within the network can be properly assessed only when the system is challenged by perturbing the microenvironment. To put the question another way, the emergence of the disease-associated perturbed network can be identified only if the specific microenvironmental field is concurrently contemplated. Therefore, the dominant assumption that a critical, single target may suffice for obtaining a valuable therapeutic effect is, once again, cast into doubt [150]. The examples we mentioned previously – EBV-related diseases, sickle cell anemia and many others – clearly demonstrate that several factors participate in the genesis of the ailment, involving different levels of the organism’s organization (from organelles to organs and systems, like the immune system). Those factors and levels are tightly intertwined, and therefore a successful therapeutic strategy should embrace all of them if the aim is to properly cure the patient, and not only “to fix” a singled-out pathway.

Furthermore, it is widely recognized that the current narrative of the natural history of a disease overlooks the participation of associated factors/systems in shaping different pathophenotypes (and their underlying mechanisms), which may explain subtle but potentially important differences among clinical manifestations. A comprehensive approach to this problem would provide a complex network structure, constituted by modular sub-systems, whose (non-linear) interactions drive the organism’s response toward emergent properties, i.e. disease or health. Accordingly, as advocated by many scholars, human disease can be conceptualized as a (pathological) phenotype, i.e. an emergent property of the human body as a complex system [151, 152]. These considerations have therefore prompted a rethinking of the taxonomy of human diseases [153], whereby data obtained by investigating diverse levels (from the molecular one to the systemic response) would be embraced and incorporated into a unified operational model, in which response parameters are provided by the overall system estimate, rather than by singled-out molecular targets (Fig. 2a, b) [154]. Nonetheless, the identification of such gene-based regulatory networks may be insufficient to understand the emergence of cellular functions as well as the three-dimensional organization of living structures [155].

Fig. 2 Causative factors in complex diseases. (a) Schizophrenia has a strong genetic component, and the risk is 50% for monozygotic twins. It is likely that the disease can be ascribed to a developmental deficit occurring during the prenatal period or around birth. Causes include diverse factors such as viral infections in utero or hypoxia during birth, implying that some pathogenic factors may have been present only during the early developmental steps (and are then gone forever), while the genetic background only confers an increased susceptibility to these environmental cues. Therefore, a treatment can neither influence the factors acting at the beginning of the disease process nor modify the “genetic predisposition”. Ultimately, the disease process cannot be reversed in the adult because the brain cannot be rewired back to the connectivity it should have had in the absence of the pathogenic insults [ref. 112]. (b) Heart diseases are generally supported by pathogenic cues that are largely unknown (the typical case is idiopathic hypertension). By contrast, a number of secondary causes and environmental factors converge in shaping the so-called proximal causes, mostly acting through modulation of the autonomic nervous system (ANS). In this case, there is no room for a treatment aimed at hitting the “primary causes”, while efforts are usually aimed at modulating the activity of the ANS. Furthermore, the parameter of efficacy basically relies on an overall system estimate (electrocardiographic tracing, heart rate variability), rather than on a singled-out molecular target [ref. 153]

Additionally, diseases, as well as their “causative” targets, are usually recognized and defined by their late-appearing manifestations [94]. Unreliability stems here from the erroneous and artefactual reconstruction of the chronological steps through which the disease develops, while diagnostic parameters and putative causative factors are frequently (only) those associated with the steady state of the disease. This approach entails the obvious risk of considering a late-emerging symptom/target as the driving causative element of the pathogenic process.

Such shortcomings are deeply rooted in the disease model we usually adopt, in which illness is considered an almost “stable state”, characterized by predictable, linear dynamics and well-recognizable steps, from onset to end (death or healing).

Yet, being a complex system, a disease would be better depicted as a non-linear dynamic process. As such, it displays classical features of complex systems, including resilience, sensitivity to initial conditions and multi-attractor accessibility. The latter point, in particular, deserves to be explored, as the evolution of the disease is conditioned by travel across a landscape in which the system can enter different attractors (i.e., different clinical outcomes) downstream of a critical transition point. Around such points, the system displays an astonishing sensitivity to environmental cues, showing increased fluctuation of a set of critical parameters [156]. The interaction among these components allows the system to overcome the (energetic) boundary of the attractor, moving towards a different stable state. The transition can be smooth or, more often, abrupt, reminiscent of first-order phase transitions in chemical physics. There is compelling evidence demonstrating that, as reported for many other fields in the natural world, such transition states exist in clinical medicine. The evolution of such a process can be modeled as a time-dependent non-linear dynamical system, in which abrupt deterioration is viewed as a phase transition at a bifurcation point [157, 158]. Depending on the progression of the disease across the metaphorical landscape, the process can be schematically described by different stages: a normal state, a pre-disease state, a disease state (characterized by steady-state conditions), and a number of critical states at which the disease can alternatively move towards progression or healing. Critical states, together with the pre-disease state, display bifurcation points, i.e. sets of both internal and external conditions that can drive the process toward very different fates. Recognizing such critical points by constructing a dynamical network will likely help in understanding the logic of the process [159]. Moreover, the identification of biomarkers (“early-warning signals”) indicating an imminent bifurcation or sudden deterioration before the critical transition occurs can help in planning an appropriate management of the disease [160].
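To make the notion of early-warning signals concrete, the following toy sketch (illustrative parameters only, not drawn from refs. [156–160]) simulates a noisy system slowly driven toward a fold bifurcation; rolling variance and lag-1 autocorrelation, two generic indicators of critical slowing down, rise as the transition point is approached.

```python
# Toy model: a noisy system dx = (r + x - x^3) dt + sigma dW is slowly pushed
# toward a fold bifurcation by ramping the control parameter r. Rising rolling
# variance and lag-1 autocorrelation act as generic "early-warning signals".
import numpy as np

rng = np.random.default_rng(0)
dt, sigma, n_steps = 0.01, 0.05, 60_000
r = np.linspace(-1.0, 0.3, n_steps)   # slow ramp toward the fold (located near r ~ 0.385)
x = np.empty(n_steps)
x[0] = -1.0                            # start inside the "healthy" attractor

for t in range(1, n_steps):
    drift = r[t] + x[t - 1] - x[t - 1] ** 3
    x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def rolling_indicators(series, window=2000, stride=50):
    """Rolling variance and lag-1 autocorrelation over sliding windows."""
    var, ac1 = [], []
    for i in range(window, len(series), stride):
        w = series[i - window:i]
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

var, ac1 = rolling_indicators(x)
q = len(var) // 4
# Both indicators typically increase as the bifurcation is approached,
# flagging a "pre-disease" state before any abrupt jump to another attractor.
print(f"variance : early {var[:q].mean():.4f}  vs  late {var[-q:].mean():.4f}")
print(f"lag-1 AC : early {ac1[:q].mean():.3f}  vs  late {ac1[-q:].mean():.3f}")
```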

A Poly-Target Approach to Modify the “Pathogenic Field”

To cope with this complex challenge, a new pharmacological strategy has been proposed in the last decade on the basis of reconstructed, scale-free networks of intracellular reactions. These networks are relatively insensitive to random damage [161], despite being rather vulnerable to attacks targeted at their most-connected elements (hubs). To assess such effects, experimental studies require the complete suppression of an element from the network in order to probe network stability [162]. Nevertheless, this strategy is highly controversial, as lethal effects usually follow it [163]. On the contrary, evidence shows that the partial inactivation of more than one target (the “poly-target approach”) is more efficient than the complete inactivation of a single target [164]. As demonstrated by a number of studies [165, 166], this approach is gaining momentum, and it represents a promising area for further drug development.
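The “robust yet fragile” behaviour of scale-free networks can be illustrated with a brief simulation (a generic sketch, not reproducing the analyses of refs. [161–164]): a preferential-attachment network barely fragments under random node failure, yet its giant component shrinks markedly when the same number of hubs is removed.

```python
# Generic sketch: robustness of a scale-free network under random node failure
# versus a targeted attack on its hubs, measured by the surviving fraction of
# the largest connected component.
import random
import networkx as nx

N, FRACTION_REMOVED = 2000, 0.05
random.seed(1)
g = nx.barabasi_albert_graph(N, m=2, seed=1)   # preferential-attachment (scale-free) graph

def largest_component_fraction(graph, original_n=N):
    """Fraction of the original nodes retained in the largest connected component."""
    if graph.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(graph)) / original_n

def remove_nodes(graph, nodes):
    h = graph.copy()
    h.remove_nodes_from(nodes)
    return h

k = int(FRACTION_REMOVED * N)
random_nodes = random.sample(list(g.nodes()), k)            # random failure
hubs = sorted(g.nodes(), key=g.degree, reverse=True)[:k]    # targeted hub attack

print("random failure:", round(largest_component_fraction(remove_nodes(g, random_nodes)), 3))
print("hub attack    :", round(largest_component_fraction(remove_nodes(g, hubs)), 3))
# The giant component barely shrinks after random failures, whereas removing the
# same number of hubs damages it much more, mirroring the behaviour discussed above.
```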

This is especially true when the mechanisms of action of herbal mixtures, belonging to so-called complementary and alternative medicine (CAM), come into play. CAM-related activities involve both classical and non-conventional mechanisms of action, nonlinear multiple interactions [167], poly-target hitting [168, 169], supra-cellular effects [170], and detoxifying properties [171]. Overall, CAM mechanisms of activity embrace the interconnectedness among different levels of organization of living organisms (from the molecular to the organ plane), in a contextual view of human beings as inseparable from and responsive to their environments [172]. CAM treatment relies on a different theoretical approach, entailing its own clinical criteria, treatment targets and patterning of response. These beliefs and principles run counter to the assumptions of reductionism and to conventional biomedical research methods, which dismantle and test only single aspects of the CAM system. Instead, complex herbal remedies are in themselves true complex systems that interact with another complex system (the living organism), displaying a nested network of relationships [173]. Changes elicited by herbal mixtures can modulate self-organization and emergence in the living organism through a wide array of processes already acknowledged by the theory of complex systems, such as synergetics (e.g., multicomponent global coherence) and critical phase transitions across rugged landscapes, which allow the system to access multiple “basins” and to display interactive multistability among a variety of attractors [174]. This dynamic pattern establishes bidirectional feedback across scales, explaining how small stimuli often result in large effects and how seemingly catastrophic events can, at times, result in merely a ripple across the system [175].

All of these entwined factors may help explain the astonishing synergistic properties displayed by complex mixtures with respect to the effects triggered by their individual components in isolation. It is noteworthy that synergy cannot be fully explained by the multi-target mechanism of action displayed by mixtures of natural compounds: the pharmacological polyvalence of many plant constituents might account for an amplification effect by a factor of 2 or 3, but not by a factor of 10 or more, as reported by several studies [176, 177]. Such findings call for a very different mechanism underlying synergy.
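As a purely illustrative aid, the sketch below shows how such amplification factors are usually quantified: the observed effect of a combination is compared with the effect expected under independent action (here using Bliss independence, one of several standard synergy criteria). All numbers are hypothetical.

```python
# Illustrative only: Bliss independence gives the effect expected if two agents
# acted independently; an observed effect well above it is labeled synergistic.
def bliss_expected(e_a: float, e_b: float) -> float:
    """Expected fractional effect of a combination under Bliss independence."""
    return e_a + e_b - e_a * e_b

# Hypothetical fractional effects (0 = no effect, 1 = maximal effect).
e_a, e_b = 0.10, 0.15          # individual components tested in isolation
e_observed = 0.70              # effect of the whole mixture (hypothetical)

e_expected = bliss_expected(e_a, e_b)            # 0.235
fold_amplification = e_observed / max(e_a, e_b)  # ~4.7x over the best single agent

print(f"expected (independent action): {e_expected:.3f}")
print(f"observed combination effect  : {e_observed:.3f}")
print(f"fold amplification vs best single agent: {fold_amplification:.1f}x")
# A large excess over the independence expectation is the kind of synergy that
# multi-target action alone struggles to explain.
```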

Synergy

Here, synergy is an “emergent” property, susceptible to precise mathematical definition [178], and as such it belongs to the whole system and is not predictable from the properties of the parts [179]. Undeniably, attempts to identify the mechanisms responsible for synergistic effects by studying components in isolation have provided elusive answers. Nonetheless, by testing standardized mono- and multi-extract combinations against well-known standard drugs, synergy has been clearly established for a number of herbal mixtures, under in vitro as well as in vivo conditions [180,181,182]. In some instances, these synergistic effects can be explained by previously unrecognized factors or by “indirect” mechanisms of action, such as those exerted by herbal components on the patient’s microbiota. For instance, several Traditional Chinese Medicine remedies (TCMR) comprise soluble, active small molecules (including saponins, iridoid glycosides and flavone glycosides) as well as complex polysaccharides, which are usually considered “irrelevant” given that they are generally indigestible when taken orally and hardly absorbable in the gastrointestinal tract [183]. Accordingly, in modern industrialized TCMR preparations, polysaccharides are habitually removed to meet the purity and dosage requirements of the final products. Likewise, scientific research on TCM decoctions has also excluded polysaccharides from the biologically key chemicals [184]. However, such purified products not only lack efficacy when clinically tested but are also deprived of scientific support. These findings suggest that the “irrelevant” polysaccharides must, in spite of everything, have an effect. Indeed, convincing evidence has been provided demonstrating that such components (usually represented in the diet) may trigger complex therapeutic effects by targeting the host microbiota, selectively stimulating the growth of a subset of beneficial gut bacteria, and consequently sustaining the homeostasis of the gut microbial community as well as the host’s health [185, 186]. These findings are worthy of note, given that the relationship between microbiota homeostasis and human disease is currently an area of extensive research, and developing strategies to modulate microbiota function and composition could represent a reliable option in the management of several illnesses [189]. Undoubtedly, further studies are warranted to explore in depth the mechanisms on which synergy relies. In this field, we are just taking the first, tentative steps.

Conclusion

The network-based models, such as those extensively studied by Csermely and his team [165], despite their heuristic and epistemological value, only partially explain the unfathomable complexity behind the outbreak of a disease. Current network models of human diseases suffer from an excess of “specificity”, as they are habitually centered on an integrated representation of the cell’s biochemical and genetic pathways, and, as such, they discard the contribution of non-genetic factors and microenvironmental constraints. In fact, notwithstanding the sophisticated, even non-linear, models adopted to represent complex genomic-proteomic networks, those approaches rest fundamentally on a “preformationist” view, being centered on a “genetic program” that operates deterministically by itself. Accordingly, since the eighties, the agenda of pharmacological discovery has been dictated by the aim of identifying “relevant” molecules (along with their classical rules of interaction), abstracting from the true, physiological response of cells, tissues and organisms. In this perspective, genes assume the fundamental causal role while cells simply act as causal proxies, dispensable because they represent an irrelevant intermediate level between the molecular input and the organismal output. Such a framework, both theoretically and methodologically, is no longer tenable [187].

First, we need a comprehensive model able to capture the complexity on which the disease relies (Fig. 3).

Fig. 3

Hypothetical diagram sketching the non-linear dynamic interactions among different causative factors distributed across different hierarchical levels, including both internal and environmental determinants

This approach should aim at ascertaining the different targets whose concurrence is mandatory for shaping the specific patho-phenotype we are dealing with. These putative targets are spatially distributed in diverse cells and tissues, thus involving different tiers (from sub-organelles to organs). The delocalization of potential targets within different tissues and organs is likely to entail differential accessibility to drugs, whose bioavailability is tissue-dependent. Moreover, given that a disease is properly a dynamic process and not a steady state, treatments should be diversified according to a target selection dictated by the timing of the disease process. Such an approach can help in detecting the pre-disease state or, alternatively, the critical transition points from which the illness might access different attractors, ultimately leading to different outcomes (spontaneous healing, chronicity, disease exacerbation, death).

Second, the question now is how to design systems-oriented drugs that tackle both the multifactorial pathogenic determinants of the disease and the intrinsic robustness of the living organism [188]. To start with, this attempt requires an in-depth understanding of cellular- and organism-level dynamics, combined with advanced high-throughput screening and computational analysis tools. Instead of a single target, pharmacological research should consider a polyvalence-based approach, i.e. the use of multiple drugs, or of drugs affecting several targets localized at different levels. Additionally, above and beyond classical pharmacodynamics, unconventional mechanisms of action urgently need to be investigated. Thereby, these non-canonical mechanisms of action, as well as the different, hierarchically structured levels of “causation”, should be integrated into the network models currently in use.

Systems biology has already recognized that the main challenge, in both a biological and a mathematical sense, is to find out how to control complex network systems. Highly distributed nonlinear network systems remain quite “intractable” compared with the simple feedback systems usually handled by classical and modern control theory. Yet the most relevant hurdle stems from the still controversial nature of such systems. The vast majority of studies deal with molecular- or cellular-based models and consequently consider neither the influence of the microenvironment nor that of the higher control levels (immune and neuro-endocrine systems). Such a limitation is specifically exemplified by the kind of parameter we usually choose to ascertain responsiveness to treatment. While the disease response is habitually simplified by describing changes in a single (or a few) parameter(s), we instead have to move from target-related parameters to system parameters, which can capture those modifications likely to impact the whole system’s dynamics. Such an approach is partly ensured by metabolomics studies [189, 190], given that metabolite fluctuations usually amplify subtle modulations of the genome/proteome network, thus representing a more sensitive criterion for grasping changes in complex-system dynamics [191, 192]. In this way, “metabolomics represents more closely the phenotype of an organism” [193].
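A hypothetical, simulated illustration of this shift from target-related to system-level parameters (not based on the data of refs. [189–193]): when a perturbation nudges many metabolites only slightly, the effect size measured on any single metabolite remains weak, whereas an aggregate score computed over the whole profile amplifies the signal roughly by the square root of the number of metabolites.

```python
# Hypothetical simulation: a perturbation shifts many metabolites by a small
# amount. The effect size on any single metabolite stays weak, while a simple
# "system parameter" computed over the whole profile captures the shift.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_metabolites, shift = 40, 200, 0.3

healthy  = rng.normal(0.0, 1.0, (n_samples, n_metabolites))
diseased = rng.normal(shift, 1.0, (n_samples, n_metabolites))

# Target-related view: effect size (Cohen's d) for one arbitrary metabolite.
d_single = (diseased[:, 0].mean() - healthy[:, 0].mean()) / healthy[:, 0].std()

# System-level view: mean z-score of each sample against the healthy baseline.
mu, sd = healthy.mean(axis=0), healthy.std(axis=0)

def system_score(profiles):
    """Average standardized deviation across the whole metabolite profile."""
    return ((profiles - mu) / sd).mean(axis=1)

d_system = ((system_score(diseased).mean() - system_score(healthy).mean())
            / system_score(healthy).std())

print(f"effect size, single metabolite : {d_single:.2f}")   # weak, easily missed
print(f"effect size, system parameter  : {d_system:.2f}")   # strongly amplified
```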

Third, we should shift from targets to processes, thereby aiming to influence several targets (poly-target treatments), conceivably through different mechanisms of action. Modulating processes implies that we should be able to “redraw” the disease-related landscape, favoring the system’s displacement from the pre-clinical state, or from true disease states, towards healing pathways. The disease should be “reverted”, possibly involving a “reprogramming” of the gene-regulatory network. A remarkable case in point is tumor reversion, a promising field of investigation [194]. An increasing number of reports has ascertained the occurrence of cancer reversion, both in vitro and in vivo [195,196,197]. This process necessarily encompasses a change in cell-stroma interactions, leading to profound modifications in tissue architecture. As cancer can be successfully ‘reprogrammed’ through the modification of the dynamical cross-talk with its microenvironment, the cell-stroma interactive network must be recognized as a target for pharmacological intervention [198]. It is worth noting that several natural compounds, as well as morphogenetic factors obtained from eggs [199] or animal embryos [200,201,202], have been shown to induce a significant reversion of the tumor phenotype in a wide array of cancer types.

Fourth, we have to look at compounds displaying “pleiotropic” effects, i.e. able to tackle several targets. A dominant paradigm in drug discovery is the design of maximally selective ligands acting on individual drug targets. However, many effective drugs act via the modulation of multiple proteins rather than single targets, and, as recalled above, exquisitely selective compounds may exhibit lower than desired clinical efficacy compared with multitarget drugs. It is worth noting that a number of molecules from natural herbs and foods share this property [203]. Natural products and their derivatives have historically been invaluable as a source of therapeutic agents. Despite the skepticism this class of potential drugs encountered in recent decades, technological advances, coupled with unrealized expectations from current lead-generation strategies, have fostered a renewed interest in natural products in drug discovery [28, 204]. Indeed, many natural molecules, amenable to being engineered to amplify their efficacy, have already been recognized as effective in the treatment of several diseases [205]. High-throughput techniques and new extraction strategies can help in identifying classes of beneficial compounds. A remarkable case in point is the discovery of the anti-malaria properties of Artemisia extract, which relied on the ‘traditional’ efficacy recorded for this plant while the rest of the world was searching for a hypothetical ‘synthetic magic bullet’. In fact, the pharmacological principle was extracted according to a truly ancient protocol dating back to the fourth century AD (the Handbook of Prescriptions for Emergencies) [206], because the ‘modern’ purification methods were ineffective. The rationale for extracting this specific principle was suggested by an oral medical tradition dating back to the medieval age. The extracted drug, introduced into treatment in the ‘70s, was not ‘recognized’ by the Western world until a few years ago and is still not patentable (only the extraction procedure has been marketed). This is an outstanding example of technological failure to achieve the required ‘goal’, and it also demonstrates that ‘technology’ (in excess!) may delay the discovery of new solutions. Artemisia annua is currently deemed a pivotal asset in malaria management, and it is worth noting that such a result was achieved by working far from the current scientific mainstream, in scientific structures with loose links to so-called ‘Big Science’, and was originally published only in Chinese journals by Dr. Tu Youyou. However, such an ‘unconventional’ approach led to her finally being awarded the Nobel Prize [206].

What, then, should we do? Clearly, many ventures in the biotechnology industry, from the earliest recombinant-DNA-based schemes for particular products to some of the agricultural genetically modified organisms, have been successful (though not always uncontroversial). Yet the looming difficulties will concern primarily the premises on which therapies are planned. And, for these, companies may well have to go back to academia or, at least, to academics studying new and unexplored paths. Namely, systems biology, which today is still largely an enterprise of “academic” (i.e. non-commercial) interest, may find itself increasingly incorporated into the research programs of industrial enterprises. How to maximize creativity in biological science is a topic rarely discussed and yet critical to success in improving health. We believe that the needed approaches are not simply to flog individuals to try harder, but to build systems and infrastructures that enhance creative effort. Lateral thinking can and should be taught. Probably, as has happened in the past, new avenues – new theoretical approaches and different putative drugs – should be explored to counteract the decline in drug discovery we are facing nowadays. Indeed, it is high time to address such challenging issues and to restore both confidence and efficiency to the pharmaceutical industry. The time is ripe to move in this direction.