1 Introduction

The Somatic Mutation Theory (SMT) (Hanahan and Weinberg 2000) has recently been challenged on its fundamentals by the Tissue Organization Field Theory of carcinogenesis (TOFT) (Sonnenschein and Soto 1999). TOFT belongs to an old research tradition dating back to the nineteenth century (Triolo 1964; Vineis et al. 2010), updated and modified over 15 years ago by Sonnenschein and Soto (1999). According to TOFT, cancer is a disease occurring at the tissue level of biological organization, arising as a consequence of the disruption of the morphogenetic field (MF) that orchestrates histogenesis and organogenesis from fertilization to senescence. TOFT has challenged the hegemony of SMT, which, starting with Theodor Boveri (1929), posits that cancer arises from a single cell through an accumulation of somatic genomic mutations (Hanahan and Weinberg 2000).

A recent paper by Bedessem and Ruphy (2015) denies that TOFT could be an alternative hypothesis to SMT, claiming that both theories suffer from a “lack of non-ambiguous experimental proofs” and that the “level of argumentation (is) insufficient to irrevocably choose one of the two theories”. However, Bedessem and Ruphy mainly criticize TOFT and deny that a true ‘crisis’ exists in this field that would justify a so-called ‘paradigm shift’. In addition, they assert that only narrow differences exist between SMT and TOFT, and that those divergences arise mainly on ‘metaphysical ground’, because “philosophical arguments” are used to “compensate the deficiency in the empirical demonstration”. Bedessem and Ruphy go on to minimize the consequences of the shortcomings hitherto gathered within the SMT framework. Consequently, they propose an “integration” between SMT and TOFT in order to overcome the hypothesized irreconcilability.

Herein, we rebut Bedessem and Ruphy’s views by considering: a) the current shortcomings of the SMT; b) the different experimental and philosophical premises on which SMT and TOFT respectively rely; and finally c) whether a crisis is taking place in the current framework, and whether it would justify a paradigm shift.

2 Evidence of the SMT Failure

2.1 The Role of Mutations

SMT relies entirely on the “causative role” of somatic mutations during carcinogenesis. Bedessem and Ruphy (2015) state that “cancer cells often exhibit large scale genetic perturbations, with a high number of local mutations and chromosomal anomalies. This is contradictory with the classical version of SMT, which considers that tumorigenesis is due to punctual genetic mutations”. While this is a worthy point, it overlooks several further arguments that stand against the plausibility of the ‘causative role’ of mutations in cancer initiation. To begin with, it has been acknowledged that mutations have been detected in only 30–40 % of tumor samples (Chanock and Thomas 2007). It is then relevant to ask to what factor(s) other than mutations the remaining 60–70 % of tumors are due. Also, somatic mutations have now been detected in normal as well as in inflamed tissues (Washington et al. 2000; Zhang et al. 1997; Lupski 2013; Yamanishi et al. 2002). Moreover, deep-sequencing analysis has revealed that non-malignant skin cells in healthy volunteers harbor many more so-called cancer-driving mutations than expected (Martincorena et al. 2015).

Evidence arguing for the irrelevance of mutations as a target for therapeutic management comes from studies performed on chronic myelogenous leukemia. It has been claimed that the abnormal fusion tyrosine kinase BCR-ABL acts as an “oncogene” and is deemed the key initiating factor in myelogenous neoplastic transformation. Inhibition of the corresponding oncoproteins by means of tyrosine kinase inhibitors (TKIs) has indeed led to significant short-term beneficial responses, yet without achieving any benefit in terms of long-term survival. This latter failure has been ascribed to the fact that a reservoir of cancer stem cells still proliferates because these cells lack the alleged targeted mutated gene and are therefore insensitive to the TKI (Pellicano et al. 2014; Jiang et al. 2007). Thus, according to this rationale, myeloid cells would become transformed by an oncogene that, curiously, is absent from the cancer stem cell population from which the cancer is thought to arise. Recently, it has been shown that the genome signature of three different ependymoma tumors lacks tumor-driving mutations (while displaying epigenetic modifications), whereas others show neither genomic mutations nor epigenetic aberrations (Mack et al. 2014). This finding implies that cancer may arise even in the absence of any ‘genomic deregulation’ or ‘mutation’ (Greenman et al. 2007; Imielinski et al. 2012; Lawrence et al. 2013). Indeed, an additional challenge to SMT comes from recent sequencing studies in which zero mutations were found in the DNA of some tumors. Remarkably, the sequencing studies dealing with that subject made little mention of the fact that some tumors had zero mutations (Kan et al. 2010; Baker 2015).

Therefore, as emphasized by a Nature editorialist, “it urge(s) us to revisit the role of gene mutations in cancer” and to ask, “if not gene mutations, what else could cause cancer?” (Versteeg 2014). In summary, on the one hand, these data challenge the presumptive causative role played by somatic mutations in cancer onset; on the other, the fact that they are so rarely quoted and appropriately discussed is puzzling.

Another source of controversy is represented by cancers arising in transgenic animals or in humans as ‘hereditary tumors’. Mice “engineered” to harbor several oncogenic driver mutations develop cancer, lending apparent support to the mutation-driven cancer model (Van Dyke and Jacks 2002). However, transgenic experiments, like hereditary cancers, are not concerned with a single ‘renegade’ cell committed to becoming a cancer, but with a whole organism in which all the cells, and hence all tissues and organs, are ‘mutated’. Additionally, some of these transgenic systems should be considered ‘conditional’ models, given that transgene expression is highly dependent on the ‘permissive’ effect of some unknown microenvironmental factors. Indeed, addition of doxycycline is required to either induce or repress (according to the experimental system used) the transgene expression (Baron and Bujard 2000). This reveals a very relevant difference with respect to mutation activity in hereditary cancers and further stresses the artificial character of such models.

Both hereditary cancers and tumors arising in transgenic animals may indeed be viewed as consequences of inborn, inherited errors of development—the result of a process initiated by germ-line mutation(s) in the genome of one or both gametes (sperm and/or ovum)—as opposed to sporadic cancers (Sonnenschein et al. 2014a). The mutated genome of the resulting zygote will endow all the cells in the morphogenetic fields of the developing organism with such genomic mutation(s). Examples of this variety of inborn errors of development include retinoblastoma, Gorlin syndrome, xeroderma pigmentosum, BRCA-1 and -2 neoplasias, and many others (Garber and Offit 2005). Therefore, cancers arising from inborn errors of development should be considered pathogenetic entities separate from sporadic cancers, given that they ‘emerge’ from very different morphogenetic fields (Sonnenschein et al. 2014a).

Future refinement of analytical and bioinformatics techniques may allow the identification of unexpected so-called cancer-driver mutations (Raphael et al. 2014). However, those newly uncovered mutations may be mutually exclusive: the effect of one mutation may nullify that of another. This happens, for example, when one mutation fosters the expression of a molecular factor while a concomitant mutation inhibits the synthesis of the same factor (Vandin et al. 2012). Alternatively, as the number of mutations increases, their respective ‘causal power’ becomes significantly reduced; that is, if carcinogenesis requires hundreds, or even thousands, of mutated genes, the pathogenetic ‘weight’ of each single mutation decreases proportionally.
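The dilution argument above can be stated schematically. The following is a heuristic illustration of the proportionality claim, not a quantitative model drawn from the literature:

```latex
% Heuristic sketch of the 'causal dilution' argument: if n mutations
% are jointly required for carcinogenesis, the pathogenetic 'weight'
% w attributable to any single mutation scales inversely with n:
w \sim \frac{1}{n}
% e.g., if n = 100 mutations are jointly required, each carries on the
% order of 1\% of the overall causal weight; if n = 1000, about 0.1\%.
```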

2.2 Are Current Targeted Therapies Valid Arguments Favoring the SMT?

Bedessem and Ruphy claim that “some targeted therapies have provided successful results” and that these results provide a “strong experimental argument” that favors the SMT. However, the references quoted by the authors are limited to a few instances, while ignoring that even SMT followers acknowledge “that targeted therapies are generally not curative or even enduringly effective” (Hanahan 2014). Therefore, the “promise of molecularly targeted therapies remains elusive” (Kamb 2010). Others have shared this conclusion (Seymour and Mothersill 2013; Wheatley 2014). Admittedly, some limited progress has been achieved for a few specific cancers (Bailar and Gornik 1997; AACR Cancer Progress Report 2012). Mortality changes mostly reflect declining incidence (largely due to reduced tobacco smoking) or early detection, whereas drug innovation is likely to have reduced the cancer mortality rate by only 4 % (Lichtenberg 2010). Unambiguously, cancer remains a major worldwide public health problem, and the War on Cancer is far from being won (Ness 2010). One can conclude that the most promising approach in cancer management still relies on prevention (reducing carcinogen exposure and vaccinating against hepatitis and human papilloma viruses).

3 Epistemological Bases of Current Cancer Theories

Experimental and clinical evidence have provided a reliable ground on which TOFT has been built (Clark 1995; Potter 2001; Arnold et al. 2002; McCullough et al. 1997). Currently, SMT and TOFT are recognized to be different paradigms by independent researchers (Wolkenhauer and Green 2013; D’Anselmi et al. 2011; Sonnenschein et al. 2014b; Satgé and Bénard 2008; Prehn 2005; Schwartz et al. 2002; Baker et al. 2010; Laforge et al. 2005; Longo and Montévil 2014; Levin 2012; Smythies 2015; Tarin 2011).

The SMT and the TOFT diverge primarily in their epistemological premises. For the sake of simplicity, SMT can be seen as a representative example of reductionism, while TOFT adopts an ‘organicist’ framework (Marcum 2010). Both approaches arise as attempts to capture the complexity underpinning biology (Brigandt and Love 2015). The search for ‘intelligibility’ of the ‘natural world’ has historically been dominated—since Descartes and De la Forge (1664)—by the search for a ‘fundamental invariant’, thought to represent the ‘key factor’ from which every process originates and develops according to a few simple rules (Miquel 2008). By analogy with what happened in physics, in biology the key factor was surmised to lie at the lowest level—i.e., the molecular one—which was assumed to represent a ‘privileged level of causality’ to which every other level and its ensuing complexity can be ‘reduced’. By reductionism is meant the view that every phenomenon can be explained by principles governing the smallest components participating in the observed phenomenon (Nagel 1998). According to the Stanford Encyclopedia of Philosophy (2015), “ontological reduction is the idea that each particular biological system (e.g., an organism) is constituted by nothing but molecules and their interactions. In metaphysics, this idea is often called physicalism”. Notably, currently prevailing conceptions of physicalism reject downward causation because it is not compatible with the physicalist claim that “all biological principles should be underived law about physical systems” (Soto et al. 2008a). Yet biological systems are truly characterized by downward causation and diachronic emergence, which cannot be accommodated within the physicalist reductionist framework.
Indeed, in the context of complex systems, physical forces and constraints acquire new properties (emergence) that are not anticipated or fixed at the beginning of a process: a mechanical force may acquire novel properties, such as that of inducing gene expression, which cannot be predicted from our knowledge of the physical world. In this sense, physicalism is a truly ‘reductionist’ approach, as it is unable to accommodate the emergent properties of the living (Soto et al. 2008a).

3.1 SMT and Reductionism

Genetic reductionism (frequently regarded as synonymous with ‘genetic determinism’) is the belief that human phenotypes, and even the complex traits of living beings, may be explained solely by the activity of genes, to which every biological feature may be ‘reduced’. The inappropriate use of metaphors borrowed from information theory has further reinforced the ‘causal’ power with which genes are credited (Longo et al. 2012). Thus, genetic reductionism may be viewed as an even more radical stance than ‘generic’ reductionism, given that for genetic reductionism only ‘genes’ matter, to the exclusion of any other sort of molecule or mechanism (Mazzocchi 2008). The genetic reductionism underlying SMT may be sketched as follows: a) the genome is the ‘ontological’ hardcore of an organism: it defines both the phenotype and its dynamical responses to environmental stimuli, according to a ‘program’ hidden in the DNA that dictates, for better or worse, a cell’s fate. b) In this way, the gene has replaced the Aristotelian deus ex machina, becoming a ‘deus in machina’. As a consequence, the principle of causality must be found in the DNA. The causal chain is believed to proceed from genes to proteins—ultimately determining every cell’s function and structure (horizontal unidirectional causality)—and from cells to tissues, organs and even the whole organism along a vertical tree (bottom-up causal flow). c) Dynamic interactions across this endless chain of causality are viewed as behaving in a linear, predictable manner. d) A further corollary, hastily borrowed from information theory, implies that biological functions are entirely governed by the DNA-based ‘program’. Hence, by analogy with computers, the modulation of cell activities is switched on or off by ‘molecular signals’ emanating from genes.
These assumptions—each of them belonging to a sort of “genomic metaphysics” (Mauron 2002)—are no longer supported by experimental data (Weatherall 2001; Longo et al. 2012; Shapiro 2009). Yet these concepts persist in shaping both the mainstream and the methodological activity of scholars, despite having been criticized over the past years (Strohman 2002; Noble 2008a, b). It is by now recognized that biological causation takes place at different, intertwined levels. Namely, the non-linear dynamics occurring at lower (molecular) levels are shaped by higher-level constraints that are superimposed on the intrinsic stochasticity of gene expression (Dinicola et al. 2011; Pasqualato et al. 2012; Kupiec 1983, 1997; Dokukin et al. 2011, 2015).

To accommodate such disturbing results, ‘tough’ reductionism has been subject to extensive critical reappraisal attempting to reframe the classical ‘vision’ on which molecular biology has relied since its beginnings (Cornish-Bowden 2011). This endeavor has enlisted heterogeneous disciplines in an effort to refine the understanding of gene-regulatory processes. Omics disciplines have added some useful insights into the intricate networks in which genes, proteins and other macromolecular components are entangled, yet without questioning the very fundamentals of molecular biology (Kitano 2002). According to an ‘omic’ perspective, causality does not depend on a single gene or on a discrete number of genes. Instead, it depends on supra-genomic regulatory rules that constrain the genome to function as a ‘whole’ (Tsuchiya et al. 2009). This is indisputably a step forward, and it allows one to grasp how the genome acts as a ‘coherent system’, characterized by non-linear dynamics, hysteresis and bistability (Bizzarri and Giuliani 2011). With few exceptions, however, such approaches are still framed according to a gene-centered paradigm, in which biological causality flows from genes to cells. Those models do not take into consideration how such processes are actually shaped by supra-cellular levels (the tissue microenvironment) (Müller and Newman 2003). On the contrary, an integrated approach, spanning from molecules to cell–stroma interactions, may instead explain the ‘emergent’ properties of living complex systems (Bizzarri et al. 2013). Reductionism is unable to deal with this complexity, and its failure must be viewed as ontological rather than methodological: reductionism cannot explain emergent properties in principle, not merely because of inadequate methods. An explanatory case in point is provided by Rayleigh–Bénard convection, a widely recognized paradigmatic example for studying inter-level causation (Bishop 2008).
Classical philosophical accounts of causation—e.g., counterfactual, logical, probabilistic, process, regularity, structural—have indeed been heavily influenced by a nearly exclusive focus on linear models (Pearl 2000). However, in the context of complex systems, many additional channels and levels of interaction not envisioned in linear models are usually observed. Dealing with these additional forms of interaction seems possible only within a new, non-reductionist paradigm (Longo and Montévil 2014). In conclusion, since cancer is an ‘emergent’ phenomenon, it cannot be ‘reduced’ to its lowest-level components, whatever their dynamics.
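To make the Rayleigh–Bénard example concrete for readers unfamiliar with it: the onset of convection is governed by a single dimensionless control parameter. The formula below is standard fluid dynamics, added here purely as an illustration of inter-level causation:

```latex
% Rayleigh number for a fluid layer of thickness d heated from below:
\mathrm{Ra} = \frac{g \, \alpha \, \Delta T \, d^{3}}{\nu \, \kappa}
% g: gravitational acceleration; \alpha: thermal expansion coefficient;
% \Delta T: temperature difference across the layer;
% \nu: kinematic viscosity; \kappa: thermal diffusivity.
% When Ra exceeds a critical value, macroscopic convection rolls emerge
% and thereafter constrain the motions of the individual molecules:
% the macroscopic pattern (higher level) shapes the behavior of the
% components (lower level), i.e., top-down causation.
```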

3.2 TOFT and Organicism

TOFT has been shaped according to an open-ended organicism (Gilbert and Sarkar 2000). Organicism—more commonly known as “systems theory”—focuses on systems rather than on single components, placing the emphasis on both bottom-up and top-down causation (Soto et al. 2008a). TOFT posits that ‘causative’ processes take place within, and are driven by, the ‘morphogenetic field’ (MF) (Sonnenschein and Soto 1999). Originally introduced by Weiss (1939), the MF concept has received renewed appreciation in the last few years (Gilbert et al. 1996). The MF acts within a living system by ‘driving’ the dynamics of all its components (genes, cytoskeleton, enzymes and so on), constraining them into a coherent behavior. Undeniably, those processes are intrinsically ‘bounded’ by the very nature of the raw components, the past history of the system, physical-electromagnetic constraints, and chemical gradients.

Within MFs, a specific class of molecules—morphostats—mostly produced by stromal (fibroblast) and non-epithelial cells (macrophages), has been credited with modulating cell proliferation. Morphostats and morphogens perform a wide array of functions in maintaining tissue architecture and appropriate development (Potter 2001). In turn, both are tightly regulated by tissue constraints. Therefore, any disruption in tissue architecture may deregulate the morphostat/morphogen balance, leading to further abnormalities in cell proliferation and tissue organization. According to this framework, “loss of normal tissue microarchitecture is a (perhaps the) fundamental step in carcinogenesis” (Potter 2007), a conclusion compatible with the TOFT. Moreover, pre-neoplastic lesions triggered by alterations in the MF may in turn contribute to further deregulating the architecture of the surrounding milieu and, “once this ‘pathological’ new niche is formed it set(s) the stage for tumor progression to occur” (Laconi 2007). The integrated sum of all these factors determines the MF’s intrinsic ‘strength’, which allows cells travelling across an attractor landscape to ultimately find the most appropriate ‘location’ (phenotypic determination) (Nicolis and Prigogine 1989; Huang and Ingber 2006–2007).

A meaningful case in point is the very specific situation in which the MF is influenced by gravitational forces. Remarkably, living cells exposed to microgravity spontaneously acquire two distinct phenotypes, despite the absence of changes in their gene expression pattern (Testa et al. 2014; Masiello et al. 2014; Pisanu et al. 2014). Non-equilibrium theory (Kondepudi and Prigogine 1983) provides a convincing explanation of such counterintuitive phenomena. A far-from-equilibrium open system can form stationary spatial patterns after undergoing a phase transition, leading to new asymmetric configurations. These states are equally accessible, as there exists a complete symmetry between the emerging configurations, reflected in the symmetry of the bifurcation diagram. However, the superimposition of an external constraint may break the system’s symmetry, bestowing a preferential directionality according to which the system evolves by occupying a selected state (phenotypic determination). Conversely, when constraints are removed, as occurs in microgravity, the driving control over the phenotypic switch is lost, and the system acquires additional degrees of freedom. In this way the system will display more than one phenotype, in principle ‘incompatible’ with its native genetic ‘commitment’ (Bizzarri et al. 2014). Those investigations may also explain how decisive a ‘weak’ force can be in deflecting a normal developmental path, thus shaping morphologies and functions (Bravi and Longo 2015).
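The symmetry-breaking scenario described above can be illustrated by the normal form of a pitchfork bifurcation with an external biasing term. This is a minimal textbook sketch of the dynamical-systems idea, not a model of any specific cellular system:

```latex
% Supercritical pitchfork bifurcation with a symmetry-breaking term h:
\frac{dx}{dt} = \mu x - x^{3} + h
% For h = 0 and \mu > 0, the two stable branches x = \pm\sqrt{\mu}
% are perfectly symmetric: the system may settle in either state
% (cf. the symmetric bifurcation diagram mentioned in the text).
% A small h \neq 0 (an external constraint, e.g. gravity) breaks the
% symmetry and selects one branch (phenotypic determination); removing
% the constraint (h \to 0, as in microgravity) restores access to both
% states, i.e., additional degrees of freedom.
```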

By analogy, something similar happens within an embryonic MF or a normal 3D environment. Results from studies in which cancer cells have been cultured in specific morphogenetic fields (3D, embryonic or maternal) reinforce the TOFT narrative by showing how microenvironments override the activity of mutated genes, thereby promoting tumor ‘reversion’. By placing cancer cells into “normal” microenvironments—i.e., by restoring a normal, strong morphogenetic field—the tumor phenotype can be reverted to a normal one. Cancer cells exposed to embryonic morphogenetic fields (Bizzarri et al. 2011; D’Anselmi et al. 2013; Pierce and Wallace 1971), or cultured in 3D-reconstructed biological microenvironments mimicking normal tissue architecture (Willhauck et al. 2007; Maffini et al. 2004), undergo apoptosis and differentiation, eventually ending in the reprogramming of a “normal” phenotype (Mintz and Illmensee 1975; Hendrix et al. 2007; Kenny and Bissell 2003).

Notably, the fate of transplanted tumors is highly dependent on the characteristics of the microenvironment. Indeed, even when highly metastatic cancer cells are transplanted, no tumors arise when they are introduced into an embryonic field (Lee et al. 2005). Additionally, several studies have highlighted that the likelihood of a successful tumor transplant is highly dependent on the age of the microenvironmental stroma (Maffini et al. 2005; Marongiu et al. 2014). Usually, this evidence is either dismissed by SMT supporters or ignored as an “odd” exception. Within the TOFT framework, by contrast, such paradoxical data constitute a pivotal proof of concept (Bizzarri and Cucina 2014).

Analogous cases have been provided by studies stressing the importance of endogenous, weak electromagnetic fields in regulating morphogenesis, left–right patterning and many other developmental processes (Pai et al. 2015; Tosenberger et al. 2015). Moreover, one study concluded that depolarization of glycine receptor-expressing cells in the Xenopus laevis neural crest may induce a fully metastatic melanoma without any involvement of a mutation, carcinogen, or DNA damage (Blackiston et al. 2011). Deregulation of the bioelectric environment may rewire protein interaction networks (Taylor et al. 2009), and comprehensive reviews of the role of bioelectric processes in cancer, highlighting how bioelectric activity is a constitutive partner of the morphogenetic field during cancer transformation, have also been published (Chernet and Levin 2013).

Overall, these data point out that: a) the MF may counteract the supposedly carcinogenic effects of point mutations. Indeed, despite the many ‘biochemical mistakes’ occurring within cells as a consequence of ‘altered’ gene activity, the MF may efficiently restore a normal phenotype by inhibiting/reverting the malignant one. b) Disturbed interactions among cells and their microenvironment lead to a ‘deregulated’ MF, and ultimately to the emergence of cancer, even in the absence of any mutation (Baker and Kramer 2007). Another example of these interactions became evident when undifferentiated embryonic stem cells were transplanted into the rat brain in the hemisphere opposite an ischemic injury: the transplanted cells migrated along the corpus callosum towards the damaged tissue and differentiated into neurons in the border zone of the lesion (Erdö et al. 2003). In the homologous mouse brain, the same murine embryonic stem cells did not migrate but produced highly malignant teratocarcinomas at the site of implantation. The authors concluded that this study “demonstrated that the interaction of embryonic cells with different microenvironments determines whether regeneration or tumorigenesis is promoted”. This is precisely what TOFT posits: cancer development is strongly reliant on the dynamic interactions occurring among cells and their microenvironment. Additionally, those results further support the notion that carcinogenesis must be viewed as ‘development gone awry’ (Soto et al. 2008b).

4 Basic Premises

Quiescence or proliferation as the default state? From the experimental point of view, SMT posits that cancer is a “cell-based disease”, whereby a normal cell accumulates over time mutations that affect the control of its proliferation, thus becoming a ‘cancer cell’ (Weinberg 1998). According to this model, the ‘default state’ of a cell is quiescence, and proliferation must therefore be actively promoted (through ‘signaling molecules’, like ‘growth factors’ and ‘oncogenes’). The second SMT premise assumes that mammalian cells are also in a resting (immobile) state; hence, motility too must be actively triggered (Varmus and Weinberg 1993).

From an evolutionary point of view, the SMT premises are counterintuitive, given that proliferation is the default state in both prokaryotes and unicellular eukaryotes (Luria 1975). Why would metazoan cells have changed their default state? No cogent rationale has been advanced to explain this alleged change in strategy. Abundant experimental data support the hypothesis of proliferation as the default state of mammalian cells: findings from hormone-dependent cancer cells (Sonnenschein et al. 1996), lymphocytes (Yusuf and Fruman 2003), hematopoietic cells (Passegué and Wagers 2006), and especially stem cells (Ying et al. 2008) all point in this direction.

An analysis of the default-state concept has recently been performed by Rosenfeld (2013), who acknowledges its relevance in biology while noting that “there has been no attempt in the literature to provide a more or less crisp definition” of this notion. Rosenfeld points out that any attempt to provide a compelling definition should be framed within the context to which the cell belongs. Given that “the phenotypic traits of individual cells are shaped by interactions within their respective communities”, the “default states of the cells freed from the restraints of tissue structure may not be identical, or even similar, to those that are densely packed and immobilized in tissue”. Consequently, the search for a univocal definition of the default state “is elusive”, as the default state is “governed by some external layer of control or by supervisory authority”. On this view, the ‘default state’ of a cell would change according to the context in which the cell is positioned. Admittedly, many (external) constraints can effectively modify the proliferative status of a cell population. For instance, the usual milieu in which cells are cultured and studied (i.e., in vitro conditions) is an artificial one, and it can be inferred that the alleged mandatory requirement for ‘growth factors’ to sustain proliferation in vitro is a consequence of that context (Sonnenschein and Soto 1999). But this argument is unrelated to the concept of the default state, as this notion refers to “the state that needs not to be actively maintained” (Huang 2009). This definition establishes a conceptual equivalence between the default state and the physical concept of inertia: a cell does not modify its proliferative state until external forces (constraints) supervene to change it. This is precisely what happens with mammalian cells located in their ‘natural’ environment, i.e., a tissue.
Indeed, experimental data show that proliferation is physiologically under the control of negative feedback regulators. The hypothesis of tissue control of proliferation (chalones) goes back to the 1950s, and it gained some acceptance in the 1990s (Elgjo and Reichelt 2004) with the discovery of myostatin and its role as a feedback controller of muscle growth (Lee and McPherron 1999). Since then, several other chalones have been identified in various tissues, many of which are claimed to be members of the TGFβ family (Gamer et al. 2003).

Compelling evidence showing that proliferation of estrogen-sensitive breast cancer cells is under negative control was provided in the 1980s when estradiol was shown to increase the proliferation rate of estradiol-sensitive cells by neutralizing a specific serum-borne inhibitor (Soto et al. 1986; Soule and McGrath 1980). Similar results have been obtained in several other tissues, including prostate cancer, hematopoietic cells, liver, leukemia (Sonnenschein et al. 1989, 1996; Mallucci and Wells 1998; Passegué and Wagers 2006; Yusuf and Fruman 2003; Wang et al. 2004; Lacorazza et al. 2006). Furthermore, cell proliferation has been called the “ground state” in the context of embryonic cells, because it is inherent to the system, and does not require stimulation (Wray et al. 2010; Ying et al. 2008).

Overall, this evidence prompted E. Parr (2012) to state that “a key difference between these models (TOFT vs. SMT) is that quiescence is postulated to be the default state of ‘normal cells’ in SMT, whereas proliferation is assumed as the default state of cells in TOFT” (Parr 2012). Parr also noted that “it seems highly unlikely that complex, ligand-dependent signaling pathways emerged de novo as a requirement for growth”. Accordingly, mammals are “systems in which growth factors can be withheld to control growth. Thus, the quiescence of cultured metazoan cells in the absence of growth factors would not reflect a passive lack of growth stimulation but rather an active process of growth inhibition”. This conclusion “further suggests that gene products that appear to ‘promote’ growth actually act to reveal the cell’s innate tendency to grow.”

Similar considerations apply to motility. In both prokaryotes and unicellular eukaryotes, the default state is motility. Why would cells in multicellular organisms have lost this property? Indeed, embryonic cells, as well as adult somatic cells, display motility in different settings: development, connective tissue remodeling, apoptotic processes, wound repair (Zajicek et al. 1985; Worbs and Förster 2009). Therefore, the motility displayed by cancer cells represents only the recovery of an intrinsic cellular function.

5 Is SMT Facing a Crisis? The Need for a Paradigm Shift

Bedessem and Ruphy (2015) are reluctant to admit that SMT is experiencing an existential crisis. However, the reliability of the premises adopted by SMT has been consistently criticized since the 1970s (Pierce et al. 1978; Coleman et al. 1993; Clark 1995; Wigle and Jurisica 2007).

Moreover, the SMT paradigm is perceived as inadequate by an increasing number of scientists (Barcellos-Hoff and Rafani 2000; Barclay et al. 2005; Arnold et al. 2002; Baker 2014). In fact, even Robert A. Weinberg, a long-term advocate of the gene-centric paradigm in cancer research, has recently acknowledged that the evidence expected to vindicate the explanations provided by SMT has been disappointing. In his words, “half a century of cancer research had generated an enormous body of observations […] but there were essentially no insights into how the disease begins and progresses” (Weinberg 2014). Yet, two years after this explicit admission of failure, the search for mutated oncogenes and/or tumor suppressor genes continues unabated. Weinberg added: “…But even this (the gene-centric view) was an illusion, as only became apparent years later […] the identities of mutant cancer-causing genes varied dramatically from one type of tumor to the next […] Each tumor seemed to represent a unique experiment of nature”. Indeed, experimental data cast doubt on the role of gene mutations in cancer, “suggesting that mechanisms for cancer initiation are broader than is typically thought” (Weinberg 2014). This candid assessment by a thought-leader who sided with SMT for the last four decades favors discarding the SMT and adopting a different explanatory model, thus generating an opportunity to explore alternatives that might lead to a genuine “paradigm shift” (Strohman 1997; Sonnenschein and Soto 2000; Baker 2015).

According to Kuhn (1962), a paradigm shift has several properties. The first one is incommensurability, whereby the scientists on either side of the divide have great difficulty in understanding the other’s point of view or reasons for adopting the premises of the competing side. Speaking of ‘irreconcilability’ regarding the two competing theories (SMT and TOFT) might be a more appropriate characterization of the current situation. This irreconcilability stems from the radical divergence among the basic premises on which the different paradigms rely. Copernican theory was irreconcilably different from the Ptolemaic one, given that the central place in the solar system was occupied by the Earth in the latter and by the Sun in the former. It is obviously impossible to support these two opposing hypotheses simultaneously by constraining them into a ‘unified’ cosmological model. By analogy, SMT and TOFT cannot be merged because the premises on which those frameworks rely are incompatible: the default state of the cell is considered to be either quiescence (according to SMT) or proliferation (according to TOFT). The two default states cannot be operational at the same time. Thereby, from Kuhn’s perspective, the two theories should be considered mutually irreconcilable.

The second property of a paradigm shift is the accumulation of contradictory results, whereby the currently hegemonic paradigm ultimately generates a body of observations that not only fails to support that paradigm, but also points to obvious weaknesses in its method and theoretical outlook. Indeed, as previously seen, SMT cannot provide reliable explanations for troubling and contradictory results. Examples of such results include carcinogenesis that does not depend on genotoxicity, the presence of mutated genes in normal tissues, the lack of mutations in a significant fraction of tumors, the genomic heterogeneity of cancer cells from the same tumor sample, and tumor reversion after exposure to embryonic or otherwise modified morphogenetic fields (Baker 2015).

The above-mentioned arguments solidify the conclusion that TOFT and SMT encompass an irreducible competition recognizable at the experimental, epistemological and philosophical levels.

  1. (a)

    Experimental A growing number of inconsistencies has accumulated within the SMT paradigm. In order to accommodate those contradictory results, concepts and results borrowed from experiments centered on cell-microenvironment models have been introduced with the aim of correcting SMT rather than overtly rejecting it (Bissell and Radisky 2001; Laconi 2007). Yet, even these attempts have failed to provide a rigorous explanation. It is then time to abandon the ‘oncogene paradigm’ and move on (Bizzarri et al. 2008; Sonnenschein and Soto 2000).

  2. (b)

    Epistemological Scientific theories need to be tested and, if falsified, discarded (Ayala 1968). Yet, merging two distinct frameworks would result in an epistemological cul-de-sac, given that such an approach would impede distinguishing useful from useless data.

  3. (c)

    Philosophical From a philosophical point of view, SMT and TOFT conceive the causality principle in opposite manners. For SMT, causality resides only within the genome. TOFT, instead, posits that causality resides in non-linear dynamics, involving several components and different levels of causality, spanning from the molecular to the tissue level. These differences between TOFT and SMT are indeed a specific case of the opposition between a ‘reductionist’ and an ‘organicist’ approach. Consequently, “the projection of the controversy on a metaphysical ground”, rather than being “specious and incoherent”—as claimed by Bedessem and Ruphy (2015)—becomes wholly justified.

Obviously, adhering to one of the paradigms does not imply that ‘raw’ data gathered by experimental studies based on the faulty one (in this case, SMT) must be discarded. Instead, those results can be re-interpreted according to the new paradigm, within which they are likely to acquire a ‘different meaning’ (Baker 2014). Yet, the chasm between the two theories cannot be bridged by a strategy that insists on expanding the search for elusive ‘oncogenes’ and/or ‘regulatory pathways’ (Huang 2004). Addressing cancer complexity does not require more sophisticated mathematical models or futuristic technologies. Instead, as happened at the birth of thermodynamics, a more ‘coarse-grained’ attitude may have a better chance of providing reliable comprehension, by integrating observations at different levels and providing new insights into the principles by which biological organization is dynamically shaped (Bizzarri et al. 2013; Longo and Montévil 2014).

Finally, these two concurring factors—irreconcilability and the accumulation of contradictory results—may explain why paradigm shifts encounter resistance from the old guard.

6 Conclusions

Whereas SMT encompasses irresolvable conundrums, TOFT is gaining momentum, as attested by the growing interest it has garnered among scientists worldwide (Baker 2015; Cooper 2009). The main issue on the agenda, as repeatedly urged by Soto and Sonnenschein (2011), is to submit SMT to verification. There is, after all, an ethical issue embedded in the structure of science itself, one that is often ignored by governmental and corporate structures as funders of research. This issue includes the imperatives to seek evidence that could disprove one’s hypothesis (Popper 2002), and to consider the whole, and not just selective, evidence (Whitehead 1925). Meeting this challenge will require steady support of adequate resources, realistic management of the hype that has surrounded the cancer field, and a humble attitude toward the years spent following false leads.