In the past decade, a lot has happened in the realms of psychiatric classification, personality assessment, and neuroscience. One of the more important developments occurred in 2010, when representatives of the National Institute of Mental Health (NIMH) initiated the Research Domain Criteria (RDoC), a new research framework for studying mental disorders (Cuthbert & Insel, 2013; Cuthbert & Kozak, 2013; Insel et al., 2010; Kozak & Cuthbert, 2016). A central tenet of RDoC is the conceptualization of mental disorders as brain disorders (Insel et al., 2010). The RDoC was developed as a response to increasing concerns (see Cuthbert & Insel, 2010; Insel et al., 2010) about the most recent version of the premier psychiatric classification system, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; APA, 2013), produced and published by the American Psychiatric Association (APA). For a review of the development of the different DSM editions, see Blashfield, Keeley, Flanagan, and Miles (2014). To a certain extent, the DSM and RDoC present clashing diagnostic approaches (Lilienfeld & Treadway, 2016; Zachar & Kendler, 2017), insofar as they differ in fundamental assumptions about the nature of psychopathology. With that said, it should be noted that the RDoC is more research oriented, while the DSM serves the broader purpose of clinical application (cf. Lilienfeld & Treadway, 2016). The goal of this chapter is to describe (a) the conceptualization of personality disorders (PDs) from DSM-III (APA, 1980) onward, including the differences between categorical and dimensional models; (b) some of the fundamental differences between the DSM-5 and RDoC perspectives; and (c) conceptual and psychometric challenges for the RDoC framework, along with a possible alternative to it, namely the network approach (Borsboom & Cramer, 2013). It should be noted that many of the developments in personality psychology, both normal and abnormal, have paralleled, and to some extent caused (Krueger, 2013), developments in psychopathology research more generally. While the focus of this chapter is on personality, many of the issues raised pertain to psychopathology in general as well as to personality in particular.

The DSM-III Paradigm

For a long time, mental disorders have been thought of as natural categories, or what are often referred to as taxa (Meehl, 1992). Taxa refer to discrete differences between conditions, such that one either has schizophrenia or does not. A typical example, taken from medicine, is that one is either pregnant or not; one cannot be a little bit pregnant. Hence, pregnancy is a taxonic variable. A dimensional model, on the other hand, posits that an individual can present with a disorder (e.g., schizophrenia) to some degree. Evidence suggests that most DSM disorders are dimensional in nature (Haslam, Holland, & Kuppens, 2012; see also Borsboom et al., 2016), but it is only in recent years that this fact has received significant attention (Krueger et al., 2018).

The prevailing paradigm for psychiatric classification was established in the DSM-III (APA, 1980), in which the number of mental disorder categories increased from 163 to 224. DSM-III also introduced two major features not present in previous nosologies: diagnostic criteria for each mental disorder category and a multiaxial system (Blashfield et al., 2014). Editions prior to DSM-III did not use explicit diagnostic criteria but instead relied only on prose definitions for each mental disorder, which led to poor inter-rater reliability (Blashfield et al., 2014). The multiaxial system was meant to be useful for providing a comprehensive diagnosis, as each patient was expected to be diagnosed on five separate axes (see Blashfield et al., 2014). The five axes covered: (I) the presence of mental disorder categories (e.g., schizophrenia); (II) personality dysfunction and intellectual disability; (III) medical disorders relevant to the patient’s psychiatric presentation; (IV) stressors in the social environment; and (V) an assessment of overall adaptive functioning. The DSM-III approach is often referred to as neo-Kraepelinian (Blashfield, 1984), as it shared the philosophy of Emil Kraepelin (1856–1926), in that it moved away from psychoanalysis and toward the methods of traditional medicine. The DSM-III, like Kraepelin, emphasized signs (observable manifestations), symptoms (subjective reports), and natural history (trajectory over time), instead of the psychoanalytic concepts that were more influential in DSM-I and DSM-II (see Blashfield et al., 2014; Lilienfeld, Smith, & Watts, 2013; Lilienfeld & Treadway, 2016).

The DSM-IV (APA, 1994) grew in size, but not much changed in the approach or underlying philosophy. One of the things that did change was a gradual move toward polythetic criteria, which, as opposed to monothetic criteria, means that signs and symptoms are neither necessary nor sufficient for diagnosis (Lilienfeld et al., 2013). Monothetic criteria are useful in that they maximize homogeneity within PD categories, because all individuals in a diagnostic category share the same traits. Conversely, the polythetic approach leads to diagnostic heterogeneity. For instance, borderline PD is diagnosed when at least five of its nine criteria are met, which allows 256 different criteria combinations, all yielding the same disorder. Remarkably, for some diagnoses in the DSM-IV, it is possible for two patients to have no overlap in diagnostic criteria (Lilienfeld et al., 2013). This is not unique to PD categories. For instance, post-traumatic stress disorder combinations far outnumber those of PDs (Young, Lareau, & Pierre, 2014). For a clinician to be expected (perhaps required) to provide the same treatment for individuals who share no overlap in diagnostic criteria clearly points to an inherent problem in the DSM-IV model. Accordingly, polythetic criteria have been criticized for creating overly heterogeneous groups (Krueger, 2013), while monothetic criteria suffer from the opposite problem, creating overly narrow groups and thus not allowing for conditional indicators that may not be present in all cases (Widiger & Frances, 1985). The issues surrounding both monothetic and polythetic criteria are well established (see, e.g., Cooper, Balsis, & Zimmerman, 2010; Pfohl, Coryell, Zimmerman, & Stangl, 1986).
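To make the arithmetic explicit, the figure of 256 follows from counting every way of satisfying at least five of the nine borderline criteria:

$$\binom{9}{5}+\binom{9}{6}+\binom{9}{7}+\binom{9}{8}+\binom{9}{9} = 126 + 84 + 36 + 9 + 1 = 256.$$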

The DSM-IV model consists of 10 PDs: paranoid, schizoid, schizotypal, antisocial, borderline, histrionic, narcissistic, avoidant, dependent, and obsessive-compulsive. There is also an 11th category, PD not otherwise specified (PD–NOS), which was meant to be used when none of the other PDs fit a patient’s symptoms. Because patients often do not fit neatly into PD categories, they are either given multiple diagnoses (which is known as the problem of comorbidity; see Cramer, Waldorp, van der Maas, & Borsboom, 2010) or placed in the PD–NOS category (Verheul & Widiger, 2004). Both approaches cause substantial problems in clinical decision making: PD–NOS is nonspecific, and comorbidity means that an individual may fulfill criteria for multiple diagnoses, raising the question of which diagnosis to treat, and how.

DSM-5: Empirical Evidence, Opposition to Innovation, and Steps Forward

One major effort undertaken in the DSM-5 was the attempt to replace the previous (i.e., DSM-IV-TR, text rev.; APA, 2000) categorical model of PD with a dimensional model. This development was predicated on accumulating evidence that PDs are better conceptualized (and modeled statistically) as continuous rather than discrete phenomena (e.g., Markon, Chmielewski, & Miller, 2011; Trull & Durrett, 2005). These findings ran in parallel with the insight that taxonomies of normal personality could help shape models of pathological personality (see, e.g., Markon, Krueger, & Watson, 2005; Widiger & Mullins-Sweatt, 2009; Widiger & Trull, 2007).

In the traditional medical disease model of older DSM versions, diagnoses are discrete categories, meaning that a patient with major depression is qualitatively different from someone who is not suffering from major depression. In a dimensional model, there is a quantitative, continuous difference in level of depression, meaning that the boundary between normal and abnormal is inherently fuzzy. Ultimately, the effort to replace the categorical model fell short and the DSM-IV-TR personality disorder section was copied verbatim into the DSM-5.

The proposed dimensional model was placed in a separate section called “Emerging Measures and Models” (APA, 2013, p. 729). Plenty has been written about issues within the DSM-5 task force (Frances, 2009; Spitzer, 2009), the inner workings of the Personality Disorder Work Group (Gunderson, 2013; Krueger, 2013; Skodol, Morey, Bender, & Oldham, 2013; Zachar, Krueger, & Kendler, 2016), including the resignation of two Work Group members (Livesley, 2012; Verheul, 2012), and finally reflections about the aftermath and efforts to improve future revisions of psychiatric classification systems (First, 2014; Gunderson, 2013; Lilienfeld, 2014a; Miller & Lynam, 2013; Mullins-Sweatt, Lengel, & DeShong, 2016; Pincus, 2013; Widiger & Crego, 2015). The intricacies of both the successes and failures of the DSM-5 PD framework are highly important, but the utility of the dimensional model, including its consequences for the RDoC framework, is of greater relevance here.

Dimensions of Normal and Abnormal Personality

During the DSM-5 revision process, one concern was that much of the literature reviewed in support of a dimensional model contained studies of normal rather than clinical populations. Much of this evidence, undoubtedly, relied on the Big Five (Goldberg, 1990) or Five Factor Model (FFM; Costa & McCrae, 2017; McCrae & John, 1992), which includes five broad bipolar personality domains: extraversion, agreeableness, openness to experience, neuroticism, and conscientiousness. Together, these domains constitute an impressive descriptive taxonomy of personality phenotypes as well as reliable predictors of a great variety of consequential outcomes (Grucza & Goldberg, 2007; Ozer & Benet-Martinez, 2006). The FFM yields individual profile scores, meaning that an individual is assessed on each of the five dimensions and is placed somewhere along a normal distribution on each. This procedure creates a more refined picture of an individual’s personality than placement in a discrete category. In other words, knowing that one belongs to the 87th percentile in extraversion is more informative than being placed in a category of “extraverts” into which everyone with an extraversion score above the mean is placed. Translated into the language of psychopathology, knowing that a patient belongs to the 95th percentile on a specific diagnostic marker (e.g., trait antagonism) provides much more substantive information (cf. Markon et al., 2011) than the dichotomous information that a patient does or does not present with a diagnostic marker.
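As a minimal illustration of the information lost when a continuous trait is dichotomized (a simulation sketch with made-up numbers, not data from the studies cited above), compare how well the full dimensional score and a simple above/below-the-mean category predict a correlated outcome:

```python
import numpy as np

# Simulation sketch (made-up numbers): dichotomizing a continuous trait score
# discards predictive information relative to retaining the full dimensional score.
rng = np.random.default_rng(0)
n = 100_000

trait = rng.standard_normal(n)                        # continuous trait (e.g., extraversion)
outcome = 0.5 * trait + rng.standard_normal(n)        # outcome partly driven by the trait

dichotomized = (trait > trait.mean()).astype(float)   # "extravert" vs. "not extravert"

r_continuous = np.corrcoef(trait, outcome)[0, 1]
r_dichotomized = np.corrcoef(dichotomized, outcome)[0, 1]

print(f"r with the continuous trait score: {r_continuous:.2f}")    # approximately .45
print(f"r with the dichotomized category:  {r_dichotomized:.2f}")  # approximately .36
```

The attenuated correlation for the dichotomized score reflects a general statistical result: splitting a continuous variable at the mean throws away within-category differences that carry predictive information.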

In commonly used personality inventories, the five domains are further divided into facets. These facets represent personality at a more detailed level of analysis. In recent years, describing personality at different levels of abstraction has become increasingly common, with analyses of “nuances,” which refer to individual questionnaire items (Mõttus, Kandler, Bleidorn, Riemann, & McCrae, 2017), aspects (DeYoung, Quilty, & Peterson, 2007), and higher-order factors, or so-called metatraits, such as alpha-beta (Digman, 1997), stability-plasticity (DeYoung, 2006), and the general factor of personality (Musek, 2007; Rushton & Irwing, 2008). Considerable progress has been made in understanding how these different levels of abstraction relate to both normal and abnormal personality (Krueger & Markon, 2014; Markon et al., 2005; Wright & Simms, 2014).

During the process of updating the DSM-5, the Personality Disorder Work Group ultimately proposed a model inspired by the FFM, which included five broad maladaptive domains (with adaptive Big Five equivalents in parentheses): negative affectivity (neuroticism), detachment (introversion), antagonism (low agreeableness), disinhibition (low conscientiousness), and psychoticism (openness to experience). In addition to these five broad domains, the model is further divided into 25 facets in total. During the process of validating this model, an instrument called the Personality Inventory for DSM-5 (PID-5) was created (Krueger, Derringer, Markon, Watson, & Skodol, 2012; Suzuki, Samuel, Pahlen, & Krueger, 2015), which over the years has proven to be psychometrically sound (Hopwood, Thomas, Markon, Wright, & Krueger, 2012; Morey, Krueger, & Skodol, 2013; Thomas et al., 2013) and more clinically useful than the DSM-IV model (Morey, Skodol, & Oldham, 2014). It should also be noted that substantial effort has been put into integrating normal and abnormal personality models, both historically (e.g., Cloninger, 1987; Cloninger, Svrakic, & Przybeck, 1993; Eysenck, 1947) and more recently (Conway, Latzman, & Krueger, 2019; Markon et al., 2005; Widiger & Simonsen, 2005), which has ultimately yielded FFM-based models that describe both normal and abnormal personality functioning across multiple levels of abstraction (Caspi et al., 2014; Hengartner, Ajdacic-Gross, Wyss, Angst, & Rössler, 2016; Kendler et al., 2017; Krueger & Markon, 2014; Mõttus et al., 2017).

Research Domain Criteria

When the NIMH launched the RDoC in 2010, it particularly emphasized that mental disorders should be viewed as brain disorders, or more specifically as disorders of brain circuits (Insel et al., 2010). Furthermore, it was assumed that these disordered circuits are identifiable using neuroscientific tools, such as electrophysiological measures, functional neuroimaging, and so on. Against the backdrop of what was arguably a failure of the DSM-5 revision, the RDoC project garnered a lot of interest and in the years since has generated a number of studies across multiple scientific disciplines (see, e.g., Carcone & Ruocco, 2017).

The RDoC mission is relatively fluid, insofar as it is not looking to replace current psychiatric nosologies but rather to allow researchers to study psychopathology within a given framework. Currently, this framework contains five broad domains: negative valence systems, positive valence systems, cognitive systems, systems for social processes, and arousal/regulatory systems. Each domain can be studied across seven units of analysis (or increasing levels of abstraction): genes, molecules, cells, circuits, physiology, behavior, and self-reports. Together, the domains and units of analysis create a two-dimensional matrix (see NIMH, 2017). The RDoC proposal thus allows researchers to study alternative models of psychopathology without being constrained by the traditional categorical system of the DSM (for pros and cons of diagnostic nosologies, see Markon, 2013; Zachar, 2013).
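To give a concrete, if simplified, picture of that structure, the sketch below represents the matrix as a grid of domain-by-unit cells; the labels come from the chapter, while the data structure and the example entry are purely hypothetical illustrations:

```python
# Hypothetical sketch only: the RDoC matrix as a grid of domain-by-unit cells,
# where each cell collects the constructs and measures a research program places there.
domains = [
    "negative valence systems",
    "positive valence systems",
    "cognitive systems",
    "systems for social processes",
    "arousal/regulatory systems",
]
units_of_analysis = [
    "genes", "molecules", "cells", "circuits",
    "physiology", "behavior", "self-reports",
]

# Each (domain, unit) cell starts empty and is filled as findings accumulate.
rdoc_matrix = {(domain, unit): [] for domain in domains for unit in units_of_analysis}

# A hypothetical entry, purely for illustration:
rdoc_matrix[("negative valence systems", "self-reports")].append(
    "a validated fear/anxiety questionnaire"
)
```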

Issues Facing the RDoC Framework

In an excellent overview, Lilienfeld (2014b) presented some of the advantages the RDoC holds over the DSM, but also enumerated a number of methodological and conceptual challenges for the RDoC framework (see also Wakefield, 2014). Lilienfeld (2014b) posited four challenges: (a) an overemphasis on biological units and measures; (b) neglect of measurement error; (c) biological and psychometric limitations of endophenotypes; and (d) distinguishing biological predispositions from their behavioral manifestations. I briefly reiterate Lilienfeld’s (2014b) challenges (c) and (d), and discuss (a) and (b) in more detail in the subsequent sections.

First, the term endophenotype refers to an internal process that can be objectively measured (Gottesman & Gould, 2003). Examples include biochemical markers, findings from brain imaging, or various neuropsychological tests (Lilienfeld, 2014b). Endophenotypes are perhaps more easily understood when distinguished from exophenotypes, which refer to the more traditional signs and symptoms (e.g., behaviors) found in DSM criteria. The rationale for using endophenotypes was to fill the gap between genes and the disease process, as endophenotypes are assumed to be closer to the gene than exophenotypes are (Gottesman & Gould, 2003). There are many issues with endophenotypes (see, e.g., Lilienfeld et al., 2013; Lilienfeld, 2014b), but the main concern may be that endophenotypes seem to be as complex as behavioral traits (Iacono, Malone, & Vrieze, 2017), which naturally limits their utility. However, these issues are just that, issues, and should not be seen as problems severe enough to rule out further research. In fact, for a more optimistic view regarding endophenotypes, see Miller, Rockstroh, Hamilton, and Yee (2016).

The next challenge, that of distinguishing biological predispositions from their behavioral manifestations, refers to a distinction between basic tendencies and characteristic adaptations. This distinction is meant to highlight a difference between, for instance, one’s level of neuroticism (i.e., a basic tendency), and how one adapts to one’s own neuroticism (i.e., the characteristic adaptation). Lilienfeld (2014b) writes that “an individual with high levels of negative emotionality may manifest this predisposition in an anxiety disorder; alternatively, she may manifest it in artistic productivity, which is associated with a disposition toward negative emotionality” (p. 135). This distinction has consequences for the RDoC, as similar predispositions may yield very different behavioral manifestations (cf. the concept of multifinality in the subsequent section). Progress in the RDoC framework may be impeded as physiological risk factors of psychopathology need to be separated from the psychopathology itself (Lilienfeld, 2014b; Wakefield, 2014).

Biological Overemphasis

As mentioned previously, the RDoC conceptualizes mental disorders as “disorders of brain circuits.” By now, it is axiomatic that all psychological phenomena are mediated by the brain, but as Lilienfeld (2014b) notes, biological mediation is not the same thing as biological etiology. A mental disorder may be caused by (have etiological roots in) environmental factors at time T1 and manifest neurologically at time T2. Neither structural nor functional imaging at T2 will shed light on the cause(s) at T1. The psychometrician Frederic Lord is sometimes paraphrased as having said that “the numbers don’t know where they came from” (Lord, 1953). This is true for all kinds of data, including data collected using neuroscientific methods. It is up to the researcher to provide the explanatory framework within which the collected data are best understood.

Furthermore, Lilienfeld (2014b) notes that data drawn only from the self-report domain fall outside the RDoC approach (Cuthbert & Kozak, 2013). This is potentially a problem because, in many cases, self-report questionnaires provide higher validity than biological measures. For instance, Iacono et al. (2017, p. 117) maintain that “there are no biomarkers that can be used clinically to confirm a diagnosis or identify a given individual as at risk, and it is not clear that the candidate biomarkers that exist do a better job identifying cases than existing interview methods.” Accordingly, it seems reasonable to complement, but not supplant, self-report and interview methods with other methods (e.g., physiological measures), but these methods too need to be thought of as fallible indicators.

It has long been known that developmental trajectories are neither necessarily linear nor causally deterministic (Cicchetti & Blender, 2004). The concepts of equifinality and multifinality reflect this realization. Equifinality refers to the fact that multiple pathways can lead to the same outcome, and multifinality to the fact that the same pathway may lead to multiple outcomes (Cicchetti & Toth, 2009). Similar concepts exist in the cognitive neuroscience literature, namely, that different brain structures may process information in ways that lead to similar outcomes and that similar processing may lead to different outcomes, depending on the brain structure (see, e.g., Sarter, Berntson, & Cacioppo, 1996). Crucially, the study of psychopathology needs to consider not only multiple levels of explanation but also how information across multiple levels interacts over time. This idea traces back to the Cattell data box (Cattell, 1946), in which persons and measures (e.g., personality items) were modeled across time, thus allowing both between-subject and within-subject comparisons. This idea never caught on in mainstream research, but it has recently been revived (e.g., Molenaar & Campbell, 2009) and put to empirical use (Wright et al., 2017).
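To make the data-box idea concrete, the following sketch (with entirely hypothetical dimensions and simulated values) arranges observations as a persons-by-variables-by-occasions array, from which both a between-person slice and a within-person slice can be taken and analyzed separately:

```python
import numpy as np

# Minimal sketch of a Cattell-style data box: persons x variables x occasions.
# Dimensions, variables, and values are hypothetical, for illustration only.
rng = np.random.default_rng(1)
n_persons, n_variables, n_occasions = 50, 4, 30
data_box = rng.standard_normal((n_persons, n_variables, n_occasions))

# Between-person view (R-technique): persons x variables, here averaged over occasions.
between_person = data_box.mean(axis=2)        # shape: (50, 4)

# Within-person view (P-technique): variables x occasions for a single person.
within_person = data_box[0]                   # shape: (4, 30)

# The correlational structure need not be the same in the two views, which is
# one reason between-subject and within-subject findings can diverge.
print(np.corrcoef(between_person, rowvar=False).round(2))  # 4 x 4, between persons
print(np.corrcoef(within_person).round(2))                 # 4 x 4, within one person
```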

Measurement Error

A final concern is the potential neglect of measurement error. Lilienfeld (2014b) captures some of the core issues inherent in an undue reliance on neuroscientific methods, for instance, the modest test-retest reliability of functional magnetic resonance imaging (fMRI) (Bennett & Miller, 2010; see also Friedman et al., 2008), one of the most commonly used tools in neuroscientific research. While these issues are obviously very important, I will focus on a related idea that is consistent with the RDoC approach, namely, that psychiatry should move into the laboratory in order to become more like medicine (see Widiger & Clark, 2000, for a review). This is an interesting idea, which I believe ties into Cronbach’s (1957) classic distinction between two historic streams of psychology that are arguably present to this day: experimental and correlational psychology. Experimental psychology is mainly concerned with manipulating variables in order to understand some process. Correlational psychology is mainly concerned with phenomena that we cannot manipulate, for instance, individual differences in personality, and therefore focuses on describing such differences.

This division was lamented by Cronbach, who believed that combining the two streams could eventually lead to a unification of psychology. Others take the view that this distinction emerges as a consequence of the fundamentally different levels of explanation on which experimental and correlational psychologists operate (Borsboom, Kievit, Cervone, & Hood, 2009; see also Hedge, Powell, & Sumner, 2018). The core issue is whether, and if so how, findings from the multiple levels of explanation in the RDoC matrix (e.g., genes, physiology, behavior, and self-reports) can be integrated, given that experimental tasks and questionnaires intended to measure the same construct may not produce reliable individual differences (Hedge et al., 2018). Laboratory measures have long been known to be unreliable (Epstein, 1979), a consequence of large measurement error and high situational specificity. This problem does not seem to be unique to experimental psychology, as it is also present in contemporary neuroscience (Enkavi et al., 2019; Hajcak, Meyer, & Kotov, 2017; Luking, Nelson, Infantolino, Sauder, & Hajcak, 2017).
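The practical consequence of unreliability can be stated with Spearman’s classic correction for attenuation (a standard psychometric result, not one drawn from the studies cited here): the correlation that can be observed between two measures is bounded by their reliabilities,

$$ r_{xy}^{\mathrm{obs}} = r_{xy}^{\mathrm{true}} \sqrt{r_{xx}\, r_{yy}}, $$

so that, for example, two measures of the same construct with reliabilities of .70 and .40 cannot be expected to correlate much above $\sqrt{.70 \times .40} \approx .53$, even if the underlying constructs are perfectly related.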

To exemplify this problem, consider that both behavioral and self-report measures of impulsivity independently predict impulsive behaviors, yet the relation between self-report and behavioral tasks is low (Sharma, Markon, & Clark, 2014), which suggests a lack of coherence between these two levels of explanation. Similarly, there appears to be essentially no relation between laboratory and self-reported empathy (Melchers, Montag, Markett, & Reuter, 2015), which may explain the unexpected absence of a relation between self-reported empathy and aggression (Vachon, Lynam, & Johnson, 2014). Both impulsivity and empathy are highly important for various personality disorders (e.g., narcissistic and antisocial) and have been subjected to a great deal of research. Nevertheless, the current state of the evidence is that self-report and behavioral measures of impulsivity and empathy do not measure the same phenomena. This is a critical problem for the RDoC, as findings from one level (e.g., laboratory/behavioral) may not correspond meaningfully with findings from another level (e.g., self-report).

There are many reasons why these effects may occur. One may be the fundamentally different types of data involved (Borsboom et al., 2009; Hedge et al., 2018); another may be that a construct has been insufficiently defined or delineated (cf. Palminteri & Chevallier, 2018; Rossiter, 2005). Researchers must be aware that constructs may behave very differently across different levels of abstraction. Using multiple indicators is a step in the right direction, but a combination of a priori theorizing (Vaidyanathan, Vrieze, & Iacono, 2015) and latent variable modeling (e.g., Patrick et al., 2013) is preferable, as it provides the opportunity to analyze different sources of variance in a theory-driven way. Even so, it was recently argued that clinical neuroscience may be lagging behind because of a failure to consider the basic measurement properties of the neural measures themselves (Hajcak et al., 2017). For instance, the internal consistency of fMRI measures is very rarely reported (Luking et al., 2017).

Some consideration should also be given to the differing time scales and sources of noise that characterize laboratory and self-report measures. During collection of neuroscientific data, there are many sources of noise that need filtering: the influence of physical movement, eye-blinks, heart rate, and deglutition can be major (Huettel, Song, & McCarthy, 2009; Luck, 2014). Such influences are not present in self-reports, where other types of noise are of greater concern, including method variance (Podsakoff, MacKenzie, & Podsakoff, 2012) and social desirability (Paulhus, 2002). Whether these measurement issues can be resolved by improving the conceptual domain (i.e., improving the conceptualization of constructs) or whether discrepant results are caused by fundamental differences in the nature of the collected data remains to be seen. One may hope that finding such discrepancies paves the way toward more coherent constructs and improved research (see Cooper, Jackson, Barch, & Braver, 2019, for a measurement perspective on these issues). The importance of appropriate construct validation was recently described in a systematic and accessible fashion (Tay & Jebb, 2018).

The Network Approach

The RDoC introduces a more nuanced, dimensional view of psychopathology than the DSM approach offers, but there are other alternatives as well. The RDoC ultimately posits common biological causes for diagnostic categories, whereas the recently developed network approach conceptualizes mental illnesses in terms of causal dynamics among the symptoms themselves. In other words, in the network approach (Borsboom & Cramer, 2013; Schmittmann et al., 2013), the focus of analysis has shifted from latent variables to the network itself. In conventional latent variable theory, the construct “depression” refers to a latent phenomenon that causes manifest signs and symptoms (indicators), such as sleep loss, disordered eating habits, suicidal thoughts, and so on. In the network view, depression refers to the state of a system, and the system is not a latent variable but the indicators themselves (Borsboom & Cramer, 2013). In other words, depression emerges from the fact that an individual suffers from sleep loss, disordered eating habits, and so forth. One of the benefits of the network approach is that it removes some of the conceptual issues surrounding the ontological status of mental disorders (Borsboom, Mellenbergh, & Van Heerden, 2003, 2004). The question regarding the status of the manifest signs and symptoms is believed to be easier to answer: sleep loss is sleep loss. Network theory does not need to invoke an explanatory variable (i.e., a latent variable) that in turn explains the presence of sleep loss. Nevertheless, it should be noted that the network approach does not abandon latent variables completely, insofar as it also allows for hybrid models. For example, the onset of a mental disorder may have a common cause, while the maintenance of the disorder is governed by the interactions between symptoms (Fried & Cramer, 2017).

Network theory also deals with comorbidity in an interesting way (Cramer et al., 2010). In the latent variable view, the relation between a major depressive episode (MDE) and generalized anxiety disorder (GAD) is either that the two share a root cause or that a patient has two simultaneous disorders. In the network view, comorbidity may arise because the two disorders share common symptoms. Borsboom, Cramer, Schmittmann, Epskamp, and Waldorp (2011, p. 1) provide the following example: “It is feasible that comorbidity between MDE and GAD arises from causal chains of directly related symptoms, e.g., Sleep deprivation (MDE) → fatigue (MDE) → concentration problems (GAD) → irritability (GAD).” Accordingly, network theory offers creative new solutions to old problems. The network approach is not necessarily incommensurable with either the DSM or the RDoC approach. It does, however, offer a different conceptualization of psychopathology than the other frameworks (see Borsboom & Cramer, 2013; Fried & Cramer, 2017).
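As a minimal illustration of what shifting the unit of analysis to the network itself can look like in practice (a simulation sketch using the four symptoms from the chain above; the data and edge strengths are made up, and partial correlations are only one of several ways such networks are estimated), a symptom network can be represented by the matrix of partial correlations among symptoms, with each nonzero off-diagonal entry corresponding to an edge:

```python
import numpy as np

# Minimal sketch (simulated data, hypothetical edge strengths): estimate a symptom
# network as the matrix of partial correlations among symptoms, treating each
# symptom as a node rather than as an indicator of a latent variable.
rng = np.random.default_rng(2)
symptoms = ["sleep deprivation", "fatigue", "concentration problems", "irritability"]

n = 500
data = rng.standard_normal((n, len(symptoms)))
# Build in a simple causal chain so that adjacent symptoms covary (illustrative only).
for j in range(1, len(symptoms)):
    data[:, j] += 0.6 * data[:, j - 1]

corr = np.corrcoef(data, rowvar=False)
precision = np.linalg.inv(corr)

# Partial correlation between symptoms i and j, controlling for all other symptoms.
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Nonzero off-diagonal entries are the (weighted) edges of the symptom network;
# in this chain, only adjacent symptoms should show substantial edges.
print(symptoms)
print(np.round(partial_corr, 2))
```

In a chain like this one, the partial correlations recover edges only between adjacent symptoms, which is the network-style answer to why MDE and GAD symptoms can co-occur without positing a shared latent cause.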

Conclusion

There has been significant progress in psychopathology and personality research in recent years. In summary, the empirical evidence for the superiority of a dimensional classification system over the traditional categorical system used in the DSM is overwhelming. It seems probable that future nosologies (e.g., DSM-6) will utilize dimensional classification, in particular because of its greater reliability and validity. Whether this change will entail only personality disorders or the entire range of psychopathology (see, e.g., Kotov et al., 2017) remains to be seen. Until then, the RDoC is a promising research framework, but also a difficult undertaking. A major concern is whether it is possible to integrate findings from different levels of explanation. Finally, the network approach offers an interesting alternative way of conceptualizing mental disorders. It has already generated a fair amount of discussion (cf. Cramer et al., 2010), but its ultimate standing in psychopathology research remains to be seen.