Diagnosis plays a central role in the treatment of psychological distress. The assignment of a diagnosis from the Diagnostic and Statistical Manual of Mental Disorders (DSM; American Psychiatric Association 2000) is often required by clinics and third-party payers to authorize treatment. Diagnoses can also aid in treatment planning, as many interventions are designed for specific diagnostic groups. This particular role of diagnosis is becoming increasingly important as many mental health organizations are turning to diagnosis-specific evidence-based treatments (EBTs) to improve service quality (e.g., Chorpita and Donkervoet 2005; Jensen-Doss et al. 2009). Finally, there is some evidence that an accurate diagnosis is an important precursor to treatment success (e.g., Jensen-Doss and Weisz 2008; Pogge et al. 2001).

Despite their importance, questions have been raised about the accuracy of the diagnoses generated by clinicians practicing in community settings. A recent meta-analysis of agreement between clinician-generated diagnoses and standardized diagnostic interviews (SDIs), research instruments with standardized sequencing of questions and algorithms to assign diagnoses, found that the average agreement for child diagnoses is kappa (κ) = 0.39 (Rettew et al. 2009), which falls only in the “fair” range of agreement (Landis and Koch 1977). Furthermore, studies comparing SDIs and clinician-generated diagnoses to external indicators of validity have found stronger support for the accuracy of SDIs (e.g., Basco et al. 2000).

In part because of these data, guidelines have been generated for evidence-based assessment (EBA) of adult (Hunsley and Mash 2005) and child (Mash and Hunsley 2005) psychopathology, typically providing suggestions for the domains to be assessed, the informants to involve, and the methods to gather data. Traditionally, the predominant diagnostic method has been the unstructured diagnostic interview (UDI), in which clinical judgment is used to guide question-asking and interpretation of information. Unfortunately, research has documented several information-gathering biases that could negatively impact the validity of this approach (Angold and Fisher 1999; Garb 1998, 2005). It is not surprising, therefore, that none of the EBA guidelines for child psychopathology specifically identify the UDI as an essential diagnostic method. Two other methods are consistently recommended. First, standardized rating scales are recommended for diagnostic screening (Klein et al. 2005; McMahon and Frick 2005; Silverman and Ollendick 2005; Youngstrom et al. 2005) or sometimes as a method of directly assessing diagnostic criteria (Pelham et al. 2005). Second, SDIs are often recommended for use in the second stage of in-depth assessment, following initial screening (Klein et al. 2005; McMahon and Frick 2005; Silverman and Ollendick 2005), although questions have been raised about the utility of currently available SDIs for disorders such as attention-deficit/hyperactivity disorder (Pelham et al. 2005) and pediatric bipolar disorder (Youngstrom et al. 2005).

Available data suggest that the diagnostic methods of usual care clinicians often do not map onto these EBA recommendations. Surveys indicate that the UDI is the assessment method used most often by psychologists (e.g., Cashel 2002). A survey of psychiatrists revealed similar practices, with a majority of respondents saying they never use standardized assessment tools for case identification (Gilbody et al. 2002).

These data raise questions about why clinicians use the diagnostic methods they do. Some have suggested that pressures to assign a diagnosis for authorization of services play a role (e.g., Jensen and Weisz 2002). Indeed, several studies have documented that clinicians are more likely to assign diagnoses to clients who they believe will pay using managed care than to self-pay clients (e.g., Lowe et al. 2007; Pomerantz and Segrist 2006). Other data suggest that some clinicians might also feel pressured to underdiagnose clients; surveys of psychiatrists (Setterberg et al. 1991) and counselors (Mead et al. 1997) suggest that deliberate underdiagnosis to avoid stigma may be a common practice.

These external influences on the assignment of diagnoses raise questions about whether clinicians feel that the assignment of an accurate diagnosis is clinically useful. If clinicians do not feel diagnosis is valuable for treatment planning, it follows that they would not invest significant effort in the diagnostic process. Studies assessing clinicians’ views of the DSM suggest this could be the case. Frazer et al. (2009), for example, found that social workers said that they primarily used the DSM for billing purposes. Similarly, surveys have shown that fewer than half of social workers (Kutchins and Kirk 1988), psychiatrists (Jampala et al. 1992) and psychologists (Miller et al. 1981) say that the DSM is useful for treatment planning. If clinicians primarily use diagnosis for purposes such as authorization of services, which simply requires the assignment of any diagnosis (or one of a specific set of diagnoses), then selecting an accurate diagnosis from among these options may seem less useful.

Finally, even if clinicians do feel diagnosis is clinically useful, it is possible that they face barriers to using standardized diagnostic tools, or have concerns about the usefulness of the tools themselves. Few data exist to test this possibility, particularly data focused specifically on diagnostic methods. Studies on clinician use of standardized outcome measures suggest that practical concerns, such as paperwork burden, also influence measure selection (Garland et al. 2003; Hatfield and Ogles 2007), as do concerns about the measures themselves, such as their relevance to subgroups of clients like ethnic minorities.

A recent, large, multidisciplinary survey examined child clinicians’ attitudes toward standardized assessment tools in general and their relation to self-reported use of a range of such tools (Jensen-Doss and Hawley 2010). Clinicians rated their views of the psychometric qualities of these tools, their benefit over clinical judgment alone, and their practicality. Doctoral-level clinicians provided more positive ratings on all three scales than master’s-level clinicians, and psychologists provided more positive ratings than non-psychologists, although degree was no longer significant when tested simultaneously with discipline. Private practitioners also had less positive ratings on the psychometric quality and clinical judgment scales than other providers. All three attitude scales predicted self-reported standardized assessment tool use, although practical concerns were the strongest, and the only independent, predictor of use.

The present study provides follow-up analyses of this survey sample, focusing specifically on clinicians’ diagnostic attitudes and practices. Prior studies have not simultaneously examined clinicians’ attitudes toward the utility of diagnosis and toward specific diagnostic tools, so it is not clear which of these factors is more closely related to diagnostic practices. Understanding these attitudes would help inform efforts to encourage clinicians to engage in more evidence-based diagnosis. Consistent with the theory of planned behavior (Ajzen 1991), meta-analyses suggest that attitudes toward a given behavior are a good predictor of an individual’s intention to perform the behavior and that intentions are in turn related to behavior (Ajzen 1991; Armitage and Conner 2001; Godin and Kok 1996). Improving clinicians’ diagnostic attitudes could therefore lead to improved diagnostic behaviors through the pathway of increased intention.

Understanding clinician characteristics associated with these attitudes would also help training efforts be more targeted. Initiatives to target diagnostic practices within organizations would particularly benefit from data on differences in attitudes between clinicians with different levels of training and from different disciplinary backgrounds, variables often associated with different roles in organizations. Prior surveys on diagnosis have typically included clinicians from only one discipline and have generally not compared clinicians with different levels of training. In addition, given the different reimbursement and authorization requirements faced by private practitioners relative to agency employees, private practitioners might differ in their views of the utility of diagnosis, as well as in their views of specific diagnostic tools. Also, given clinician concerns about the utility of standardized assessment tools for diverse clients (Garland et al. 2003), clinicians who see more clients from low-income or ethnic minority backgrounds might have less positive views of standardized diagnostic tools. Finally, exploring the relation between therapist demographic characteristics and attitudes would help identify clinicians who might be more or less open to training efforts.

The present study was designed to examine the attitudes that might underlie the diagnostic practices of child-serving clinicians. Utilizing a large, national, interdisciplinary sample, we examined clinician views about the utility of diagnosis and clinician views regarding standardized diagnostic instruments. We also examined the relation between these attitudes and therapists’ demographic (age, gender, ethnicity), professional (degree, discipline), and practice characteristics (private practice setting, proportion of low-income clients, proportion of ethnic minority clients). Finally, we examined the association between these attitudes and the frequency of therapists’ self-reported diagnosing of clients and use of SDIs and standardized checklists.

Method

Participants

Participants were 1,678 child-serving counselors (19.8%), marriage and family therapists (MFTs; 17.2%), social workers (18.3%), psychologists (26.7%), and psychiatrists (18.0%) who participated in a survey about youth clinical care (Jensen-Doss and Hawley 2010). The sample was primarily female (63.8%) and Caucasian (90.3%) and was nearly evenly divided between master’s (47.3%) and doctoral level (52.7%) providers. Table 1 details participant characteristics.

Table 1 Provider characteristics

Procedures

The tailored design method (Dillman 2000) was used to develop the survey. The survey was then piloted with a sample of 14 mental health providers. Providers were asked to complete the survey, provide comments in an open-ended section, and participate in focus groups to provide feedback regarding the measure (e.g., Do the items include jargon not commonly used by all disciplines?) and the planned recruitment methods. A second randomized pilot study of 500 providers was then used to determine the optimal level of survey incentives, indicating that a non-contingent $2 bill was the most cost-effective incentive (Hawley et al. 2009).

The final survey was mailed to 5,000 mental health providers, including 1,000 from each of five professional organizations (American Counseling Association; American Association for Marriage and Family Therapy; American Academy of Child and Adolescent Psychiatry; American Psychological Association; National Association of Social Workers). These organizations provided mailing lists of random, representative membership samples. Individuals within the US who indicated a clinical interest in children, adolescents, or families were randomly selected for participation. Individuals who received the survey, but who did not provide services to youths, were asked to indicate this and return the survey uncompleted.

The final survey mailing featured several aspects of the tailored design method (Dillman 2000) in order to increase participation rates, including (a) personally addressed and hand-signed cover letters with first-class stamps on both the individually addressed outgoing envelope and the return envelope, (b) a unique survey appearance with landscape orientation and photographs of children to help the survey stand out from other mail, and (c) formatting features designed to help participants complete and return it with ease (e.g., important words were bolded or italicized and each section of the survey was grouped using borders). Clinicians received up to five mailings. The first was a pre-notice letter announcing the upcoming survey. The second included the survey, a $2 bill, and a return envelope. The third was a postcard that thanked those who had returned the survey and reminded non-respondents to participate. The fourth was sent to non-respondents only and included another copy of the survey. The fifth was sent to any final non-respondents and consisted of a third copy of the survey.

Of the 5,000 individuals selected for participation, 347 (6.9%) had incorrect addresses. Of the 4,653 individuals successfully contacted, 2,863 (61.6%) responded [1,639 (35.2%) did not respond and 151 (3.2%) declined participation]. Of the responders, 1,143 (37.9%) indicated they did not provide services to youths. Of the 1,720 remaining participants, six were excluded from this study because their highest degree was a bachelor’s degree, which we felt introduced unnecessary variability into the sample. Excluding these individuals, along with two others with missing degree data, yielded a final possible sample size of 1,712. Of these, 1,678 completed at least one of the survey items for the present study and were included in this sample. These individuals did not differ significantly from the 34 potential participants who did not complete any items on any of the demographic, professional, or practice characteristics listed in Table 1. Finally, items assessing clinicians’ diagnostic practices and their views of diagnostic tools were administered only to the 1,512 (90.1%) clinicians who indicated that they conduct assessments with youths presenting for treatment. These clinicians were less likely to be master’s level [χ2 (1, N = 1678) = 26.53, P < 0.001] or a counselor [χ2 (1, N = 1678) = 30.77, P < 0.001], and more likely to be a psychiatrist [χ2 (1, N = 1678) = 30.33, P < 0.001], than the 151 clinicians who indicated they did not conduct assessments. They did not differ on any other demographic, professional, or practice characteristics, or on their ratings on the utility of diagnosis scale (see Measures).

Measures

Diagnosis Attitude Scales

Clinicians completed two diagnosis attitude scales written for the current study. Items were generated based on prior studies of clinician attitudes toward diagnosis (e.g., Jampala et al. 1992; Kutchins and Kirk 1988; Miller et al. 1981) and EBA (Garland et al. 2003; Gilbody et al. 2002) and were then subjected to the piloting procedures described above. All attitude items were rated on a five-point Likert scale from 1 (Strongly disagree) to 5 (Strongly agree), and negatively worded items were reverse coded such that higher scale scores indicated more positive attitudes. Scales were scored by averaging items; participants missing more than two items on a given scale were treated as missing.
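To make this scoring rule concrete, the following is a minimal Python sketch, assuming a pandas data frame of item responses; the function name, column layout, and reverse-keyed item list are hypothetical, as the survey's actual data structures are not reported here.

```python
import numpy as np
import pandas as pd

def score_scale(items: pd.DataFrame, reverse_keyed: list, max_missing: int = 2) -> pd.Series:
    """Score a 1-5 Likert scale as the mean of its items.

    Columns named in `reverse_keyed` are reverse coded (1 <-> 5) so that
    higher scores always indicate more positive attitudes. Respondents
    missing more than `max_missing` items receive a missing (NaN) score.
    """
    scored = items.copy()
    scored[reverse_keyed] = 6 - scored[reverse_keyed]  # reverse code: 1->5, 2->4, ...
    score = scored.mean(axis=1, skipna=True)           # average the available items
    score[scored.isna().sum(axis=1) > max_missing] = np.nan
    return score
```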

The utility of diagnosis scale consisted of five items assessing clinician attitudes toward the usefulness of diagnosis in treatment (e.g., “Accurate diagnosis is an important part of my treatment planning,” see Table 2). When the scale properties of these items were examined, the scale had somewhat low internal consistency (α = 0.62), which did not improve if items were omitted. However, given that this scale was short, which can reduce the magnitude of alpha (e.g., Cortina 1993), we also examined the corrected item-total correlations for this scale. All were above 0.30, exceeding the recommended cutoff of 0.20 for inclusion in a scale (Kline 1986).

Table 2 Descriptive statistics for the utility of diagnosis and standardized diagnosis scales and items
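For readers who wish to reproduce these reliability checks, the quantities involved are standard and simple to compute. A minimal Python sketch follows; the data frame of item responses is assumed, and complete cases are assumed for simplicity:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of the remaining items (item excluded)."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}
    )
```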

Participants who conducted assessments completed the standardized diagnosis scale, which consisted of seven items assessing opinions about standardized diagnostic tools (e.g., “Standardized measures help with accurate diagnosis,” see Table 2). Items were drawn from a broader measure of attitudes toward standardized assessment tools in general, the Attitudes toward Standardized Assessment scales (ASA; Jensen-Doss and Hawley 2010), and had good internal consistency (α = 0.73).

Diagnostic Practices

The providers who indicated they conducted assessments were also asked to rate the frequency with which they assess for diagnoses on a scale from 1 (almost never) to 5 (almost always). Using the same frequency scale, they were also asked how often they use two types of standardized diagnostic tools: “Structured or standardized diagnostic interviews for child diagnosis” (SDIs) and “Standardized checklists for child/family symptoms or functioning.” They were provided with a definition and examples of each type of measure.

Data Analysis Plan

The study questions were addressed utilizing t-tests and multiple regression. Prior to analysis, the continuous variables were examined for outliers and normality. There were no univariate outliers. Two independent variables, % minority caseload and % low income caseload, were positively skewed and platykurtic; they were therefore dichotomized at the median (25% for both variables). Frequency of SDI use was also positively skewed. As it was desirable to keep this variable continuous in order to facilitate comparison with the analyses of checklist use, an inverse transformation was used for this variable. Therapist ethnicity was also dichotomized into Minority (1) versus Caucasian (0), as the sample was more than 90% Caucasian.
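These variable preparation steps are simple to express in code. A minimal Python sketch follows, using toy data; all variable names and values are hypothetical, and where cases exactly at the median fall is a detail not reported here:

```python
import pandas as pd

# Toy data standing in for the survey variables (names and values hypothetical).
df = pd.DataFrame({
    "pct_minority": [10, 25, 60, 80],      # % ethnic minority caseload
    "pct_low_income": [5, 25, 40, 90],     # % low income caseload
    "sdi_use": [1, 1, 2, 5],               # 1 = almost never ... 5 = almost always
})

# Median splits at 25% for the two skewed, platykurtic caseload variables.
df["minority_high"] = (df["pct_minority"] > 25).astype(int)
df["low_income_high"] = (df["pct_low_income"] > 25).astype(int)

# Inverse transformation for the positively skewed SDI use ratings. Note that
# 1/x reverses the ordering, so the transformed variable (or its regression
# coefficients) would be re-reflected before interpretation.
df["sdi_use_inv"] = 1.0 / df["sdi_use"]
```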

Given our large sample size, an alpha of 0.001 was employed; this P value corresponded to effect sizes that fell below conventions for a “small” effect size, so this was deemed appropriate to control for Type I error without missing meaningful patterns in the data. Cohen’s (1988) d was used as the effect size indicator for comparisons of means, with d = 0.20 considered a small effect, d = 0.50 a medium effect, and d = 0.80 a large effect. For the multiple regressions, R2, or the proportion of variance explained, was used as the effect size indicator; R2 values of 0.02, 0.13, and 0.30 are considered small, medium, and large, respectively (Cohen 1988).

Examination of missing data indicated that the rate of missingness was 5% or less for all variables, suggesting that any missing data procedure would likely yield similar results (Tabachnick and Fidell 2007). The analyses were therefore conducted using pairwise deletion.

Results

Clinician Attitudes toward the Utility of Diagnosis

Participants provided a mean rating of 3.15 on a scale of 1 to 5 (SD = 0.71) on the utility of diagnosis scale. This was significantly different from a neutral rating of 3 [t(1633) = 8.49, P < 0.001], with the associated small effect size (d = 0.21) indicative of mildly positive views toward the utility of diagnosis. Examination of the item responses presented in Table 2 shows that clinicians strongly endorsed the notion that accurate diagnosis is important in treatment planning (a large effect size of d = 1.04), but also had moderately strong beliefs that most clients come to treatment for issues other than a diagnosis (d = 0.67). Clinicians also disagreed that underdiagnosis is necessary to avoid stigma (d = −0.24). Other item effect sizes fell below the cutoff for a small effect.
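As a consistency check on these scale-level statistics, Cohen's d for a single-sample comparison is the mean difference from the neutral point scaled by the standard deviation, and it relates to the t statistic through the sample size; the reported values agree up to rounding:

```latex
d = \frac{\bar{X} - \mu_0}{s} = \frac{3.15 - 3.00}{0.71} \approx 0.21,
\qquad
t = d\sqrt{n} \approx 0.21 \times \sqrt{1634} \approx 8.5
```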

Provider degree, discipline, and private practice setting were significant (P < 0.001) univariate predictors of this scale (Table 3). Doctoral-level clinicians provided higher ratings than master’s-level clinicians, a small effect (R2 = 0.02). Professional discipline explained 4.9% of the variance in ratings on this scale, also a small effect. Psychiatrists provided higher ratings than all other disciplines (all Ps < 0.001). Private practitioners provided significantly lower ratings than professionals working in other settings (R2 = 0.022), a small effect.

When all predictors were entered into a single multiple regression analysis, professional discipline continued to be a significant predictor, with the pairwise comparisons between psychiatrists and the other disciplines remaining significant (Table 3). Private practice setting also remained significant, but degree was no longer significant. The collective set of predictors explained 6.5% of the variability in the utility of diagnosis scale, a small effect.

Table 3 Demographic, professional, and practice characteristics as predictors of attitudes

Clinician Attitudes toward Standardized Diagnostic Tools

Participants provided a mean rating of 3.39 on a scale of 1 to 5 (SD = 0.54) on the standardized diagnosis scale. The single-sample t-test comparing this mean to the neutral rating of 3 was significant [t(1453) = 27.57, P < 0.001] and associated with a medium effect size (d = 0.73), indicative of moderately positive views toward standardized diagnostic tools. Ratings on this scale were significantly higher than those on the utility of diagnosis scale [t(1434) = 10.98, P < 0.001, d = 0.35]. Item responses indicated that participants strongly agreed that standardized measures help with accurate diagnosis (d = 1.18), detection of comorbidity (d = 0.94), and differential diagnosis (d = 0.83) and disagreed that standardized checklists lack utility because they do not map onto DSM criteria (d = −0.66; see Table 2). Other item effect sizes fell below the cutoff for a small effect.

Because this scale was only administered to participants who indicated they conduct assessments, it is possible that the sample’s ratings on this scale would have been more negative had the clinicians who chose not to conduct assessments been included. To test the “worst case scenario” impact of this design feature, we conducted a follow-up analysis in which we assigned all clinicians who did not engage in assessment a value of 1 (the most negative rating possible on the scale). Using this approach, the sample mean on the standardized diagnosis scale decreased to 3.20 on a scale of 1 to 5 (SD = 0.82, d = 0.24), a small, rather than medium, effect size, but still reflective of positive views toward standardized diagnostic instruments.
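This worst-case substitution is straightforward to reproduce. A minimal Python sketch follows, using toy values; the actual scale scores are not reproduced here, and non-assessors are assumed to carry missing values on the scale:

```python
import numpy as np
import pandas as pd

# Toy scores: scale means for assessors, NaN for clinicians who skipped
# the assessment branch of the survey (values hypothetical).
std_dx = pd.Series([3.4, 3.9, 2.8, np.nan, 3.1, np.nan])

# Worst-case substitution: give every non-assessor the most negative
# possible rating (1) and recompute the sample mean and SD.
worst_case = std_dx.fillna(1.0)
print(round(worst_case.mean(), 2), round(worst_case.std(ddof=1), 2))
```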

Provider degree, professional discipline, and private practice setting were significant (P < 0.001) univariate predictors of the standardized diagnosis scale (see Table 3). Doctoral-level clinicians provided higher ratings on this scale than master’s-level clinicians, a small effect (R2 = 0.04). Professional discipline explained 8.1% of the variance in ratings on this scale, also a small effect. Psychologists provided more positive ratings than all other disciplines (all Ps < 0.001). Private practitioners also provided significantly lower ratings than professionals working in other settings, although the difference did not exceed the cutoff for a small effect size (R2 = 0.012).

When all predictors were examined simultaneously, professional discipline continued to be a significant predictor of attitudes toward standardized diagnostic tools, with the pairwise comparisons between psychologists and the other disciplines remaining significant (Table 3). Private practice setting also remained significant, but degree did not. The collective set of predictors explained 12.1% of the variability in the standardized diagnosis scale, approaching a medium effect.

Relation of Clinician Attitudes to Diagnostic Practices

To examine the predictive validity of the attitude measures, we examined their individual and joint relation to diagnostic practices, including the frequency with which clinicians diagnose clients, use SDIs, and use standardized checklists, after controlling for clinician demographic (age, gender, ethnicity), professional (degree, discipline) and practice (private practice setting, low income caseload, ethnic minority caseload) characteristics. The results of these multiple regression analyses are presented in Table 4. Analyses utilizing the transformed versions of the diagnosis and SDI use variables yielded nearly identical results to analyses using the non-transformed data. The results using the non-transformed data are therefore reported, as they are more straightforward to interpret. When tested individually, the utility of diagnosis scale was a significant (P < 0.001), positive predictor of frequency of diagnosing (ΔR2 = 0.033), SDI use (ΔR2 = 0.008) and standardized checklist use (ΔR2 = 0.015), although only the relation with diagnosing exceeded the cutoff for a small effect size. The standardized diagnosis scale was also a significant (P < 0.001), positive predictor of frequency of diagnosing (ΔR2 = 0.012), SDI use (ΔR2 = 0.020), and checklist use (ΔR2 = 0.036), with the effects for the two instrument use variables falling in the small range.

Table 4 Regressions predicting frequency of diagnostic practices
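The incremental (ΔR²) values reported here amount to comparing nested regression models, with and without a given attitude scale, on top of the covariate block. A minimal Python sketch using statsmodels follows; all variable names are hypothetical stand-ins for the characteristics described above:

```python
import pandas as pd
import statsmodels.formula.api as smf

def delta_r2(data: pd.DataFrame, outcome: str, covariates: str, predictor: str) -> float:
    """R-squared gain from adding `predictor` to a covariates-only OLS model."""
    base = smf.ols(f"{outcome} ~ {covariates}", data=data).fit()
    full = smf.ols(f"{outcome} ~ {covariates} + {predictor}", data=data).fit()
    return full.rsquared - base.rsquared

# Covariate block: demographic, professional, and practice characteristics.
covs = ("age + gender + minority + degree + C(discipline) + "
        "private_practice + low_income_high + minority_high")

# Example call: delta_r2(survey, "diagnose_freq", covs, "utility_of_diagnosis")
```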

When examined simultaneously, the two attitude scales explained 3.4% of the variance in frequency of diagnosis, 2.1% of the variance in SDI use, and 4.2% of the variance in checklist use, all small effects. The utility of diagnosis scale was the only independent predictor of frequency of diagnosis. The standardized diagnosis scale was the only independent predictor of use of the two diagnostic tools.

Discussion

The present study examined attitudes toward the utility of diagnosis and toward standardized diagnostic tools in a large, multidisciplinary sample of child clinicians. Given the questions that have been raised about the accuracy of clinician-generated diagnoses, understanding the attitudes that underlie diagnostic practices might facilitate efforts to train clinicians in evidence-based diagnosis. On average, clinicians indicated that they see diagnosis as clinically useful and that they hold positive views toward the use of standardized tools in diagnosis.

Examination of item responses indicates some differences between the present results and what might be expected based on prior studies. For example, clinicians in this sample felt that diagnosis is an important part of treatment planning, which runs contrary to previous studies of clinicians’ views of the DSM (Jampala et al. 1992; Kutchins and Kirk 1988; Miller et al. 1981). This difference may be due to the present study asking about diagnosis generally, rather than the DSM specifically, or due to increasing acceptance of diagnosis by clinicians since those earlier surveys. Also unexpected based on prior studies (Mead et al. 1997; Setterberg et al. 1991), the clinicians sampled disagreed that underdiagnosis is necessary to avoid stigma. However, this difference may be due to item phrasing; previous studies asked whether respondents thought underdiagnosis was common, not whether they agreed with or engaged in the practice.

Both attitude scales were predicted by clinician degree, discipline, and private practice setting. However, only discipline and practice setting were significant independent predictors, suggesting the more positive views held by doctoral-level clinicians may be a function of discipline or practice setting. For both the utility of diagnosis and standardized diagnosis scales, private practitioners held more negative attitudes than providers working in other settings, such as schools or outpatient clinics. It may be that private practitioners are less likely to work under specific assessment requirements than providers working in agencies. Without agency mandates or an agency culture promoting diagnosis, private practitioners may simply have fewer opportunities to form positive attitudes toward these practices.

When disciplinary differences in the two attitude scales were examined, two different patterns emerged. For the utility of diagnosis scale, psychiatrists reported significantly more positive views than all other disciplines. Given the central role that the American Psychiatric Association has taken in creating and disseminating the DSM, it is not surprising that psychiatrists are the most likely to see diagnosing as a useful part of their practice. Conversely, psychologists were more likely than all other disciplines to provide positive ratings on the standardized diagnosis scale, which likely reflects the greater emphasis on assessment in psychology training programs and in psychologists’ job duties relative to other disciplines. The fact that non-psychologists, who are most likely to be serving as the front-line clinicians in many mental health agencies, are less likely to hold positive attitudes toward these standardized diagnostic tools suggests that, even when agencies have administrative support for implementing these tools, such efforts may not be as welcomed or understood by non-psychologist clinicians. Designing evidence-based diagnosis implementation efforts with an eye toward addressing the concerns of front-line clinicians may help increase the likelihood of their success.

Both attitude scales were positively associated with clinician self-reported diagnostic practices, including frequency of diagnostic assessment, SDI use, and standardized checklist use, even after controlling for provider personal, professional, and practice characteristics. When the two scales were examined simultaneously, however, they appeared to be independently related to different aspects of the diagnostic process. The utility of diagnosis scale was independently predictive of the frequency of diagnostic assessment, but not the selection of specific diagnostic methods, whereas the opposite was true for the standardized diagnosis scale. This pattern of findings suggests that clinician opinions about the clinical utility of diagnosis are a distinct construct from their views of specific approaches to diagnosing. It indicates that efforts to get clinicians to assess for diagnoses in general might benefit from addressing their concerns about the utility of that process, whereas efforts to target their choice of diagnostic tools might need to focus more on their views of the instruments themselves. For example, the present findings suggest that one barrier to convincing clinicians to assign diagnoses is their belief that most families come to treatment to work on problems of daily living, rather than a diagnosis. Training clinicians to integrate diagnostic information into a case conceptualization that also considers issues such as environmental stressors might help them see the role diagnoses can play in these problems of daily living.

The present study did have some limitations that warrant consideration in the interpretation of its findings. First, because of the branching format used in the larger survey, the standardized diagnosis scale was only administered to participants who indicated that they conduct assessments with youths presenting for treatment; ratings on this scale were therefore not available for the 10% of the sample who did not do so. These individuals did not differ on their ratings of diagnosis utility, but it is possible that they held more negative views of standardized diagnostic tools, a possibility that could not be tested. This means that the sample’s true views of standardized diagnostic tools may have been more negative on average than what was reported. However, our analyses suggested that, even if all of these non-assessors had reported the most negative views possible on this scale, the overall sample mean still would have been positive and reflective of a small effect size difference from neutral. Second, while a strength of the study is that it examined the link between diagnostic attitudes and behaviors, the reports of behaviors were limited to self-reports. Future studies would benefit from the incorporation of more objective information about clinician diagnostic behavior.

Third, while several predictors of attitudes were found, significant variability in attitudes remained unexplained, suggesting additional predictors exist that warrant inclusion in future studies. For example, perceived agency, supervisor, and colleague support of EBTs, clinician quality of training in EBTs, and institutional barriers (e.g., caseload, session length) have been found to be related to clinician attitudes toward EBTs (Jensen-Doss et al. 2009). Other practice characteristics might also be predictive of attitudes. For example, given that different measures exist for different types of problems and for different age groups, clinician attitudes might vary as a function of the presenting problems or age groups typically treated by the clinician.

Finally, the reliability of the utility of diagnosis scale was somewhat low. Low reliability can limit one’s ability to find significant results using a scale, given the increased error variance. However, the impact of this appears to have been limited, given that an identical number of significant findings was obtained for this scale as for the standardized diagnosis scale.

Despite these limitations, this study had several strengths. To our knowledge, this is the first study to simultaneously examine attitudes toward diagnosis and toward diagnostic tools, and the first to systematically examine predictors of these attitudes across a range of disciplines. The study also utilized a rigorous approach to developing the survey and recruiting participants, the tailored design method (Dillman 2000), which resulted in a response rate and sample size that exceeded those of the majority of previous surveys of clinician assessment practices. In addition, the rate of missing data was very low.

Diagnosis is playing an increasingly important role in the provision of mental health services, particularly as the field focuses on making clinical practice more evidence-based. Given that many clinicians may not generate diagnoses that match those assigned in research studies, additional focus on disseminating and implementing evidence-based diagnostic tools is likely needed. To the extent that clinicians are not engaged in evidence-based diagnosis, it may be difficult for them to make appropriate use of research findings in their practice (Jensen and Weisz 2002). In particular, as clinicians attempt to use diagnosis-specific EBTs, providing therapists with training on these treatments without associated training in evidence-based assessment of psychopathology may lead clinicians to use EBTs with inappropriate clients or without all of the relevant clinical information needed for their effective use (Jensen-Doss and Weisz 2008; Weisz and Addis 2006). This could, in turn, lead to clinicians having little success implementing these treatments and hinder future uptake of additional EBTs. The present data suggest that, while clinician views of the utility of diagnosis are associated with their diagnostic practices, their choice of diagnostic methods seems most strongly associated with their views of the tools themselves. Efforts to improve clinicians’ views of these tools in particular may help facilitate the success of future evidence-based practice efforts.