Introduction

Research suggests the importance of organizational characteristics as determinants of behavioral health service delivery for youth in the community (e.g., Aarons and Sawitzky 2006a, b; Glisson 2002; Glisson et al. 2010; Hoagwood et al. 2001; Rogers 2003). Among organizational factors, organizational culture and climate have been found to be particularly important. Although definitions of organizational culture and climate are variable (Verbeke et al. 1998), organizational culture can be defined as shared employee perceptions of norms, values, behavioral expectations and assumptions that guide employee behavior (Cooke and Rousseau 1988), whereas organizational climate refers to shared employee perceptions regarding the effect of the work environment on employees’ personal well-being (i.e., molar organizational climate; James et al. 1978) and specific strategic or procedural outcomes (i.e., strategic climate; Ehrhart et al. 2014; Schneider et al. 2013). In children’s service systems, several domains of organizational culture and organizational climate have been associated with a host of important outcomes including clinician turnover (Aarons and Sawitzky 2006a; Glisson 2002; Glisson et al. 2008b), service quality (Glisson and Hemmelgarn 1998; Olin et al. 2014), youth behavioral health outcomes (Glisson and Green 2011; Glisson and Hemmelgarn 1998; Williams and Glisson 2014), clinician attitudes toward adopting evidence-based practices (Aarons and Sawitzky 2006b), self-reported implementation of evidence-based strategies (Beidas et al. 2015), and sustainment of new practices (Glisson et al. 2008b). The most commonly used measure of organizational culture and organizational climate in public children’s behavioral health and child welfare systems is the Organizational Social Context (OSC) measure (Glisson et al. 2008a, b, 2012).

Several decades of research on organizational culture and climate have produced important advances in the quantitative measurement of these constructs (Klein and Kozlowski 2000; Zyphur et al. 2016). One point of consensus is that organizational culture and climate are socially constructed, shared characteristics of the work environment (Ostroff et al. 2003; Verbeke et al. 1998). It follows that members of an organizational unit (e.g., organization, program, team) should be in agreement with respect to their experience of culture and climate (Klein and Kozlowski 2000) and that valid and reliable inferences about an organization’s culture and climate require confidence in the extent to which observed scores reflect a shared reality among employees.

Important differences have emerged in the quantitative strategies used to measure organizational culture and climate, and these differences carry implications for research design, the validity of inferences, and potential organizational interventions. Guidelines for some measures instruct users to survey front-line employees from the targeted work unit(s) (i.e., organization). Mid-level managers and upper leadership are typically excluded from completing measures of culture and climate because it is believed that the experiences of front-line service providers are most germane for shaping service delivery processes. In contrast, guidelines for other culture and climate measures instruct users to sample only higher-level managers, administrators, or other leaders, who are conceptualized as knowledgeable key informants (e.g., Cameron and Quinn 2011). This approach sometimes incorporates other key informants, along with leaders, into a consensus-building process for developing culture and climate ratings; key informants are construed as accurate raters of unit-level constructs. A third, hybrid approach samples both administrators and clinicians and combines ratings of culture and climate from the two groups (e.g., Beidas et al. 2015). This approach relies on the assumption that administrators and clinicians provide equally useful and concordant information regarding an organization’s culture and climate.

Questions about whether leadership and front-line employees share perceptions of organizational culture and climate have emerged from this third approach. A number of terms have been used to describe this concept, including concordance, agreement, discrepancy, and perceptual distance (see Gibson et al. 2009; Hasson et al. 2016). Empirical work in the broader organizational literature indicates there is a lack of concordance between leader and front-line employee ratings of organizational culture (Martin et al. 2006; Zyphur et al. 2016) in hospitals (Hansen et al. 2011; Hartmann et al. 2008; Rosen et al. 2010; Singer et al. 2009) and primary care clinics (Carlfjord et al. 2010). Importantly, emerging research suggests that investigation of concordance between leadership and front-line employee reports of organizational constructs is critical given that degree of concordance may influence important organizational outcomes. For example, greater leadership and employee concordance prior to an organizational intervention resulted in better organizational outcomes (Hasson et al. 2016). Given evidence from a meta-analysis that found that organizational culture does not exert the same effect in all industries (Hartnell et al. 2011), it is important to explore questions of concordance in specific industries.

To date, the concordance between administrator and clinician reports of organizational culture and climate in public behavioral health settings has been infrequently studied. A case study describes a potential lack of concordance between administrators and clinicians in one organization, suggesting that administrators may rate their organization as having a more positive culture and climate than clinicians do (Wolf et al. 2014); because only one organization was examined, statistical comparisons were not possible. In another study, over 60% of teams composed of administrators and clinicians showed significant variability in concordance on administrator leadership ratings (Aarons et al. 2016). Additionally, lack of concordance between administrators and clinicians was associated with decrements in organizational climate and culture (Aarons et al. 2015). These emerging data suggest that perceptions of those in different positions within behavioral health settings, even within the same organization, may vary and that lack of agreement has implications for organizational functioning and implementation of innovations. Understanding this phenomenon is critical given research indicating the importance of leader and front-line employee concordance for effective work-unit functioning (Bashshur et al. 2011; Cole et al. 2013; Gibson et al. 2009).

The purpose of the current study is to examine the concordance between administrators’ (i.e., supervisors, clinical directors, and executive directors) and clinicians’ reports of organizational culture and climate in a large public behavioral health system. There are three potential scenarios: (a) administrators perceive their organizational culture and climate to be more positive than clinicians do, (b) administrators are in agreement with clinicians in their perceptions of organizational culture and climate, and (c) administrators perceive their organizational culture and climate to be more negative than clinicians do (Fleenor et al. 1996; Yammarino and Atwater 1997). Based on the previous literature, we posit the following hypotheses:

Hypothesis 1

Generally, administrators will rate characteristics of their organizational culture and climate more positively compared to clinicians.

Hypothesis 2

Administrators and clinicians in smaller programs will provide more concordant reports of organizational culture and climate compared to administrators and clinicians in larger programs.

Method

Agencies and Participants

We purposively sampled from 29 child-serving public behavioral health organizations in the City of Philadelphia as part of a larger study investigating implementation of evidence-based practices (Beidas et al. 2013). These 29 organizations were selected out of approximately 100 because they served the largest proportion of youth through their outpatient behavioral health programs. To be eligible, agencies could not be specialty clinics (e.g., autism clinics) and must have billed for providing outpatient services to youth in the year prior. Of these 29 organizations, 21 (72%) agreed to participate, representing 28 outpatient programs (several organizations had more than one site). Approximately 58% of clinicians employed by the 28 programs participated in the study [more than half (57%) of programs had a 60% response rate or higher; few programs (<15%) had response rates <40%]. We invited all therapists who provided services to youth through the outpatient program to participate, regardless of whether they were implementing an evidence-based practice. The work group unit of interest comprised therapists employed within the outpatient program who served youth and families. In some agencies there was a specific child outpatient program, whereas in others youth and families were seen within a general outpatient program. The sample included 22 organizations representing 28 outpatient programs, 73 administrators (i.e., 22 supervisors, 29 clinical directors, and 22 executive directors), and 247 clinicians.

Procedure

This study was approved by the University of Pennsylvania and City of Philadelphia Institutional Review Boards. The person identified as the leader of the organization was approached to solicit his/her organization’s participation. A one-time 2-h meeting was scheduled with potential participants from the outpatient program, during which lunch was provided. When there was more than one program and/or site within an organization, separate meetings were held at the location of each program. During this meeting, the research team gave an overview of the study, obtained written informed consent, and collected a measure of organizational culture and climate. Clinicians completed the measure together in one location; all administrators completed the measure separately. All were assured of the confidentiality of their responses. All participants were compensated $50 for participation in the study.

Measures

Participant Demographics

Participants were designated as administrators (i.e., executive directors, clinical directors, and supervisors) or clinicians. This designation was based upon organizational structure reported by organizational leadership. Participants also completed a brief demographics questionnaire including age, race/ethnicity, educational background, and years of experience (Weisz 1997).

Organizational Demographics

Organization size was determined from the number of unique child clients seen in each outpatient program in 2014 (data provided by the City of Philadelphia Community Behavioral Health; Sarah Chen, PhD, MSW, personal communication, October 30th, 2014). Thus, we characterized size at the program level rather than the organizational level. In the case of organizations with multiple programs (i.e., providing outpatient services in multiple sites), we divided the number of clients by the number of sites where the programs were offered. We split the programs at the median (range = 186–2294 youth) to categorize them as small (serving less than 466 clients; k = 14) or large (serving 466 clients or more; k = 14). Consistent with previous studies (Aarons et al. 2009), each program (K = 28), rather than each organization (K = 22), was treated as a distinct unit because of largely different leadership structures, locations, and staff.
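The size classification described above can be sketched as follows; the client counts here are hypothetical, while the cutoff (466 clients) and the even division of clients across an organization's sites mirror the rule reported in the text.

```python
# Sketch of the program-size classification described in the text.
# The organizations below are hypothetical; the cutoff (466) is from the text.

def clients_per_program(total_clients: int, n_sites: int) -> float:
    """Divide an organization's unique child clients evenly across its sites."""
    return total_clients / n_sites

def size_category(n_clients: float, cutoff: int = 466) -> str:
    """Median-split rule from the text: fewer than 466 clients = small."""
    return "small" if n_clients < cutoff else "large"

# Hypothetical organizations: (unique child clients served, number of sites)
orgs = [(930, 2), (400, 1), (2294, 1), (186, 1)]
labels = [size_category(clients_per_program(c, s)) for c, s in orgs]
print(labels)  # -> ['small', 'small', 'large', 'small']
```

Note that the first hypothetical organization serves 930 clients overall but, split across two sites, falls just under the cutoff and is classified as small.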

Organizational Social Context Measurement System (OSC) (Glisson et al. 2008a, b, 2012)

The OSC includes 105 items measuring six dimensions that were developed in multiple validity and reliability studies over three decades to assess organizational culture and organizational climate in mental health and social service organizations (Glisson et al. 2008a). The organizational culture and climate scales are profiled using T-scores, with a mean of 50 and a standard deviation of 10, established from a normative sample of 100 mental health organizations nationally (Glisson et al. 2008a).
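T-score profiling of this kind follows the standard linear transformation of a raw scale score against normative statistics; the sketch below illustrates the arithmetic, with a hypothetical normative mean and SD rather than the actual OSC norms.

```python
# Illustrative T-score conversion: standardize a raw scale score against
# normative statistics, then rescale to mean 50, SD 10. The normative
# mean and SD here are hypothetical values, not the OSC norms.

def t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw score to a T-score (mean 50, SD 10)."""
    return 50 + 10 * (raw - norm_mean) / norm_sd

# A program scoring exactly one normative SD above the mean profiles at T = 60.
print(t_score(raw=3.5, norm_mean=3.0, norm_sd=0.5))  # -> 60.0
```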

Organizational culture is defined as the expectations that drive the way work is done in an organization and includes three primary dimensions: proficiency, rigidity, and resistance. Proficient cultures are those in which clinicians prioritize the well-being of clients and are expected to be competent. Proficiency includes two subscales: seven items measuring responsiveness (e.g., “Members of my organizational unit are expected to be responsive to the needs of each client”) and eight items measuring competence (e.g., “Members of my organizational unit are expected to have up-to-date knowledge”). Rigid cultures are those in which clinicians have little autonomy. Rigidity includes two subscales: seven items measuring centralization (e.g., “I have to ask a supervisor or coordinator before I do almost anything”) and seven items measuring formalization (e.g., “The same steps must be followed in processing every piece of work”). Resistant cultures are those in which clinicians are expected to show little interest in new ways to provide services and to suppress efforts for change. Resistance includes two subscales: six items measuring apathy (e.g., “Members of my organizational unit are expected to not make waves”) and seven items measuring suppression (e.g., “Members of my organizational unit are expected to be critical”). In our sample, all dimensions of organizational culture demonstrated acceptable internal consistency (α proficiency = 0.91; α rigidity = 0.70; α resistance = 0.91).

Organizational climate refers to shared perceptions among workers in the same organization of the psychological impact of their work environment on their well-being and functioning (i.e., molar organizational climate). The OSC measures climate on three dimensions: engagement, functionality, and stress. Engaged climates are those in which clinicians feel they can accomplish worthwhile things and remain invested. Engagement includes two subscales: five items measuring personalization (e.g., “I feel I treat some of the clients I serve as impersonal objects”—reverse coded) and six items measuring personal accomplishment (e.g., “I have accomplished many worthwhile things in this job”). Functional climates are those in which clinicians can get their job done effectively. Functionality includes three subscales: five items measuring growth and achievement (e.g., “This agency provides numerous opportunities to advance if you work for it”), six items measuring role clarity (e.g., “My job responsibilities are clearly defined”), and four items measuring cooperation (e.g., “There is a feeling of cooperation among my coworkers”). Stress includes three subscales: six items measuring emotional exhaustion (e.g., “I feel like I am at the end of my rope”), seven items measuring role conflict (e.g., “Interests of the clients are often replaced by bureaucratic concerns—e.g., paperwork”), and seven items measuring role overload (e.g., “The amount of work I have to do keeps me from doing a good job”). In our sample, all dimensions of organizational climate demonstrated acceptable internal consistency (α engagement = 0.78; α functionality = 0.91; α stress = 0.91).

Analytic Plan

Analyses were conducted by the OSC development team. Hierarchical linear models (HLM) were used to estimate the relationship between type of respondent (i.e., administrator versus clinician) and reports of organizational culture and climate (Hedeker and Gibbons 2006; Raudenbush and Bryk 2002). Because the number of leaders in our sample was small, we compared clinicians to all administrators combined (i.e., supervisors, clinical directors, and executive directors). The HLM models accounted for the nested structure of the data by incorporating two levels, with respondents (Level 1) nested within programs (Level 2). All models were estimated using HLM 6 software (Raudenbush 2004). Three sets of analyses were conducted. To address Hypothesis 1, we ran a set of HLM analyses to ascertain the effect of the predictor variable (respondent type) on each of the six dimensions of organizational culture and climate as the dependent variables (6 models). To address Hypothesis 2, we repeated these analyses stratified by program size to examine the impact of position type on the outcomes of interest (12 models—6 for small programs; 6 for large programs). Also as part of Hypothesis 2, we ran a set of analyses that included position type, program size, and their interaction term (i.e., position type × program size) to examine whether associations between position type and culture and climate differed significantly across strata of program size. Individual responses, rather than normed T-scores, were used for organizational culture and climate so that differences between respondent types could be modeled. We did not adjust for multiple comparisons across models given the exploratory nature of the analyses (Rothman 1990).
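The two-level structure above can be illustrated with a minimal sketch; the data are simulated (including a built-in 4-point administrator advantage echoing Hypothesis 1), and Python's statsmodels MixedLM stands in for the HLM 6 software the analyses actually used, so this is only an illustration of the model structure, not the authors' code.

```python
# Simulated illustration of the two-level (respondents-within-programs)
# random-intercept model described above. All data are simulated;
# statsmodels' MixedLM substitutes here for HLM 6.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for program in range(28):                      # 28 outpatient programs
    program_effect = rng.normal(0, 3)          # Level-2 random intercept
    for _ in range(9):                         # clinicians (is_admin = 0)
        rows.append((program, 0, 60 + program_effect + rng.normal(0, 5)))
    for _ in range(3):                         # administrators (is_admin = 1)
        # Simulate a +4-point administrator advantage (cf. Hypothesis 1).
        rows.append((program, 1, 64 + program_effect + rng.normal(0, 5)))
df = pd.DataFrame(rows, columns=["program", "is_admin", "proficiency"])

# Random intercept for program; respondent type as the Level-1 predictor.
model = smf.mixedlm("proficiency ~ is_admin", df, groups=df["program"]).fit()
print(round(model.params["is_admin"], 2))      # estimated admin-clinician gap
```

The stratified and interaction analyses would follow the same pattern, either subsetting `df` by a size indicator or adding `is_admin * large_program` to the formula.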

Results

Table 1 provides demographic information about the participants. Tables 2 and 3 present the results of the HLM analyses investigating concordance of administrators’ and clinicians’ reports of the three components of organizational culture (proficiency, rigidity, resistance) and climate (functionality, engagement, stress) by program size (i.e., small, large).

Table 1 Administrator (N = 73) and clinician (N = 247) demographics
Table 2 Administrators’ and clinicians’ ratings of organizational culture
Table 3 Administrators’ and clinicians’ ratings of organizational climate

Organizational Culture

Proficiency

The overall model was significant, suggesting that administrators rate proficiency 4.03 points higher (6.5%) than clinicians in general (p < .001). In large programs, administrators rated proficiency 6.32 points (10.3%) higher than clinicians (p < .001). No significant differences were observed between administrators and clinicians in small programs. An interaction was observed such that the mean difference between administrators and clinicians in small programs was significantly different when compared to the mean difference between administrators and clinicians in large programs (p = .04).

Rigidity

The overall model was significant, suggesting administrators rate rigidity 1.91 points (−4.4%) lower than clinicians in general (p = .01). In large programs, administrators rated rigidity 2.93 points (−6.8%) lower than clinicians (p = .01). No significant difference was observed between administrators and clinicians in small programs. The interaction was not significant.

Resistance

The overall model was not significant, suggesting that administrators do not rate resistance differently than clinicians. There was no significant difference between administrators and clinicians in either small or large programs.

Organizational Climate

Engagement

The overall model was not significant, suggesting that administrators do not rate engagement differently than clinicians in general. In large programs, however, administrators rated engagement 2.48 points (5.6%) higher than clinicians (p = .01). No significant difference was observed between administrators and clinicians in small programs. The interaction was not significant.

Functionality

The overall model was significant, suggesting that administrators rate functionality 2.65 points (5.0%) higher than clinicians in general (p = .03). In large programs, administrators rated functionality 6.96 points (13.5%) higher than clinicians (p < .001). There was no significant difference within smaller programs. An interaction was observed such that the mean difference between administrators and clinicians in small programs was significantly different from the mean difference between administrators and clinicians in large programs (p < .001).

Stress

The overall model was not significant, suggesting that administrators do not rate stress differently than clinicians. There were no significant differences between administrators and clinicians in small and large programs. However, an interaction was observed such that the mean difference between administrators and clinicians in small programs was significantly different when compared to the mean difference between administrators and clinicians in large programs (p = .04).

Discussion

This study examined the concordance between administrators’ and clinicians’ reports of organizational culture and climate in a large public behavioral health system. Findings were consistent with our hypotheses and with previous literature (e.g., Wolf et al. 2014): administrators tended to rate organizational culture and climate more positively than clinicians. Specifically, administrators reported their organizations to be more proficient, less rigid, and more functional than clinicians did, largely corroborating other literature demonstrating the lack of agreement between leaders’ and front-line workers’ reports of various organizational constructs (e.g., Aarons et al. 2015; Carlfjord et al. 2010; Hansen et al. 2011; Hasson et al. 2012). These findings shed light on a potential lack of concordance between administrator and clinician perceptions of organizational culture and climate, which may have important implications for research design, the validity of inferences, and potential organizational interventions.

Findings indicated that administrators and clinicians in smaller programs provided more concordant reports of culture and climate than administrators and clinicians in larger programs. Administrators in large programs rated proficiency, a domain of organizational culture, more positively than clinicians did; this phenomenon was not observed in small programs. Because the referent for culture items on the OSC is the shared work environment, the discrepancy between administrators and clinicians in large programs suggests administrators may not be accurate raters of organizational culture as experienced by clinicians. However, because administrators in smaller programs may work alongside clinicians, they may more accurately report the organizational culture as it is experienced by clinicians. By definition, organizational culture is a socially constructed and shared feature of the work environment; consequently, accuracy in rating culture is in the collective eyes of the beholders.

Similarly, administrators in large programs rated functionality and stress more positively, two domains of organizational climate, in their organizational unit compared to clinicians; this was not the case in small programs. Because the referent for climate items on the OSC is an individual’s personal experience (e.g., “I feel like I am at the end of my rope”), this finding suggests administrators and clinicians may have qualitatively different experiences in their work environment and that clinicians experience more stress and less functionality than those in upper management (Glisson et al. 2008a). This is important to attend to given the high burnout and turnover rates of clinicians in public behavioral health systems (Aarons and Sawitzky 2006a; Aarons et al. 2011; Beidas et al. 2016).

Structural distance, or the physical distance, perceived social distance, and perceived interaction frequency between leadership and front-line providers (Antonakis and Atwater 2002; Avolio et al. 2004), is one program characteristic that may explain the observed pattern of results. Structural distance is likely more prominent in large programs, where administrators may be located physically farther away, may be perceived as less similar to front-line workers, and may interact with them less frequently (Beidas et al. 2014). This may negatively impact the ability of administrators to report on organizational culture and climate in a way that corroborates the experience of clinicians. Further inquiry explicitly measuring structural distance and its relationship to the concordance of administrators’ and clinicians’ ratings of organizational culture and climate is needed.

One implication of this study is the potential for increasing the concordance of administrators’ and clinicians’ culture and climate reports in large programs by changing the item referent for administrators. Rather than asking an administrator to describe her or his perception of the work environment, items might be re-worded to elicit their perceptions of clinicians’ experience of the work environment, consistent with a referent-shift consensus approach (i.e., referring to the group as the referent rather than the individual; Chan 1998). This is especially relevant for climate items where the referent is the person’s individual perception of the work environment (e.g., “I get the cooperation I need to do my job”). Anecdotally, this strategy has been used in consulting with mental health service organizations by asking administrators to describe how clinicians experience their cultures and climates and then comparing administrators’ responses to actual aggregate results from clinicians. Additional psychometric work is needed to assess the validity of inferences based on measures which ask administrators to rate their programs’ culture and climate as they believe clinicians would.

There are a number of limitations of this study. Organizational size is difficult to measure (Kimberly 1976), and we used the number of clients served in each program as a proxy for size. However, we also explored the data using the number of clinicians employed within the program as a proxy for organizational size and obtained consistent results. Data from a national sample of community-based mental health organizations that serve children (Schoenwald et al. 2008) suggest that the majority of community-based organizations are part of larger entities that operate in multiple sites, which raises the possibility of a confound of leader, size, and program that could account for the lack of agreement in reports (e.g., a leader in charge of multiple programs may not have had the particular program in mind as the referent). We did not include measures of structural distance. We also did not examine how discrepancies related to other outcomes such as staff turnover intentions or turnover; future studies should examine whether discrepancies relate to important service and clinical outcomes. Because of the small number of administrators, we compared clinicians to all administrators combined, but there may be differences among supervisors, clinical directors, and executive directors given their managerial levels (Aarons et al. 2014). The cross-sectional design precludes causal inferences. We may have been underpowered, but compared to other studies in the literature, the number of programs represented was a relative strength (e.g., Glisson et al. 2016). Finally, not all organizations or clinicians employed in the programs participated. However, the response rates are quite high compared to typical survey response rates in studies of this kind (i.e., 27%; Cunningham et al. 2015). Anecdotal feedback from the present study suggests that lack of time and survey burden, rather than some other systematic bias, drove nonparticipation. Furthermore, a recent study examining low response rates in surveys failed to find bias in multivariate analyses that controlled for background variables (Rindfuss et al. 2015), somewhat mitigating these concerns.

Despite the limitations noted, the findings provide important information for future inquiry into the measurement of organizational culture and climate in public behavioral health settings and beyond. One particularly important next step is research on how lack of concordance between administrators’ and clinicians’ reports of organizational culture and climate affects outcomes (Hasson et al. 2012). The organizational literature suggests potential harmful sequelae of disagreement on these constructs between leaders and front-line employees, including poor individual- and organizational-level outcomes (Yammarino and Atwater 1997). Specifically, concordance between leadership and front-line employees is an important indicator of team effectiveness, suggesting a potential influence on work-unit functioning (Bashshur et al. 2011; Cole et al. 2013; Gibson et al. 2009). Further, greater discrepancies between administrator and clinician reports of the administrator’s behavior have been found to be associated with more negative organizational culture (Aarons et al. 2015). Additionally, research has found a more positive organizational climate for implementation when administrators rate themselves lower on implementation leadership relative to clinician ratings (Aarons et al. 2016). Importantly, this type of disagreement can be harnessed to intervene with administrators by providing feedback on the discrepancies between their perspectives and those of their front-line workers (Van Velsor et al. 1993), thus potentially improving individual- and organizational-level outcomes.