Abstract
Organizational culture and climate are important determinants of behavioral health service delivery for youth. The Organizational Social Context measure is a well-validated assessment of organizational culture and climate that has been developed and extensively used in public-sector behavioral health service settings. The degree of concordance between administrators and clinicians in their reports of organizational culture and climate may have implications for research design, inferences, and organizational intervention. However, the extent to which administrators’ and clinicians’ reports demonstrate concordance is just beginning to garner attention in public behavioral health settings in the United States. We investigated the concordance between 73 administrators (i.e., supervisors, clinical directors, and executive directors) and 247 clinicians in 28 child-serving programs in a public behavioral health system. Findings suggest that administrators, compared to clinicians, reported more positive cultures and climates. Organizational size moderated this relationship such that administrators in small programs (<466 youth clients served annually) provided reports of culture and climate more congruent with those of clinicians, whereas administrators in large programs (≥466 youth clients served annually) reported more positive cultures and climates than clinicians. We propose a research agenda that examines the effect of concordance between administrators and clinicians on organizational outcomes in public behavioral health service settings.
Introduction
Research suggests the importance of organizational characteristics as determinants of behavioral health service delivery for youth in the community (e.g., Aarons and Sawitzky 2006a, b; Glisson 2002; Glisson et al. 2010; Hoagwood et al. 2001; Rogers 2003). Among organizational factors, organizational culture and climate have been found to be particularly important. Although definitions of organizational culture and climate are variable (Verbeke et al. 1998), organizational culture can be defined as shared employee perceptions of norms, values, behavioral expectations, and assumptions that guide employee behavior (Cooke and Rousseau 1988), whereas organizational climate refers to shared employee perceptions regarding the effect of the work environment on employees’ personal well-being (i.e., molar organizational climate; James et al. 1978)¹ and specific strategic or procedural outcomes (i.e., strategic climate; Ehrhart et al. 2014; Schneider et al. 2013). In children’s service systems, several domains of organizational culture and organizational climate have been associated with a host of important outcomes including clinician turnover (Aarons and Sawitzky 2006a; Glisson 2002; Glisson et al. 2008b), service quality (Glisson and Hemmelgarn 1998; Olin et al. 2014), youth behavioral health outcomes (Glisson and Green 2011; Glisson and Hemmelgarn 1998; Williams and Glisson 2014), clinician attitudes toward adopting evidence-based practices (Aarons and Sawitzky 2006b), self-reported implementation of evidence-based strategies (Beidas et al. 2015), and sustainment of new practices (Glisson et al. 2008b). The most commonly used measure of organizational culture and organizational climate in public children’s behavioral health and child welfare systems is the Organizational Social Context (OSC) measure (Glisson et al. 2008a, b, 2012).
Several decades of research on organizational culture and climate have produced important advances in the quantitative measurement of these constructs (Klein and Kozlowski 2000; Zyphur et al. 2016). One point of consensus is that organizational culture and climate are socially constructed, shared characteristics of the work environment (Ostroff et al. 2003; Verbeke et al. 1998). This underscores the importance of members of an organizational unit (e.g., organization, program, team) being in agreement with respect to their experience of culture and climate (Klein and Kozlowski 2000), and it implies that valid and reliable inferences about an organization’s culture and climate require confidence in the extent to which observed scores reflect a shared reality among employees.
Important differences have emerged in the quantitative strategies used to measure organizational culture and climate, and these differences carry implications for research design, the validity of inferences, and potential organizational interventions. Guidelines for some measures instruct users to survey front-line employees from the targeted work unit(s) (i.e., organization). Mid-level managers and upper leadership are typically excluded from completing measures of culture and climate because the experiences of front-line service providers are believed to be most germane for shaping service delivery processes. In contrast, guidelines for other culture and climate measures instruct users to sample only higher-level managers, administrators, or other leaders who are conceptualized as knowledgeable key informants (e.g., Cameron and Quinn 2011). This approach sometimes incorporates other key informants along with leaders into a consensus-building process for developing culture and climate ratings; key informants are construed as accurate raters of unit-level constructs. A third, hybrid approach samples both administrators and clinicians and combines ratings of culture and climate from the two groups (e.g., Beidas et al. 2015). This approach relies on the assumption that administrators and clinicians provide equally useful and concordant information regarding an organization’s culture and climate.
Questions about whether leadership and front-line employees share perceptions of organizational culture and climate have emerged from this third approach. A number of terms have been used to describe this concept, including concordance², agreement, discrepancy, and perceptual distance (see Gibson et al. 2009; Hasson et al. 2016). Empirical work in the broader organizational literature indicates a lack of concordance between leader and front-line employee ratings of organizational culture (Martin et al. 2006; Zyphur et al. 2016) in hospitals (Hansen et al. 2011; Hartmann et al. 2008; Rosen et al. 2010; Singer et al. 2009) and primary care clinics (Carlfjord et al. 2010). Importantly, emerging research suggests that investigating concordance between leadership and front-line employee reports of organizational constructs is critical because the degree of concordance may influence important organizational outcomes. For example, greater concordance between leadership and employees prior to an organizational intervention was associated with better organizational outcomes (Hasson et al. 2016). Given meta-analytic evidence that organizational culture does not exert the same effect in all industries (Hartnell et al. 2011), it is important to explore questions of concordance in specific industries.
To date, the concordance between administrator and clinician reports of organizational culture and climate in public behavioral health settings has been infrequently studied. A case study describes a potential lack of concordance between administrators and clinicians in one organization, suggesting that administrators may rate their organization as having a more positive culture and climate than clinicians do (Wolf et al. 2014). Because this was explored in only one organization, statistical comparisons were not possible. In another study, over 60% of teams composed of administrators and clinicians showed significant variability in concordance on administrator leadership ratings (Aarons et al. 2016). Additionally, lack of concordance between administrators and clinicians was associated with decrements in organizational climate and culture (Aarons et al. 2015). These emerging data suggest that perceptions of those in different positions within behavioral health settings, even within the same organization, may vary and that lack of agreement has implications for organizational functioning and implementation of innovations. Understanding this phenomenon is critical given research indicating the importance of concordance between leaders and front-line employees for effective work-unit functioning (Bashshur et al. 2011; Cole et al. 2013; Gibson et al. 2009).
The purpose of the current study is to examine the concordance between administrators’ (i.e., supervisors, clinical directors, and executive directors) and clinicians’ reports of organizational culture and climate in a large public behavioral health system. There are three potential scenarios: (a) administrators perceive their organizational culture and climate to be more positive than clinicians do, (b) administrators and clinicians agree in their perceptions of organizational culture and climate, and (c) administrators perceive their organizational culture and climate to be more negative than clinicians do (Fleenor et al. 1996; Yammarino and Atwater 1997). Based on the previous literature, we posit the following hypotheses:
Hypothesis 1
Generally, administrators will rate characteristics of their organizational culture and climate more positively compared to clinicians.
Hypothesis 2
Administrators and clinicians in smaller programs will provide more concordant reports of organizational culture and climate compared to administrators and clinicians in larger programs.
Method
Agencies and Participants
We purposively sampled from 29 child-serving public behavioral health organizations in the City of Philadelphia as part of a larger study investigating implementation of evidence-based practices (Beidas et al. 2013). These 29 organizations were selected out of approximately 100 because they served the largest proportion of youth through their outpatient behavioral health programs. To be eligible, agencies could not be specialty clinics (e.g., autism clinics) and must have billed for providing outpatient services to youth in the prior year. Of these 29 organizations, 21 (72%)³ agreed to participate, representing 28 outpatient programs (several organizations had more than one site). Approximately 58% of clinicians employed by the 28 programs participated in the study [more than half (57%) of programs had a response rate of 60% or higher; few programs (<15%) had response rates below 40%].⁴ We invited all therapists who provided services to youth through the outpatient program to participate, regardless of whether they were implementing an evidence-based practice. The work-group unit of interest comprised therapists employed within the outpatient program who served youth and families. In some agencies there was a specific child outpatient program, whereas in others youth and families were seen within a general outpatient program. The final sample included 22 organizations representing 28 outpatient programs, 73 administrators (i.e., 22 supervisors, 29 clinical directors, and 22 executive directors), and 247 clinicians.
Procedure
This study was approved by the University of Pennsylvania and City of Philadelphia Institutional Review Boards. The person identified as the leader of the organization was approached to solicit his/her organization’s participation. A one-time 2-h meeting was scheduled with potential participants from the outpatient program, during which lunch was provided. When there was more than one program and/or site within an organization, separate meetings were held at the location of each program. During this meeting, the research team gave an overview of the study, obtained written informed consent, and collected a measure of organizational culture and climate. Clinicians completed the measure together in one location; all administrators completed the measure separately. All were assured of the confidentiality of their responses. All participants were compensated $50 for participation in the study.
Measures
Participant Demographics
Participants were designated as administrators (i.e., executive directors, clinical directors, and supervisors) or clinicians. This designation was based upon organizational structure reported by organizational leadership. Participants also completed a brief demographics questionnaire including age, race/ethnicity, educational background, and years of experience (Weisz 1997).
Organizational Demographics
Organization size was determined from the number of unique child clients seen in each outpatient program in 2014 (data provided by the City of Philadelphia Community Behavioral Health; Sarah Chen, PhD, MSW, personal communication, October 30th, 2014). Thus, we characterized size at the program level rather than the organizational level. In the case of organizations with multiple programs (i.e., providing outpatient services in multiple sites), we divided the number of clients by the number of sites where the programs were offered. We split the programs at the median (range = 186–2294 youth) to categorize them as small (serving fewer than 466 clients; k = 14) or large (serving 466 clients or more; k = 14). Consistent with previous studies (Aarons et al. 2009), each program (k = 28), rather than each organization (k = 22), was treated as a distinct unit because of largely different leadership structures, locations, and staff.
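As a concrete illustration of the size-categorization procedure described above, the following sketch divides a multi-site organization’s annual client count evenly across its sites and then median-splits the resulting program-level counts. The organization names and client counts here are hypothetical, and the cutoff is derived from the toy data rather than being the study’s actual 466-client median.

```python
# Hypothetical illustration of the program-size categorization described
# above: multi-site organizations' annual client counts are divided evenly
# across sites, then programs are median-split into "small" and "large".
from statistics import median

# (organization, annual unique child clients, number of outpatient sites)
# -- made-up numbers for illustration only
orgs = [
    ("Org A", 1200, 2),
    ("Org B", 300, 1),
    ("Org C", 2294, 1),
    ("Org D", 186, 1),
]

# Program-level counts: divide clients evenly across an organization's sites
programs = []
for name, clients, sites in orgs:
    per_site = clients / sites
    programs.extend((f"{name} site {i + 1}", per_site) for i in range(sites))

# Median split, mirroring the "466 or more = large" convention in the study
cutoff = median(count for _, count in programs)
sizes = {name: ("large" if count >= cutoff else "small")
         for name, count in programs}
```

With these toy numbers the cutoff is 600, so both Org A sites and Org C are classified as large, while Org B and Org D are classified as small.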
Organizational Social Context Measurement System (OSC) (Glisson et al. 2008a, b, 2012)
The OSC includes 105 items measuring six dimensions that were developed in multiple validity and reliability studies over three decades to assess organizational culture and organizational climate in mental health and social service organizations (Glisson et al. 2008a). The organizational culture and climate scales are profiled using T-scores, with a mean of 50 and a standard deviation of 10, established from a normative sample of 100 mental health organizations nationally (Glisson et al. 2008a).
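The T-score profiling mentioned above is the standard linear transformation of a raw scale score against a normative mean and standard deviation, so that the normative mean maps to 50 and each normative standard deviation to 10 points. A minimal sketch follows; the normative values used here are invented for illustration and are not the OSC’s actual norms.

```python
def t_score(raw, norm_mean, norm_sd):
    """Standard T-score: the normative mean maps to 50, each normative SD to 10 points."""
    return 50 + 10 * (raw - norm_mean) / norm_sd

# Hypothetical normative values for a single scale (illustration only)
assert t_score(3.5, 3.5, 0.5) == 50.0   # a score at the normative mean
assert t_score(4.0, 3.5, 0.5) == 60.0   # one normative SD above the mean
assert t_score(3.0, 3.5, 0.5) == 40.0   # one normative SD below the mean
```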
Organizational culture is defined as the expectations that drive the way work is done in an organization and includes three primary dimensions: proficiency, rigidity, and resistance. Proficient cultures are those in which clinicians prioritize the well-being of clients and are expected to be competent. Proficiency includes two subscales: seven items measuring responsiveness (e.g., “Members of my organizational unit are expected to be responsive to the needs of each client”) and eight items measuring competence (e.g., “Members of my organizational unit are expected to have up-to-date knowledge”). Rigid cultures are those in which clinicians have little autonomy. Rigidity includes two subscales: seven items measuring centralization (e.g., “I have to ask a supervisor or coordinator before I do almost anything”) and seven items measuring formalization (e.g., “The same steps must be followed in processing every piece of work”). Resistant cultures are those in which clinicians are expected to show little interest in new ways to provide services and to suppress efforts for change. Resistance includes two subscales: six items measuring apathy (e.g., “Members of my organizational unit are expected to not make waves”) and seven items measuring suppression (e.g., “Members of my organizational unit are expected to be critical”). In our sample, all dimensions of organizational culture demonstrated acceptable internal consistency (α proficiency = 0.91; α rigidity = 0.70; α resistance = 0.91).
Organizational climate refers to shared perceptions among workers in the same organization of the psychological impact of their work environment on their well-being and functioning (i.e., molar organizational climate). The OSC measures climate on three dimensions: engagement, functionality, and stress. Engaged climates are those in which clinicians feel they can accomplish worthwhile things and remain invested. Engagement includes two subscales: five items measuring personalization (e.g., “I feel I treat some of the clients I serve as impersonal objects”—reverse coded) and six items measuring personal accomplishment (e.g., “I have accomplished many worthwhile things in this job”). Functional climates are those in which clinicians can get their jobs done effectively. Functionality includes three subscales: five items measuring growth and achievement (e.g., “This agency provides numerous opportunities to advance if you work for it”), six items measuring role clarity (e.g., “My job responsibilities are clearly defined”), and four items measuring cooperation (e.g., “There is a feeling of cooperation among my coworkers”). Stress includes three subscales: six items measuring emotional exhaustion (e.g., “I feel like I am at the end of my rope”), seven items measuring role conflict (e.g., “Interests of the clients are often replaced by bureaucratic concerns—e.g., paperwork”), and seven items measuring role overload (e.g., “The amount of work I have to do keeps me from doing a good job”). In our sample, all dimensions of organizational climate demonstrated acceptable internal consistency (α engagement = 0.78; α functionality = 0.91; α stress = 0.91).
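The internal-consistency values reported above are Cronbach’s alpha coefficients. For reference, alpha can be computed from an item-response matrix as the number of items k times (1 minus the ratio of summed item variances to total-score variance), scaled by k/(k−1). The sketch below uses simulated toy data, not the study’s responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 200 simulated respondents answering 4 intercorrelated items
# (each item = a shared "true score" plus independent noise)
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=0.5, size=(200, 4))
alpha = cronbach_alpha(items)   # highly intercorrelated items -> high alpha
```

Because the four toy items share most of their variance, the resulting alpha is high (above 0.8), analogous to the 0.78 to 0.91 values the scales achieved in this sample.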
Analytic Plan
Analyses were conducted by the OSC development team. Hierarchical linear models (HLM) were used to estimate the relationship between type of respondent (i.e., administrator versus clinician) and reports of organizational culture and climate (Hedeker and Gibbons 2006; Raudenbush and Bryk 2002). We compared clinicians to all administrators (i.e., supervisors, clinical directors, and executive directors) because of the small number of administrators in our sample. The HLM models accounted for the nested structure of the data by incorporating two levels, with respondents (Level 1) nested within programs (Level 2). All models were estimated using HLM 6 software (Raudenbush 2004). Three sets of analyses were conducted. To address Hypothesis 1, we ran a set of HLM analyses to ascertain the effect of the predictor variable (respondent type) on each of the six dimensions of organizational culture and climate as the dependent variables (6 models). To address Hypothesis 2, we repeated these analyses stratified by program size to examine the impact of position type on the outcomes of interest (12 models: 6 for small programs and 6 for large programs). Also as part of Hypothesis 2, we ran a set of analyses that included position type, program size, and their interaction term (i.e., position type × program size) to examine whether associations between position type and culture and climate differed significantly across strata of program size. Note that individual respondents’ raw responses, rather than the normed T-scores, were used for organizational culture and climate to allow for study of the question of interest. We did not adjust for multiple models given the exploratory nature of the analyses (Rothman 1990).
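The two-level models described above can be sketched as mixed-effects regressions in which respondent type, program size, and their interaction predict a culture or climate score, with a random intercept for program. The minimal illustration below uses statsmodels on simulated data; the variable names and simulated effect sizes are ours, and the study itself fit its models in HLM 6 rather than Python.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 28 programs with 10 respondents each. Administrators rate the
# (made-up) "proficiency" score higher than clinicians, more so in large
# programs -- mimicking the interaction pattern tested in the study.
rows = []
for program in range(28):
    large = program % 2                      # half small, half large programs
    program_effect = rng.normal(scale=3)     # random program-level intercept
    for _ in range(10):
        admin = int(rng.random() < 0.25)     # ~25% administrators
        score = 60 + program_effect + admin * (2 + 4 * large) + rng.normal(scale=5)
        rows.append({"program": program, "admin": admin,
                     "large": large, "proficiency": score})
df = pd.DataFrame(rows)

# Random-intercept model with the respondent-type x program-size interaction
model = smf.mixedlm("proficiency ~ admin * large", df, groups=df["program"])
result = model.fit()
print(result.summary())
```

A significant positive `admin:large` coefficient in such a model would correspond to the study’s finding that the administrator versus clinician gap is wider in large programs.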
Results
Table 1 provides demographic information about the participants. Tables 2 and 3 present the results of the HLM analyses investigating concordance of administrators and clinicians report of the three components of organizational culture (proficiency, rigidity, resistance) and climate (functionality, engagement, stress) by program size (i.e., small, large).
Organizational Culture
Proficiency
The overall model was significant, suggesting that administrators rate proficiency 4.03 points higher (6.5%) than clinicians in general (p < .001). In large programs, administrators rated proficiency 6.32 points (10.3%) higher than clinicians (p < .001). No significant differences were observed between administrators and clinicians in small programs. An interaction was observed such that the mean difference between administrators and clinicians in small programs was significantly different when compared to the mean difference between administrators and clinicians in large programs (p = .04).
Rigidity
The overall model was significant, suggesting administrators rate rigidity 1.91 points (−4.4%) lower than clinicians in general (p = .01). In large programs, administrators rated rigidity 2.93 points (−6.8%) lower than clinicians (p = .01). No significant difference was observed between administrators and clinicians in small programs. The interaction was not significant.
Resistance
The overall model was not significant, suggesting that administrators do not rate resistance differently than clinicians. There was no significant difference between administrators and clinicians in either small or large programs.
Organizational Climate
Engagement
The overall model was not significant, suggesting that administrators do not rate engagement differently than clinicians overall. In large programs, however, administrators rated engagement 2.48 points (5.6%) higher than clinicians (p = .01). No significant difference was observed between administrators and clinicians in small programs. The interaction was not significant.
Functionality
The overall model was significant, suggesting that administrators rate functionality 2.65 points (5.0%) higher than clinicians in general (p = .03). In large programs, administrators rated functionality 6.96 points (13.5%) higher than clinicians (p < .001). There was no significant difference within smaller programs. An interaction was observed such that the mean difference between administrators and clinicians in small programs was significantly different when compared to the mean difference between administrators and clinicians in large programs (p < .001).
Stress
The overall model was not significant, suggesting that administrators do not rate stress differently than clinicians. There were no significant differences between administrators and clinicians in small and large programs. However, an interaction was observed such that the mean difference between administrators and clinicians in small programs was significantly different when compared to the mean difference between administrators and clinicians in large programs (p = .04).
Discussion
This study examined the concordance between administrators’ and clinicians’ reports of organizational culture and climate in a large public behavioral health system. Findings were consistent with our hypotheses and with previous literature (e.g., Wolf et al. 2014): administrators tended to rate organizational culture and climate more positively than clinicians did. Specifically, administrators reported their organizations to be more proficient, less rigid, and more functional than clinicians did, largely corroborating other literature demonstrating the lack of agreement between leader and front-line worker reports of various organizational constructs (e.g., Aarons et al. 2015; Carlfjord et al. 2010; Hansen et al. 2011; Hasson et al. 2012). These findings shed light on a potential lack of concordance between administrator and clinician perceptions of organizational culture and climate, which may have important implications for research design, the validity of inferences, and potential organizational interventions.
Findings indicated that administrators and clinicians in smaller programs provided more concordant reports of culture and climate than did administrators and clinicians in larger programs. Administrators in large programs rated proficiency, a domain of organizational culture, more positively than clinicians did; this phenomenon was not observed in small programs. Because the referent for culture items on the OSC is the shared work environment, the discrepancy between administrators and clinicians in large programs suggests administrators may not be accurate raters of organizational culture as experienced by clinicians. In contrast, because administrators in smaller programs may work alongside clinicians, they may more accurately report the organizational culture as it is experienced by clinicians. By definition, organizational culture is a socially constructed and shared feature of the work environment; consequently, accuracy in rating culture is in the collective eyes of the beholders.
Similarly, administrators in large programs rated functionality, a domain of organizational climate, more positively than clinicians did; this was not the case in small programs. For stress, another climate domain, the gap between administrator and clinician ratings also varied significantly with program size. Because the referent for climate items on the OSC is an individual’s personal experience (e.g., “I feel like I am at the end of my rope”), these findings suggest administrators and clinicians may have qualitatively different experiences in their work environment and that clinicians experience more stress and less functionality than those in upper management (Glisson et al. 2008a). This is important to attend to given the high burnout and turnover rates of clinicians in public behavioral health systems (Aarons and Sawitzky 2006a; Aarons et al. 2011; Beidas et al. 2016).
Structural distance, or the physical distance, perceived social distance, and perceived interaction frequency between leadership and front-line providers (Antonakis and Atwater 2002; Avolio et al. 2004), is one program characteristic that may explain the pattern of observed results. Structural distance is likely more prominent in large programs, where administrators may be physically located farther away, may be perceived as less similar to front-line workers, and may interact with them less frequently (Beidas et al. 2014). This may negatively impact the ability of administrators to report on organizational culture and climate in a way that corroborates the experience of clinicians. Further inquiry explicitly measuring structural distance and its relationship to the concordance of administrators’ and clinicians’ ratings of organizational culture and climate is needed.
One implication of this study is the potential for increasing the concordance of administrators’ and clinicians’ culture and climate reports in large programs by changing the item referent for administrators. Rather than asking an administrator to describe her or his perception of the work environment, items might be re-worded to elicit their perceptions of clinicians’ experience of the work environment, consistent with a referent-shift consensus approach (i.e., referring to the group as the referent rather than the individual; Chan 1998). This is especially relevant for climate items where the referent is the person’s individual perception of the work environment (e.g., “I get the cooperation I need to do my job”). Anecdotally, this strategy has been used in consulting with mental health service organizations by asking administrators to describe how clinicians experience their cultures and climates and then comparing administrators’ responses to actual aggregate results from clinicians. Additional psychometric work is needed to assess the validity of inferences based on measures which ask administrators to rate their programs’ culture and climate as they believe clinicians would.
There are a number of limitations to this study. Organizational size is difficult to measure (Kimberly 1976), and we used the number of clients served in each program as a proxy for size. However, we also explored the data using the number of clinicians employed within each program as a proxy for organizational size and obtained consistent results. Data from a national sample of community-based mental health organizations that serve children (Schoenwald et al. 2008) suggest that the majority of community-based organizations are part of larger entities that operate in multiple sites, so there may be a confound of leader, size, and program that could account for the lack of agreement in reports (e.g., a leader may be in charge of multiple programs and may not have the particular program in mind as the referent). We did not include measures of structural distance. We also did not examine how discrepancies related to other outcomes such as staff turnover intentions or actual turnover; future studies should examine whether discrepancies relate to important service and clinical outcomes. We compared clinicians to all administrators because of the small number of administrators in our sample, but there may be differences among supervisors, clinical directors, and executive directors given their managerial levels (Aarons et al. 2014). The cross-sectional design precludes causal inferences. We may have been underpowered, although the number of programs represented was a relative strength compared to other studies in the literature (e.g., Glisson et al. 2016). Finally, not all organizations or clinicians employed in the programs participated. However, the response rates are quite high compared to typical survey response rates in studies of this kind (i.e., 27%; Cunningham et al. 2015). Anecdotal feedback from the present study suggests that lack of time and survey burden, rather than some other systematic bias, drove nonparticipation.
Furthermore, a recent study examining low response rates in surveys failed to find bias in multivariate analysis that controlled for background variables (Rindfuss et al. 2015) thus somewhat mitigating these concerns.
Despite the limitations noted, the findings provide important information for future inquiry into the measurement of organizational culture and climate in public behavioral health settings and beyond. One particularly important next step is research on how lack of concordance between administrators’ and clinicians’ reports of organizational culture and climate affects outcomes (Hasson et al. 2012). The organizational literature suggests potentially harmful sequelae of disagreement on these constructs between leaders and front-line employees, including poor individual- and organizational-level outcomes (Yammarino and Atwater 1997). Specifically, concordance between leadership and front-line employees is an important indicator of team effectiveness, suggesting its potential influence on work-unit functioning (Bashshur et al. 2011; Cole et al. 2013; Gibson et al. 2009). Further, greater discrepancies in administrator and clinician reports of the administrator’s behavior have been found to be associated with more negative organizational culture (Aarons et al. 2015). Additionally, research has found a more positive organizational climate for implementation when administrators rate themselves lower on implementation leadership relative to clinician ratings (Aarons et al. 2016). Importantly, this type of disagreement can be harnessed as an intervention point for administrators, by providing feedback on the discrepancies between their perspectives and those of their front-line workers (Van Velsor et al. 1993), thus potentially improving individual- and organizational-level outcomes.
Notes
1. In this manuscript, we focus on molar organizational climate.
2. We use the term concordance going forward to reflect this concept.
3. The final sample included 22 organizations because one organization was included from a previous wave of data collection but was no longer included in the top 29 agencies.
4. Participation rates reflect best estimates of program staffing at the time of data collection. Total staff for each program were based on reports by clinical and/or executive directors, as the City of Philadelphia does not routinely collect data on staffing within organizations and programs. In 4 programs, clinical directors reported fewer therapists in their agency than participated in the study; in these instances, we rounded participation rates down to 100%.
References
Aarons, G. A., Ehrhart, M. G., Farahnak, L. R., & Sklar, M. (2014). Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annual Review of Public Health, 35, 255–274. doi:10.1146/annurev-publhealth-032013-182447.
Aarons, G. A., Ehrhart, M. G., Farahnak, L. R., Sklar, M., & Horowitz, J. (2015). Discrepancies in leader and follower ratings of transformational leadership: Relationship with organizational culture in mental health. Administration and Policy in Mental Health. doi:10.1007/s10488-015-0672-7.
Aarons, G. A., Ehrhart, M. G., Torres, E. L., Finn, N. K., & Beidas, R. S. (2016). The humble leader: Association of discrepancies in leader and follower ratings of implementation leadership with organizational climate in mental health organizations. Psychiatric Services. doi:10.1176/appi.ps.201600062.
Aarons, G. A., & Sawitzky, A. C. (2006a). Organizational climate partially mediates the effect of culture on work attitudes and staff turnover in mental health services. Administration and Policy in Mental Health, 33(3), 289–301. doi:10.1007/s10488-006-0039-1.
Aarons, G. A., & Sawitzky, A. C. (2006b). Organizational culture and climate and mental health provider attitudes toward evidence-based practice. Psychological Services, 3(1), 61–72. doi:10.1037/1541-1559.3.1.61.
Aarons, G. A., Sommerfeld, D. H., Hecht, D. B., Silovsky, J. F., & Chaffin, M. J. (2009). The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: Evidence for a protective effect. Journal of Consulting and Clinical Psychology, 77(2), 270. doi:10.1037/a0013223.
Aarons, G. A., Sommerfeld, D. H., & Willging, C. E. (2011). The soft underbelly of system change: The role of leadership and organizational climate in turnover during statewide behavioral health reform. Psychological Services, 8(4), 269–281.
Antonakis, J., & Atwater, L. (2002). Leader distance: A review and a proposed theory. Leadership Quarterly, 13(6), 673–704. doi:10.1016/S1048-9843(02)00155-8.
Avolio, B. J., Zhu, W. C., Koh, W., & Bhatia, P. (2004). Transformational leadership and organizational commitment: Mediating role of psychological empowerment and moderating role of structural distance. Journal of Organizational Behavior, 25(8), 951–968. doi:10.1002/job.283.
Bashshur, M. R., Hernández, A., & González-Romá, V. (2011). When managers and their teams disagree: A longitudinal look at the consequences of differences in perceptions of organizational support. Journal of Applied Psychology, 96(3), 558. doi:10.1037/a0022675.
Beidas, R. S., Aarons, G. A., Barg, F. K., Evans, A., Hadley, T., Hoagwood, K. E., … Mandell, D. S. (2013). Policy to implementation: Evidence-based practice in community mental health–study protocol. Implementation Science, 8(1), 1. doi:10.1186/1748-5908-8-38.
Beidas, R. S., Marcus, S., Aarons, G. A., Hoagwood, K. E., Schoenwald, S. K., Evans, A. C., … Mandell, D. S. (2015). Predictors of community therapists’ use of therapy techniques in a large public mental health system. JAMA Pediatrics, 169(4), 374–382. doi:10.1001/jamapediatrics.2014.3736.
Beidas, R. S., Marcus, S., Wolk, C. B., Powell, B., Aarons, G. A., Evans, A. C., … Mandell, D. S. (2016). A prospective examination of clinician and supervisor turnover within the context of implementation of evidence-based practices in a publicly-funded mental health system. Administration and Policy in Mental Health, 43(5), 640–649. doi:10.1007/s10488-015-0673-6.
Beidas, R. S., Wolk, C. L., Walsh, L. M., Evans, A. C. Jr., Hurford, M. O., & Barg, F. K. (2014). A complementary marriage of perspectives: Understanding organizational social context using mixed methods. Implementation Science, 9, 175. doi:10.1186/s13012-014-0175-z.
Cameron, K. S., & Quinn, R. E. (2011). Diagnosing and changing organizational culture: Based on the competing values framework (3 edn.). San Francisco, CA: Jossey-Bass.
Carlfjord, S., Andersson, A., Nilsen, P., Bendtsen, P., & Lindberg, M. (2010). The importance of organizational climate and implementation strategy at the introduction of a new working tool in primary health care. Journal of Evaluation in Clinical Practice, 16(6), 1326–1332. doi:10.1111/j.1365-2753.2009.01336.x.
Chan, D. (1998). Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology, 83(2), 234–246. doi:10.1037/0021-9010.83.2.234.
Cole, M. S., Carter, M. Z., & Zhang, Z. (2013). Leader-team congruence in power distance values and team effectiveness: The mediating role of procedural justice climate. Journal of Applied Psychology, 98(6), 962. doi:10.1037/a0034269.
Cooke, R. A., & Rousseau, D. M. (1988). Behavioral norms and expectations: A quantitative approach to the assessment of organizational culture. Group & Organization Studies, 13(3), 245–273. doi:10.1177/105960118801300302.
Cunningham, C. T., Quan, H., Hemmelgarn, B., Noseworthy, T., Beck, C. A., Dixon, E., … Jetté, N. (2015). Exploring physician specialist response rates to web-based surveys. BMC Medical Research Methodology, 15(1), 1. doi:10.1186/s12874-015-0016-z.
Ehrhart, M., Schneider, B., & Macey, W. H. (2014). Organizational climate and culture: An introduction to theory, research, and practice. New York, NY: Routledge.
Fleenor, J. W., McCauley, C. D., & Brutus, S. (1996). Self-other rating agreement and leader effectiveness. Leadership Quarterly, 7(4), 487–506. doi:10.1016/S1048-9843(96)90003-X.
Gibson, C. B., Cooper, C. D., & Conger, J. A. (2009). Do you see what we see? The complex effects of perceptual distance between leaders and teams. Journal of Applied Psychology, 94(1), 62. doi:10.1037/a0013073.
Glisson, C. (2002). The organizational context of children’s mental health services. Clinical Child and Family Psychology Review, 5(4), 233–253. doi:10.1023/A:1020972906177.
Glisson, C., & Green, P. (2011). Organizational climate, services, and outcomes in child welfare systems. Child Abuse & Neglect, 35(8), 582–591. doi:10.1016/j.chiabu.2011.04.009.
Glisson, C., Green, P., & Williams, N. J. (2012). Assessing the Organizational Social Context (OSC) of child welfare systems: Implications for research and practice. Child Abuse & Neglect, 36(9), 621–632. doi:10.1016/j.chiabu.2012.06.002.
Glisson, C., & Hemmelgarn, A. (1998). The effects of organizational climate and interorganizational coordination on the quality and outcomes of children’s service systems. Child Abuse & Neglect, 22(5), 401–421. doi:10.1016/S0145-2134(98)00005-2.
Glisson, C., Landsverk, J., Schoenwald, S. K., Kelleher, K., Hoagwood, K. E., Mayberg, S., … Research Network on Youth Mental Health. (2008a). Assessing the Organizational Social Context (OSC) of mental health services: Implications for research and practice. Administration and Policy in Mental Health, 35(1–2), 98–113. doi:10.1007/s10488-007-0148-5.
Glisson, C., Schoenwald, S. K., Hemmelgarn, A., Green, P., Dukes, D., Armstrong, K. S., & Chapman, J. E. (2010). Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology, 78(4), 537–550. doi:10.1037/a0019160.
Glisson, C., Schoenwald, S. K., Kelleher, K., Landsverk, J., Hoagwood, K. E., Mayberg, S., … Research Network on Youth Mental Health. (2008b). Therapist turnover and new program sustainability in mental health clinics as a function of organizational culture, climate, and service structure. Administration and Policy in Mental Health, 35(1–2), 124–133. doi:10.1007/s10488-007-0152-9.
Glisson, C., Williams, N. J., Hemmelgarn, A., Proctor, E., & Green, P. (2016). Aligning organizational priorities with ARC to improve youth mental health service outcomes. Journal of Consulting and Clinical Psychology, 84(8), 713–725. doi:10.1037/ccp0000107.
Hansen, L. O., Williams, M. V., & Singer, S. J. (2011). Perceptions of hospital safety climate and incidence of readmission. Health Services Research, 46(2), 596–616. doi:10.1111/j.1475-6773.2010.01204.x.
Hartmann, C. W., Rosen, A. K., Meterko, M., Shokeen, P., Zhao, S., Singer, S., … Gaba, D. M. (2008). An overview of patient safety climate in the VA. Health Services Research, 43(4), 1263–1284. doi:10.1111/j.1475-6773.2008.00839.x.
Hartnell, C. A., Ou, A. Y., & Kinicki, A. (2011). Organizational culture and organizational effectiveness: A meta-analytic investigation of the competing values framework’s theoretical suppositions. Journal of Applied Psychology, 96(4), 677–694. doi:10.1037/a0021987.
Hasson, H., Gilbert-Ouimet, M., Baril-Gingras, G., Brisson, C., Vezina, M., Bourbonnais, R., & Montreuil, S. (2012). Implementation of an organizational-level intervention on the psychosocial environment of work comparison of managers’ and employees’ views. Journal of Occupational and Environmental Medicine, 54(1), 85–91. doi:10.1097/JOM.0b013e31823ccb2f.
Hasson, H., von Thiele Schwarz, U., Nielsen, K., & Tafvelin, S. (2016). Are we all in the same boat? The role of perceptual distance in organizational health interventions. Stress and Health. doi:10.1002/smi.2703.
Hedeker, D., & Gibbons, R. D. (2006). Longitudinal data analysis (Vol. 451). Hoboken, NJ: Wiley.
Hoagwood, K. E., Burns, B. J., Kiser, L., Ringeisen, H., & Schoenwald, S. K. (2001). Evidence-based practice in child and adolescent mental health services. Psychiatric Services, 52(9), 1179–1189. doi:10.1176/appi.ps.52.9.1179.
James, L. R., Hater, J. J., Gent, M. J., & Bruni, J. R. (1978). Psychological climate: Implications from cognitive social-learning theory and interactional psychology. Personnel Psychology, 31(4), 783–813. doi:10.1111/j.1744-6570.1978.tb02124.x.
Kimberly, J. R. (1976). Organizational size and the structuralist perspective: A review, critique, and proposal. Administrative Science Quarterly, 21(4), 571–597. doi:10.2307/2391717.
Klein, K. J., & Kozlowski, S. W. (2000). Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions. San Francisco, CA: Jossey-Bass.
Martin, A. J., Jones, E. S., & Callan, V. J. (2006). Status differences in employee adjustment during organizational change. Journal of Managerial Psychology, 21(2), 145–162. doi:10.1108/02683940610650758.
Olin, S. S., Williams, N., Pollock, M., Armusewicz, K., Kutash, K., Glisson, C., & Hoagwood, K. E. (2014). Quality indicators for family support services and their relationship to organizational social context. Administration and Policy in Mental Health, 41(1), 43–54. doi:10.1007/s10488-013-0499-z.
Ostroff, C., Kinicki, A., & Tamkins, M. (2003). Organizational culture and climate (Vol. 12). Hoboken, NJ: Wiley.
Raudenbush, S. W. (2004). HLM 6: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (Vol. 1). Thousand Oaks, CA: Sage.
Rindfuss, R. R., Choe, M. K., Tsuya, N. O., Bumpass, L. L., & Tamaki, E. (2015). Do low survey response rates bias results? Evidence from Japan. Demographic Research, 32, 797. doi:10.4054/DemRes.2015.32.26.
Rogers, E. M. (2003). Elements of diffusion. In Diffusion of innovations (5th edn., pp. 1–38). New York, NY: Free Press.
Rosen, A. K., Singer, S., Shibei, Z., Shokeen, P., Meterko, M., & Gaba, D. (2010). Hospital safety climate and safety outcomes: Is there a relationship in the VA? Medical Care Research and Review, 67(5), 590–608. doi:10.1177/1077558709356703.
Rothman, K. J. (1990). No adjustments are needed for multiple comparisons. Epidemiology (Cambridge, Mass), 1(1), 43–46.
Schneider, B., Ehrhart, M. G., & Macey, W. H. (2013). Organizational climate and culture. Annual Review of Psychology, 64, 361–388. doi:10.1146/annurev-psych-113011-143809.
Schoenwald, S. K., Chapman, J. E., Kelleher, K., Hoagwood, K. E., Landsverk, J., Stevens, J., … Research Network on Youth Mental Health. (2008). A survey of the infrastructure for children’s mental health services: Implications for the implementation of empirically supported treatments (ESTs). Administration and Policy in Mental Health, 35(1–2), 84–97. doi:10.1007/s10488-007-0147-6.
Singer, S., Lin, S., Falwell, A., Gaba, D., & Baker, L. (2009). Relationship of safety climate and safety performance in hospitals. Health Services Research, 44(2 Pt 1), 399–421. doi:10.1111/j.1475-6773.2008.00918.x.
Van Velsor, E., Taylor, S., & Leslie, J. (1993). An examination of the relationships among self-perception accuracy, self-awareness, gender, and leader effectiveness. Human Resource Management, 32(2–3), 249–264. doi:10.1002/hrm.3930320205.
Verbeke, W., Volgering, M., & Hessels, M. (1998). Exploring the conceptual expansion within the field of organizational behaviour: Organizational climate and organizational culture. Journal of Management Studies, 35(3), 303–329. doi:10.1111/1467-6486.00095.
Weisz, J. R. (1997). Therapist background questionnaire. Los Angeles: University of California.
Williams, N. J., & Glisson, C. (2014). Testing a theory of organizational culture, climate and youth outcomes in child welfare systems: A United States national study. Child Abuse & Neglect, 38(4), 757–767. doi:10.1016/j.chiabu.2013.09.003.
Wolf, D. A. P. S., Dulmus, C., Maguin, E., Keesler, J., & Powell, B. (2014). Organizational leaders’ and staff members’ appraisals of their work environment within a children’s social service system. Human Service Organizations Management Leadership & Governance, 38(3), 215–227. doi:10.1080/23303131.2014.884032.
Yammarino, F. J., & Atwater, L. E. (1997). Do managers see themselves as others see them? Implications of self-other rating agreement for human resources management. Organizational Dynamics, 25(4), 35–44. doi:10.1016/S0090-2616(97)90035-8.
Zyphur, M. J., Zammuto, R. F., & Zhang, Z. (2016). Multilevel latent polynomial regression for modeling (in)congruence across organizational groups: The case of organizational culture research. Organizational Research Methods, 19(1), 53–79. doi:10.1177/1094428115588570.
Acknowledgements
We are especially grateful for the support that the Department of Behavioral Health and Intellectual disAbility Services has provided for this project, and for the Evidence Based Practice and Innovation (EPIC) group. We thank Charles Glisson, PhD, and Sonja Schoenwald, PhD, for their comments on earlier versions of this manuscript.
Funding
This research was supported by NIMH K23 MH099179 (Beidas).
Ethics declarations
Conflict of interest
Dr. Beidas receives royalties from Oxford University Press. Dr. Marcus has received grant support from Ortho-McNeil Janssen and Forest Research Institute and has served as a consultant to AstraZeneca and Alkermes. All other authors have no conflicts of interest to report.
Ethical Approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Cite this article
Beidas, R.S., Williams, N.J., Green, P.D. et al. Concordance Between Administrator and Clinician Ratings of Organizational Culture and Climate. Adm Policy Ment Health 45, 142–151 (2018). https://doi.org/10.1007/s10488-016-0776-8