Introduction

In the last decade of behavioral healthcare practice and research, two issues affecting the lives of those with serious mental illnesses have been highlighted: co-occurring disorders and trauma-related issues. Much has been written about the importance of these issues1–3 and about developing strategies to address them.4–7 For the most part, however, program measures and measures of consumers’ perceptions of and satisfaction with their care were developed for single-faceted services, i.e., services for mental health, substance abuse, or trauma alone rather than for services integrating all of these issues. Underlying both issues is the importance of consumer involvement in all aspects of care.

The Women, Co-Occurring Disorders and Violence Study (WCDVS) is the first large-scale study of women with co-occurring disorders and histories of abuse using behavioral healthcare services. Using a quasi-experimental design, the study tested the effectiveness of a number of innovative services integrating trauma, mental health, and substance abuse issues compared to services as usual. A unique aspect of the WCDVS was extensive consumer involvement in the design and implementation of the study.

In developing the protocol for the study, the steering committee wanted to capture the innovative nature of trauma-informed interventions that also integrate mental health and substance abuse issues and reflect the voice of the consumer/survivor/woman in recovery. No instrument could be found that measured individuals’ satisfaction with services for co-occurring disorders and/or trauma-related disorders, so the Consumer Perceptions of Care (CPC) measure was developed to do this, to serve as a program measure of the level of integration (including the trauma sensitivity of the interventions), and to capture the extent of consumer choice in the services women received.

Development of the Measure

During the first phase (1998–2000) of the WCDVS, the evaluation subcommittee developed the cross-site interview protocol to gather descriptive and outcome data on study participants. A workgroup was formed to reach consensus on questions to capture study participants’ views of the care they received during the second phase of the WCDVS (2001–2004). The workgroup was composed of staff and consultants of the federally funded coordinating center and representatives of the study sites. Members were researchers, clinicians, and C/S/R (consumer/survivor/recovering) women and represented a range of disciplines, including psychology, social work, statistics, and anthropology. The workgroup also included women with lived experience of violence and abuse and of psychiatric and substance use disorder treatment.

A pool of potential questions was developed from three sources: (1) existing instruments assessing similar constructs, primarily the Mental Health Statistics Improvement Program Adult Consumer Survey (MHSIP), various revisions of the MHSIP, and the Wisconsin Consumers Assess Their Services scale, an instrument developed by a workgroup of consumers, providers, and researchers in Wisconsin and modified by the Madison, Wisconsin WCDVS study site; (2) items proposed by workgroup members, particularly consumers, based on their past experience and expertise; and (3) information from study sites’ needs assessment activities, such as focus groups with C/S/R women about barriers to and facilitators of appropriate care and a series of focus groups with consumers held by the coordinating center during the first round of annual site visits. The workgroup followed a key recommendation of the MHSIP workgroup by relying heavily on input from consumers in selecting and framing questions.

The workgroup chose items with face validity in two broad conceptual domains: service integration (across substance abuse, mental health, and trauma) and consumer choice in services. Items for the measure were chosen to represent these domains with as little apparent overlap as possible.

Measures of clients’ perceptions of services and providers

Efforts to assess consumers’ perceptions of mental health services have focused primarily on determining client satisfaction with services. Traditionally, client satisfaction measures have been used in mental health services as an index of quality of care. Mental health services researchers have used client satisfaction as a predictor of outcomes, as a mediator, and as a program or process measure. Client satisfaction is also examined as a desirable outcome, much as it is used within programs and agencies. Very few studies of client satisfaction have been done in the substance abuse treatment area, and even fewer in the treatment of co-occurring disorders, trauma services, or the integration of these.

In mental health services, the Client Satisfaction Questionnaire (CSQ-8)8,9 or questionnaires tapping similar domains appear to be the most frequently used. Typically, the surveys ask how satisfied the consumer was with their services, if they would recommend this service to others, if they would return if needed, and how they would rate their therapist.10

The CSQ-8 was developed as a general scale for use in human service programs. Consensus of expert panels was used to generate and then narrow down the items, resulting in an essentially one-factor scale with high internal consistency (coefficient alpha = 0.93) measuring satisfaction with services in human service programs.11 Others12 have used the CSQ in conjunction with other measures to broaden the domains tapped, e.g., interpersonal aspects of care, client involvement in treatment, medication, and treatment issues.

Macnee and McCabe13 determined that homeless patients have special health needs and concerns and inductively developed a measure of satisfaction with care specifically for this population.

Studies of the predictive utility of measures of client satisfaction suggest it has a limited relationship to traditional mental health outcomes such as psychiatric symptoms. Generally, client satisfaction is not found to be closely related to the outcome of psychiatric symptoms, whether it is considered as a mediator or as an outcome measured at the same time as the psychiatric symptoms.10,14–16 It has been suggested that this lack of a relationship demonstrates that clients can differentiate between satisfaction with their care and improvement in their symptoms.11,17

Client satisfaction has been shown to be related to other mediators of outcomes, especially the relationship with and quality of staff in mental health services.18 Calsyn et al.19 found that a positive helping alliance was the most important mediator of client satisfaction for persons with serious mental illnesses. One review of the literature on psychiatric communication skills suggests that improved skills are an important influence on patients’ satisfaction.20 Consumers of mental health services often rate interpersonal processes as the most important aspect of their treatment.21,22

Abused women understandably have difficulty trusting others, and it has been suggested that a critical step in their recovery is the development of a therapeutic alliance with their service provider.2,23–25 Harris1 states that survivors especially need intensive case management and a therapeutic relationship with a case manager.

Consumer satisfaction is often used as a combination program measure and outcome measure in evaluations of mental health services. Berghofer et al.18 found no differences in client satisfaction between inpatient and outpatient settings. Others, however, have found that innovative or experimental interventions resulted in higher consumer satisfaction when compared to services as usual. For example, Boardman et al.26 found that individuals with serious mental illnesses treated in inpatient units with links to community care were more satisfied with their care than those treated in traditional acute wards. In a similar target population, clients receiving Assertive Community Treatment reported higher levels of satisfaction than those receiving Brokered Case Management.19 Similarly, Tsemberis et al.27 found that individuals with serious mental illnesses in supported housing settings expressed more satisfaction with their residence than those in more restrictive community settings.

This review of the literature indicates that little is known about client satisfaction with services for co-occurring or trauma-related disorders. The findings of Holcomb et al.28 that patients with diagnoses of personality disorder, depression, or PTSD tended to be more globally dissatisfied with treatment are suggestive, as these diagnoses are often associated with histories of abuse. Comtois et al.29 examined individuals diagnosed with borderline personality disorder and found that high utilizers of services did not differ from low utilizers on measures of client satisfaction.

There is evidence that women have unique needs in substance abuse treatment,30 but there is a dearth of research in this area.31 One study of substance abuse treatment for women with children examined the impact of health and social services matched to client-identified needs. Clients receiving more health and social services reported better outcomes, both in substance use and in satisfaction with services; matching services to needs, however, was less salient than predicted.32

This study seeks to contribute to the understanding of the assessment of consumer perceptions of care by examining the psychometric properties of the CPC, an instrument designed for use with women in behavioral healthcare settings. It can be used as a program measure by comparing the perceptions of care of women receiving services designed to integrate trauma, mental health, and substance abuse issues with those of women receiving services as usual.

While consumer satisfaction levels do not necessarily vary by race,33 they have been shown to be influenced by ethnic identity.34,35 However, because behavioral health services are provided to a multicultural population, it is important to verify that instruments designed for use with this population are reliable across race and ethnicity,36 so this is also examined here.

Methods

This analysis was conducted with data on 2,729 women who participated in the WCDVS, a nationwide longitudinal study.37 Interviewers were trained to administer the interviews in a standardized manner, and all study procedures received Institutional Review Board human subjects approval.

Participants

Participants were adult women who had experienced violence or abuse, had co-occurring mental health and substance use disorders, and were high utilizers of behavioral health services. Specifically, women had to be at least 18 years of age and have one or more DSM-IV Axis I or Axis II psychiatric diagnoses and one or more DSM-IV substance-related diagnoses. At least one of the substance use or other psychiatric diagnoses had to be current, and participants had to have had at least two previous episodes of mental health and/or substance abuse treatment. In addition, all had histories of physical and/or sexual abuse as children and/or as adults.

Participants were recruited into this quasi-experimental study through nine study sites. Women who were recruited into the study at intervention treatment sites (n = 1,415) had access to integrated, comprehensive, gender-specific, and trauma-informed services, whereas women recruited at comparison sites (n = 1,314) had access to usual care services. At each site, the intervention included eight core services, such as resource coordination and crisis intervention; staff knowledgeable about trauma; holistic treatment of mental health, trauma, and substance use issues; and the involvement of consumers in service planning and provision. The development of the integrated interventions has been described in greater detail elsewhere.38 The intervention sites chose comparison agencies that served similar clients with care-as-usual services in the same or nearby communities.39 After careful analyses of the services offered at the intervention sites, it was determined that the key element of the intervention was the integration of trauma, mental health, and substance abuse issues.40

Participant demographics are presented in Table 1. Participants averaged just over 35 years of age (mean = 35.8, SD = 8.9), ranging from 18 to 76. The majority of participants were Caucasian (50.3%), and 493 individuals (18.1%) identified themselves as Hispanic. The entire protocol, including the CPC, was translated into Spanish, and 54 interviews were conducted in Spanish; all of these were with participants who identified their ethnicity as Hispanic. Of those reporting Hispanic ethnicity, none indicated their race to be Caucasian or African American; 80.9% reported membership in another racial group, 7.5% responded that they were multiracial, and 11.6% did not specify a racial group. Relatively even proportions of participants reported that they were married or living with a significant other (38.4%), widowed/divorced (31.6%), or never married (30.0%). Nearly half (46.8%) of participants reported less than a high school education. Participants were most commonly unemployed (45.7%) and living at or below the poverty level (72%), with the majority living in a residential substance abuse treatment program (52.6%). The overwhelming majority of participants reported ever having had a child (86.7%), although fewer (59.2%) reported having custody of at least one of their children.

Table 1 Participant demographics

Measures

Consumer perceptions of care

The 26 items of the CPC measure are scored on a 4-point Likert scale ranging from strongly agree (1) to strongly disagree (4). Three missing-data response options are also included for each item: not applicable (7), refused (8), and does not know (9). All items are positive in orientation, with lower scores reflecting greater agreement. The scale is reproduced in the Appendix. To assist with the interpretability of the results, responses were reverse coded such that higher scores reflect greater agreement.
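To make the scoring convention concrete, the following is a minimal Python sketch of the recoding step; the item column names and the use of pandas are illustrative assumptions rather than part of the study protocol.

```python
import numpy as np
import pandas as pd

def score_cpc_items(responses: pd.DataFrame) -> pd.DataFrame:
    """Recode raw CPC item responses: treat 7/8/9 as missing, then reverse-code.

    Assumes hypothetical item columns named cpc_1 ... cpc_26 holding the raw
    codes 1 (strongly agree) through 4 (strongly disagree).
    """
    items = [f"cpc_{i}" for i in range(1, 27)]
    scored = responses[items].replace([7, 8, 9], np.nan)
    # Reverse-code so that higher scores reflect greater agreement (1..4 -> 4..1).
    return 5 - scored
```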

Working Alliance Inventory-short form (WAI-SF)

The WAI-SF is a 12-item measure with a 7-point, fully anchored scale designed to measure the quality of client–staff relationships.41,42 Although originally designed for research in psychotherapy, this measure has been employed successfully in studies of case managers and their clients in community treatment programs.43 Its predictive and construct validity have been established through the relationships of the WAI with treatment outcomes, including quality of life, mental health symptoms, medication compliance, and satisfaction with treatment. The internal consistency of the WAI-SF in this study was excellent (Cronbach’s alpha = 0.93).

Client Satisfaction Questionnaire (CSQ-8)

The CSQ-88 assesses clients’ general perceptions of the quality of treatment they received, the degree to which their needs were met, and whether they would return for treatment or refer others to the program. It contains eight items that are rated on a 4-point Likert scale. Examples of items are, “How satisfied were you with the amount of help you received?” and “How would you rate the quality of services you received?” The instrument is internally consistent (alpha coefficient 0.95 in this study) and has been shown to relate satisfaction to therapy outcomes.9

Procedures

All participants completed the CPC at baseline and at 3, 6, 9, and 12 months of follow-up as part of the larger survey protocol. The WCDVS steering committee conducted test–retest reliability assessments on the entire baseline protocol because the study measures (1) had been deliberately modified from their standardized forms; (2) had not been tested with this target population; and/or (3) as with the CPC, were developed specifically for this study. Test–retest reliability is an index of score consistency over a brief time period and is frequently used to assess temporal stability. Each site completed 20 retests of the entire baseline cross-site protocol (10 from the intervention group and 10 from the comparison group). The retest interval ranged from 4 to 14 days after the original baseline interview, with an average of 6.66 days (SD = 2.06).

Although the CPC measure is unique in its attempt to measure perceptions of service integration and satisfaction with these services, it is presumed to tap the underlying constructs of satisfaction with services and positive relationships with providers. To help assess the CPC’s construct validity, participants completed two additional self-report measures concerning perceptions of their providers (WAI-SF) and of their services (CSQ-8). The WAI-SF and the CSQ-8 were each administered at a single site in the WCDVS at baseline and at 6 and 12 months as part of the larger survey protocol.

Analyses

Because the CPC was constructed by consensus, an exploratory factor analysis was conducted on its baseline scores to determine the factors composing the measure. The factor analysis used a principal component analysis extraction method with a Varimax (orthogonal) rotation. Initially, all 26 items were included in the factor analysis, and items loading below 0.50 were considered for exclusion. Cronbach’s alphas were then computed to gauge the extent of each factor’s internal consistency, and descriptive statistics were computed to summarize each factor’s mean and standard deviation.
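As a rough illustration of this analytic step, the sketch below implements principal-components extraction, a standard Varimax rotation, and Cronbach’s alpha in plain numpy. It is not the authors’ code, and it assumes a complete-case respondents-by-items array.

```python
import numpy as np

def pca_loadings(data: np.ndarray, n_factors: int):
    """Principal-component loadings and eigenvalues from the item correlation matrix."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]          # largest components first
    return eigvecs[:, order] * np.sqrt(eigvals[order]), eigvals[order]

def varimax(loadings: np.ndarray, gamma: float = 1.0, max_iter: int = 100, tol: float = 1e-6):
    """Orthogonal Varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):                 # converged
            break
        criterion = s.sum()
    return loadings @ rotation

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an n_respondents x n_items array."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```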

As recommended by Hayes44 and Winer45 for analyses involving only two measurement points, test–retest reliability was assessed using the Pearson product moment correlation coefficient as an estimate of the intraclass correlation coefficient (which represents the proportion of variance in the measure attributable to differences between individuals). To test construct validity, the CPC scores at 6 months of follow-up were correlated with the 6-month follow-up scores from the WAI and from the CSQ-8.
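A minimal sketch of this reliability computation, assuming paired scale-score arrays from the retest subsample (the function and variable names are hypothetical):

```python
import numpy as np
from scipy.stats import pearsonr

def test_retest_r(time1: np.ndarray, time2: np.ndarray) -> tuple[float, float]:
    """Pearson r between two administrations of a scale, used here to estimate the ICC."""
    paired = ~np.isnan(time1) & ~np.isnan(time2)   # keep complete pairs only
    r, p = pearsonr(time1[paired], time2[paired])
    return float(r), float(p)
```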

One of the primary purposes of the CPC is to measure the extent to which services are trauma-informed, integrated, and holistic. To test the criterion validity of the CPC scales, 3- and 6-month CPC scores were compared between participants in the intervention and comparison groups using univariate analyses of covariance. Baseline scores were covaried along with race because the groups differed on race. The 3- and 6-month follow-up scores served as the dependent variables in two separate univariate analyses of covariance; these time points were selected because the authors felt they provided adequate opportunity for exposure to the study conditions. All analyses were two-tailed, with statistical significance evaluated at the 0.05 level.
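The following shows one way such an analysis of covariance could be specified with statsmodels; the column names (group, race, and the baseline and follow-up scale scores) are hypothetical placeholders rather than the study’s actual variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def cpc_ancova(df: pd.DataFrame, outcome: str, baseline: str) -> pd.DataFrame:
    """ANCOVA on a follow-up CPC scale score, covarying its baseline score and race."""
    model = smf.ols(f"{outcome} ~ {baseline} + C(race) + C(group)", data=df).fit()
    return anova_lm(model, typ=2)   # F test for the group effect, adjusted for covariates

# e.g., cpc_ancova(data, outcome="services_integration_3mo", baseline="services_integration_bl")
```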

The CPC was developed with the intention that it would be used with diverse populations. This multi-site study included a large racially and ethnically diverse sample, so the psychometric properties of the CPC were next examined within different racial (i.e., Caucasian, African American) and ethnic (i.e., Hispanic, non-Hispanic) groups.

Results

Factor analysis

Items #12 (provider comfortable with self-inflicted violence) and #25 (choice in picking which medication) each had a large number of missing responses (n = 882 and n = 378, respectively), primarily as “not applicable.” For this reason, these items were omitted from the factor analysis. Individual factor loadings for each item are presented in Table 2. Although item #14 loaded on factor 1 above 0.50, it was dropped from the scale and from further analyses because it was conceptually different from the other items. Item #1 was also dropped from further analyses because the analysis indicated that the reliability of the scale could be improved if the item was removed. Although item #9 (“I felt safe and comfortable [...]”) loaded on both factors 1 and 3 above 0.50, it was placed on factor 1 because the authors felt it was conceptually more congruent with that factor.

Table 2 Factor loadings for items on the Consumer Perceptions of Care measure

The factor analysis revealed one primary factor and three secondary ones (Table 3) that collectively accounted for more than 57% of the measure’s variability. The first factor includes 12 items assessing the extent to which services are integrated, trauma-informed, holistic, safe, and client-centered. This factor was labeled Services Integration; it accounted for 40% of the total scale’s variability and had an eigenvalue of 9.24. The second factor includes five items and is clearly about choice in services, including choice of providers, whether to take medications, and feeling free to complain. A third factor containing only two items appears to be related to cultural sensitivity. Finally, there was a fourth factor with three items that appears primarily related to integrated, trauma-informed assessment. This factor accounted for an additional 4% of the measure’s variance above and beyond the first three factors and had an eigenvalue of 0.99. Although this falls just below the traditional eigenvalue cutoff of 1.0, the fourth factor was retained because its items represented a distinct theme not captured by the first three factors.

Table 3 Eigenvalues, percentages of variance, cumulative percentages, and Cronbach’s alphas for factors of the Consumer Perceptions of Care measure

Descriptive statistics

For the entire population completing the CPC at baseline (n = 2,729), the mean total score was 76.66 (SD = 12.40; Table 3). Because each CPC factor comprises a different number of items, each factor’s total score was divided by the number of items on the factor to produce factor item score means and standard deviations. These calculations place each factor score on the same metric, thereby facilitating comparisons between the different factor scores. Each of the factor item score means was similar to the per-item mean of the total score (Table 3).
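A small sketch of this rescaling step (the item list and column names are hypothetical; the actual item-to-factor assignments are those reported in Table 2):

```python
import pandas as pd

def factor_item_mean(scored: pd.DataFrame, item_cols: list[str]) -> pd.Series:
    """Factor total divided by its number of items, i.e., the per-item score on the 1-4 metric."""
    return scored[item_cols].sum(axis=1) / len(item_cols)

# e.g., factor_item_mean(scored_items, ["cpc_2", "cpc_3", "cpc_4"])  # illustrative item list only
```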

Psychometric properties

Internal consistency

Cronbach’s alphas computed for the four factors ranged from 0.67 to 0.92 (Table 3). The 12-item Services Integration factor had an alpha of 0.92, the five-item Choice in Services factor had an alpha of 0.76, the two-item Cultural Sensitivity factor had an internal consistency of 0.67, and the three-item Trauma-Informed Assessment factor had an internal consistency of 0.79. The overall 26-item CPC measure had an internal consistency of 0.93. These Cronbach’s alphas indicate that the CPC measure and its four associated factors exhibit acceptable to high internal consistency.

Test–retest reliability

The test–retest Pearson correlation coefficients (4- to 14-day retest interval) were all statistically significant at the p < 0.0001 level and ranged from 0.48 to 0.75: Services Integration = 0.75, Choice in Services = 0.69, Cultural Sensitivity = 0.48, Trauma-Informed Assessment = 0.56, and total score = 0.75.

Construct validity

Two constructs related to clients’ perceptions of care are satisfaction with the service provider and satisfaction with the services received. To assess the construct validity of the CPC scales, they were correlated with measures of satisfaction with the service provider (WAI) and satisfaction with services in general (CSQ-8). Table 4 displays the correlations of the CPC scales and total score with the total scores from the WAI and the CSQ-8. The CPC total score was significantly correlated with both the WAI total score (r = 0.34, p < 0.01, n = 140) and the CSQ-8 total score (r = 0.56, p < 0.001, n = 121). With regard to the CPC scales, all correlations were in the anticipated direction, supporting the construct validity of the CPC.

Table 4 Six-month follow-up correlations among the Working Alliance Inventory and the Client Satisfaction Questionnaire with the Consumer Perceptions of Care scales

Program measurement

Preliminary work on the WCDVS data37 compared reports of services received by participants in the integrated and comparison interventions and found that those in the integrated conditions were more likely to report that their services dealt with trauma and integrated substance abuse, mental health, and trauma issues than those in the comparison interventions. Therefore, to determine the validity of the CPC as a program measure, the responses of the participants in the trauma-informed integrated interventions were compared to the responses of the participants in the “services as usual” interventions. A series of independent samples t-tests and chi-square tests indicated that the intervention and comparison groups differed on only one demographic variable, race [χ2(3, N = 2,664) = 20.50, p < 0.001]. The groups did not differ with regard to age, ethnicity, education, employment status, residential status, relationship status, number of children, average income, or percent at or below the poverty threshold. A series of univariate analyses of covariance controlling for both baseline scores and race, with the respective 3-month follow-up scores as dependent variables, indicated that the intervention group endorsed significantly more favorable responses than the comparison group on all CPC scales. Means and standard deviations are presented in Table 5. The intervention group scored significantly higher on Services Integration [F(1, 1,814) = 15.68, p < 0.001, η2 = 0.01], Choice in Services [F(1, 1,756) = 7.82, p < 0.01, η2 = 0.01], Cultural Sensitivity [F(1, 1,941) = 12.73, p < 0.001, η2 = 0.01], Trauma-Informed Assessment [F(1, 1,958) = 45.50, p < 0.001, η2 = 0.01], and the total score [F(1, 854) = 8.77, p < 0.01, η2 = 0.01].

Table 5 Consumer Perceptions of Care means over time, by group

The same analyses were performed comparing the groups on 6-month follow-up scores. Consistent with the 3-month analyses, results indicated that the intervention group reported significantly more favorable responses on 6-month follow-up CPC scores compared to the comparison group (Table 5). The intervention group scored significantly higher on Services Integration [F(1, 1,550) = 13.71, p < 0.001, η2 = 0.01], Choice in Services [F(1, 1,500) = 10.51, p < 0.001, η2 = 0.01], Cultural Sensitivity [F(1, 1,669) = 11.11, p < 0.001, η2 = 0.01], Trauma-Informed Assessment [F(1, 1,672) = 12.82, p < 0.001, η2 = 0.01], and the total score [F(1, 739) = 4.07, p < 0.05, η2 = 0.01].

Analyses by race

Separate factor analyses of CPC data using the same procedures outlined above were performed on data from Caucasians (n = 1297) and African Americans (n = 726). In general, all CPC factors except for cultural sensitivity maintained their structure in both the Caucasian and African-American samples, accounting for a similar amount of variance in each sample. While the two-item Cultural Sensitivity factor held up well in the Caucasian sample, the two items failed to load on an independent factor in the African-American sample.

Results from the separate factor analyses of the Caucasian and African-American data revealed one primary factor and three secondary ones that collectively accounted for more than 58% of the measure’s variability in the Caucasian sample and more than 56% of the variability in the African-American sample. The first factor represented the original Services Integration factor for both the Caucasian and African-American samples. All 12 items originally assigned to that factor loaded on it above 0.52 among Caucasians, and 11 of the 12 items loaded for African Americans. This factor accounted for more than 40% of the total scale’s variability for Caucasians and 38% for African Americans. The second factor represented the Trauma-Informed Assessment factor, with all three items loading on this factor for both Caucasians and African Americans. This factor accounted for nearly 5% of the measure’s variance among Caucasians and 4% of the variance among African Americans. The third factor represented the Choice in Services factor, with four of the five original items loading on that scale for Caucasians and three of the five items loading among African Americans. This factor accounted for more than 4% of the measure’s variance for Caucasians and 7% of the variance among African Americans. The fourth factor represented the Cultural Sensitivity factor among Caucasians, with both items originally assigned to that factor loading on it, accounting for more than 7% of the measure’s variance. This factor did not replicate in the African-American sample, with neither of the two items loading on it above 0.40. The fourth factor in the African-American sample instead comprised items #21 and #22: choice in services and choice in service providers.

Mean item scores for the four CPC factors are reported separately for African Americans and Caucasians in Table 6. Independent samples t-tests indicated that Caucasians scored significantly lower than African Americans on the Choice in Services factor (t = 2.66, p < 0.01, d = 0.12), although they scored significantly higher on Cultural Sensitivity (t = 3.35, p < 0.001, d = 0.16). African Americans and Caucasians did not differ in their average CPC total score or on the Services Integration and Trauma-Informed Assessment factors. Cronbach’s alphas were also computed separately for African Americans and Caucasians to gauge the internal consistency of the CPC and its factors in each racial group. As indicated in Table 6, the internal consistencies were similarly high across the two racial groups. All test–retest Pearson correlation coefficients computed separately for these two racial groups were statistically significant at the p < 0.001 level, ranging from 0.54 to 0.70 for Caucasians and from 0.47 to 0.82 for African Americans.

Table 6 Consumer Perceptions of Care Cronbach’s alphas, test–retest reliabilities, scale means and standard deviations by race

Analyses by ethnicity

Separate factor analyses of CPC data using the same procedures outlined above were next performed on data from persons declaring their ethnicity to be Hispanic/Latino (n = 493) and non-Hispanic/non-Latino (n = 2230). All four CPC factors maintained their same structure in both ethnic samples, with the same items loading on each of the scales in each ethnic sample. In addition, the scales accounted for a similar proportion of variance in each sample.

Results from the factor analysis by ethnicity revealed one primary factor and three secondary ones that collectively accounted for more than 59% of the measure’s variability (Table 7) among Hispanics and 57% of the variability among non-Hispanics. The first factor represented the original Services Integration factor. All 12 items originally assigned to that factor loaded on it among both ethnic groups, accounting for more than 41% of the total scale’s variability among Hispanics and 40% of the variability among non-Hispanics. The second factor represented the Choice in Services factor, with all five original items loading on that scale among both ethnic groups. This factor accounted for more than 7% of the measure’s variance among Hispanics and 5% of the variance among non-Hispanics. The third factor represented the Cultural Sensitivity factor, with both items originally assigned to that factor loading on it among both ethnic groups. This scale accounted for approximately 5% of the measure’s variance among Hispanics and 7% of the variance among non-Hispanics. The fourth factor represented the Trauma-Informed Assessment factor. All three items loaded on this factor among both ethnic groups, accounting for approximately 5% of the measure’s variance among Hispanics and for 4% of the variance among those not of Hispanic ethnicity.

Table 7 Consumer Perceptions of Care Cronbach’s alphas, test–retest reliabilities, scale means, and standard deviations by ethnicity

Table 7 contains mean item scores for the four CPC factors and the total score, reported separately for Hispanics/Latinos and non-Hispanics/non-Latinos. An independent samples t-test indicated that Hispanics/Latinos scored significantly lower than non-Hispanics/non-Latinos on the Cultural Sensitivity factor (t = 2.94, p < 0.01, d = 0.14), although the groups did not statistically differ in their average item score on any other CPC factor or on the total score.

Cronbach’s alphas were also computed separately for each ethnic group to gauge the internal consistency of the CPC and its factors in each of these groups. As indicated in Table 7, the internal consistencies were similarly high across the two ethnic groups, indicating that the CPC measure and its four associated factors exhibit a high degree of internal consistency in both Hispanics/Latinos and non-Hispanics/non-Latinos. All test–retest Pearson correlation coefficients were statistically significant at the p < 0.001 level for the non-Hispanic/non-Latino group, ranging from 0.55 to 0.78. In the Hispanic/Latino group, the test–retest reliability of the Cultural Sensitivity factor did not reach statistical significance, although all other test–retest reliabilities ranged from 0.66 to 0.81 and were significant at either the p < 0.01 or the p < 0.001 level.

Limitations

Each item in the CPC was worded in a positive direction, which may result in a positive skew, a common problem for measures of client satisfaction with psychiatric services.46 Further, there is no control for social desirability in the measure or in the study protocol in general, and both consumer satisfaction and measures of psychological distress have been shown to be correlated with social desirability,47 although a recent study suggests this may have only a trivial effect.17 Over a third of the women in the WCDVS responded that they were required or court-ordered to participate in substance abuse or mental health services,48 which may have influenced their responses.

Because of the exploratory nature of this study, the large number of analyses, and large sample size, the findings are susceptible to spurious correlations and type I errors. Finally, the nature of the criteria for eligibility for the study and the contingencies of recruitment for the study mean that these results are specific to the sample of women in this study for whom the instrument was developed.

Discussion

The factor analysis suggests that the measure captures the key elements intended, including perceived integration of substance abuse and mental health with trauma-sensitive services; choice in services; being asked about the abuse in women’s lives; and, to a lesser extent, sensitivity to the cultural and ethnic identities of the women. The test–retest reliability coefficients ranged from moderate to strong (0.48–0.75).

The significant correlations with two established measures of client satisfaction suggest that the measure also taps the service satisfaction domain. The CPC Services Integration scale showed similarly strong correlations with both the WAI and the CSQ-8. The relationship with the provider that is assessed in both the WAI and the CPC is very salient: when women were asked what they felt helped in their recovery, they often mentioned the efforts of their service providers.

I feel very safe and comfortable talking about anything to her. She also takes extra time to research other possible services that may be beneficial to me. Also, the other counselors in groups or just passing are very friendly, and supportive. I also find the groups to be very helpful. Also G. is very competent regarding substance abuse and recovery. I feel that both he and [my counselor] are working together to provide me with the best program (also medication level) that is best suited for me. [Staff member] takes the time to explain things to me regarding my addiction and recovery so that I can understand and also be an active player in making decisions about my program. (WCDVS study participant)

The factor analyses within racial and ethnic groups suggest that the measure holds up well across diverse groups. This is an important consideration in any behavioral healthcare measure, as programs seek to respond to multicultural populations.36 The exception is the two-item subscale reflecting sensitivity to cultural diversity, which did not replicate among African Americans; the two items that appeared in the fourth factor for African Americans instead concerned choice in services, suggesting that “actions may speak louder than words” in terms of perceived respect for this group.

It appears that the CPC served its primary purpose in this study as a program measure reflecting the perception of women in the integrated condition that their services were more trauma-informed and integrated. These women also perceived that they were more often asked about the abuse in their lives. The construct validity of the CPC–Choice In Services scale was supported by an earlier study,48 which found that women currently mandated to be in treatment reported less choice in services than those in treatment voluntarily.

It is strongly recommended that the CPC measure be used in its entirety. Although two items were not included in the scales or the factor analysis, these items address concerns that are critical to the experiences of women with co-occurring disorders and histories of abuse within the behavioral healthcare services system. The CPC offers rich opportunities for further research. For example, to enhance its usefulness as a program or process measure, it would be interesting to analyze changes in the CPC over time and their relationship to the services used. Further psychometric work on the measure should be done with samples beyond this original sample, for whom the measure was developed. The literature suggests that measures such as the CPC are not directly predictive of outcomes such as reduction of symptoms; however, this study suggests a close relationship between this measure and a measure of the working alliance, which can be an important mediator in the relationship between services and outcomes for the people served.19

Implications for Behavioral Health

As those in the behavioral health field accelerate efforts to close the gap between research and practice, tools such as the CPC are crucial in enabling informed consumers to check on implementation and provide feedback on quality of care to researchers and providers. This is especially important as the field incorporates and evaluates the issues of trauma and co-occurring disorders. The Substance Abuse and Mental Health Services Administration’s Center for Mental Health Services has partnered with the National Association of State Mental Health Program Directors and others to form the National Center for Trauma-Informed Care, reflecting the national trend in federal, state, and local behavioral healthcare systems toward addressing trauma issues. This follows a decade of encouraging services to address co-occurring substance abuse and mental health issues. An instrument such as the CPC can serve as an important assessment of integrated services from the consumer perspective.

The CPC measure was developed by the steering committee of the WCDVS with significant input from women who have been recipients of the services evaluated. The results from this large-scale study suggest that the measure is a valid and reliable index of clients’ satisfaction with innovative programs and that it captures unique aspects of these interventions. As integrated, trauma-informed care is more widely implemented, it is important that our measurement capabilities and our ability to capture the perceptions of those receiving services keep pace. People who receive behavioral healthcare services are racially and ethnically diverse, often have histories of violence and abuse, and are often troubled by substance use disorders. This study suggests that their perceptions of integrated care can be accurately and reliably assessed.