Studies of change in colleges and universities often consider faculty responses to change or innovation a key influence on the success, or lack of success, of an educational change effort. Faculty buy-in, scholars suggest, is required if change efforts are to succeed. Closely held professional values such as autonomy in teaching and research and collegial and collaborative decision-making processes must be respected and accommodated if faculty resistance to innovations is to be overcome (see, for example, Cameron and Tschirhart 1992; Eckel et al. 1999; Kezar 2001; Lattuca and Stark 2009).

Despite this emphasis on the role of faculty, and recent interest in the cultural and sociocognitive dimensions of change in higher education settings (e.g., Kezar 2001; Bensimon 2005), scholars have given relatively little systematic attention to the influence of academic disciplines and fields—as manifestations of faculty culture, values, and habits of mind—on educational innovation and change. Rather, studies of organizational change in colleges and universities tend to treat faculty as a monolithic category, assuming that faculty across the disciplines are connected by a shared set of professional values such as autonomy and collegiality.

In their comprehensive overview of the literature on academic disciplines in higher education, however, Smart et al. (2000) found a conceptual consensus that a faculty member’s academic discipline or field is the single greatest influence shaping her professional behaviors and attitudes. Research consistently reveals disciplinary patterns in faculty interests, attitudes, and activities related to both research and teaching. Variations in teaching orientations might be expected to influence faculty members’ acceptance of efforts to improve undergraduate education through curricular or instructional innovations. For example, faculty groups might vary in their receptiveness to a particular educational reform based on differences in their beliefs about the purposes of education (Stark et al. 1990), the importance they assign to different educational goals such as the application and integration of knowledge (Smart and Elton 1975; Smart and Ethington 1995), preferred teaching practices (Einarson 2001), and the kinds of knowledge validation strategies employed in their fields and thus perceived to be essential for students to learn (Donald 2002). (For extensive reviews, see Braxton and Hargens 1996; Smart et al. 2000.)

Smart et al. (2000) suggest, however, that finer-grained categories are needed to capture differences in faculty attitudes and behaviors within disciplinary groupings. As the number of disciplinary specialties grows, conventional categories such as “biology,” “sociology,” or “engineering” become increasingly comprehensive and can mask considerable variation in attitudes and behaviors among the scholars they contain. Relying on John Holland’s theory of occupations and environments (1966, 1973, 1985, 1997), Smart et al. suggest that faculty within a given specialty may display distinctive professional attitudes and behaviors. An examination of the Holland classifications for different groups of engineering faculty, for example, reveals that electrical and mechanical engineering are classified as Realistic occupations, while civil and chemical engineering fall into his Investigative category. These classifications are hypothesized to be associated with different patterns of preferences and characteristics.

Holland’s Theory of Occupations

According to Holland (1966, 1973, 1985, 1997), most people can be classified into one of six theoretical or ideal personality types (Realistic, Investigative, Artistic, Social, Enterprising, Conventional) that influence their choice of an occupation. Individuals are presumed to choose occupations consistent with their motivations, knowledge, personality, and abilities, and once in an occupation, they are supported and rewarded for displaying the attitudes and behaviors characteristic of it. Thus, Holland proposes six environments that correspond to his six personality types. An environment is “the situation or atmosphere created by the people who dominate in a given environment” (p. 41). The way individuals respond is thus partly a function of their situation and partly a function of their behavioral repertoires—the distinctive pattern of interests, competencies, and preferred activities associated with their personality type.

Smart et al. (2000) note that “abundant evidence exists that faculty in academic departments, classified according to the six academic environments proposed by Holland, differ in ways theoretically consistent with the postulates of Holland’s theory” (p. 83). Their analyses of national data from a study of the academic profession produced strong support for the assumption that academic environments are socializing mechanisms, reinforcing and rewarding different patterns of abilities, interests, and values while simultaneously discouraging others. They suggest, accordingly, that Holland’s theory of occupations and environments is a fruitful theoretical foundation for studying organizational dynamics and faculty behaviors.

The existence of multiple personality types and environments within larger disciplinary classifications adds a new layer of complexity to the study of organizational and curricular change in higher education. Holland’s theory suggests that faculty members’ responses to proposed or actual changes within a single organizational unit, such as a school of engineering, will vary systematically by academic specialization because these specialization areas are distinctive environments dominated by particular personality types.

A national study of the impact of a new set of outcomes-based accreditation criteria in engineering programs offers an opportunity to test the usefulness of Holland’s theory for understanding program-level variations in faculty responses to that educational change effort. The new engineering standards specify 11 student learning outcomes and mandate assessment of those outcomes in a process of continuous improvement. The standards are thus expected to stimulate significant restructuring of curricula, instructional practices, and assessment activities in engineering (ABET 1997). In this study, we examine engineering faculty members’ responses to this innovation, asking whether the curricular, pedagogical, and professional development choices they made after the new accreditation criteria were implemented reveal patterns consistent with their respective Holland types. The following research question guides our study:

Do engineering faculty members from different academic environments (defined by Holland’s typology) vary in their responses to changing curricular and pedagogical requirements?

The answer to this question will contribute to the literature on organizational and curricular change in several ways. First, it will provide evidence from a nationally representative sample of engineering faculty regarding the influence of academic specialization on responses to an educational reform effort in progress. We note here that ignoring the new accreditation criteria is not an option for the vast majority of engineering schools and programs. More than 95% of all engineering undergraduate programs in the United States are ABET-accredited, and graduation from an accredited program is a condition for professional licensure.

The study asks, too, whether forces more fundamental than academy-wide values (e.g., autonomy and collaborative decision-making) may be at work in shaping how faculty members respond to pressures to change. Might faculty members’ receptivity to change be driven also by fundamental personal and professional values and beliefs? Finally, few multi-site studies of curricular change are available, and even fewer explore the simultaneous response of faculty to the same change effort. Findings from this study may shed light on the role of external catalysts, such as a significant change in the job market for a profession or a radical shift to an outcomes-based accreditation model, in curricular change. Hypotheses about the impact of such external influences on curricula (Slaughter 2002; Lattuca and Stark 2009) are largely untested. This study provides the opportunity to explore the question of whether academic environments mediate external pressures for change.

Methods

The study rests on a conceptual framework hypothesizing that engineering schools’ responses to the change in accreditation criteria (known as “EC2000”) will manifest themselves in modifications in engineering programs, faculty culture, and administrative policies and practices. These program-level changes will, in turn, affect the nature of the student learning experience in- and out-of-class, and potentially, student learning outcomes as well. Figure 1 portrays these hypothesized relationships and depicts the conceptual framework for the Engineering Change study (Lattuca et al. 2006), which provided the data for the present analysis. The findings of that study revealed substantial changes in engineering programs, in student learning experiences, and in student outcomes after the implementation of the new EC2000 criteria. This study focuses on a subset of program changes: the curricular and instructional changes made by faculty in response to EC2000.

Fig. 1 Conceptual framework for the Engineering Change study

Design, Population, and Sample

This national study examined the impact of a new set of outcomes-based accreditation criteria for engineering programs. The Engineering Change study (Lattuca et al. 2006) collected data from several sources, including graduating seniors, alumni, faculty members, program chairs, deans, and employers. The faculty portion of that larger study provides an opportunity to test the usefulness of Holland’s theory for understanding program-level variations in faculty responses to that change effort. What follows is therefore specific to the faculty portion of the study, which used a cross-sectional, mail- and web-based survey design.

The target population for the study (sampling involved a two-stage design; see below) comprised the 1,241 engineering programs in seven targeted disciplines (aerospace, civil, chemical, computer, electrical, industrial, and mechanical) that were ABET-accredited in 2002; those disciplines produce 83% of the undergraduate engineering degrees awarded each year. Among the institutions offering those programs, 244 offered at least two of the seven targeted undergraduate engineering programs with ABET accreditation dating from 1990 or earlier. Those institutions offered 1,024 programs meeting the sampling criteria.

The study’s sampling design was a two-stage, 7 × 3 × 2, disproportional, stratified random sample. In the first stage, 40 institutions were randomly selected from the 244 institutions meeting the inclusion criteria. Selection was stratified on three dimensions: (1) the targeted engineering disciplines offered (two or more of the seven), (2) the institution’s choice among three options in the phased-in EC2000 review cycle (early adoption of the EC2000 accreditation review criteria, on-time adoption, or deferred adoption), and (3) whether the institution had participated in a National Science Foundation Engineering Education Coalition (which promoted curricular and instructional change). Adjustments to the design (e.g., oversampling smaller disciplines and including HBCUs and HSIs) helped ensure a nationally representative sample of programs.
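
To make the two-stage logic concrete, the sketch below draws a stratified random sample of institutions with pandas. The frame, strata labels, and per-stratum sample sizes are hypothetical stand-ins for illustration, not the study’s actual sampling frame or allocation.

```python
import pandas as pd

# Hypothetical frame standing in for the 244 eligible institutions, with
# the two institution-level stratification variables described above.
frame = pd.DataFrame({
    "institution_id": range(244),
    "ec2000_timing": (["early", "on_time", "deferred"] * 82)[:244],
    "nsf_coalition": [i % 2 == 0 for i in range(244)],
})

# Stage 1: sample institutions within each timing-by-coalition stratum.
# A disproportional design would vary n by stratum; a fixed n=4 is used
# here only for simplicity.
stage1 = (
    frame.groupby(["ec2000_timing", "nsf_coalition"], group_keys=False)
         .apply(lambda g: g.sample(n=min(4, len(g)), random_state=1))
)
print(len(stage1), "institutions selected")
```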

The final sample included 203 engineering programs at the 40 institutions. The participating schools included 23 public and 17 independently controlled institutions. Thirty award doctoral degrees, six are master’s degree institutions, and four are baccalaureate or specialized institutions. Fourteen had participated in an NSF Engineering Education Coalition during the 1990s.

Data Collection Procedures

Project staff solicited and received endorsements for the study from the six professional societies that represent engineers in the targeted disciplines (the American Institute of Aeronautics and Astronautics, the American Institute of Chemical Engineers, the American Society of Civil Engineers, the American Society of Mechanical Engineers, the Institute of Electrical and Electronics Engineers, and the Institute of Industrial Engineers), and from the American Society for Engineering Education. Staff contacted the deans of the schools/colleges/departments of engineering selected to participate to explain the study’s goals and procedures and to invite their school’s assistance. Participating institutions received a modest stipend to defray the costs of assembling the names and contact information for their chairs, faculty members, alumni, and seniors. Participating institutions also received their institutional data (stripped of identifying information) for local assessment and planning purposes.

An outside contractor undertook administration, data collection, and data management for the survey. Some research suggests that response rates vary between web- and paper-based data collection (Carini et al. 2003; Cobanoglu et al. 2001; Mehta and Sivadas 1995; Porter 2004; Schaefer and Dillman 1998; Shannon and Bradshaw 2002; Weible and Wallace 1998). Consequently, individuals had the option of choosing their preferred method of response in order to maximize our response rate.

Survey distribution followed Dillman’s (2000) guidelines. The survey packets, sent personally addressed and by first-class mail, included a cover letter from the project director, a statement advising recipients of their rights as human subjects, an optically readable survey instrument, and a postage-paid return envelope. The letter of invitation and reminders explained the purposes of the study, advised recipients of the study’s endorsements by the major professional engineering societies, and contained an e-mail address and phone number where answers to questions could be obtained. Each mailing also provided a web address where respondents could access the web-based version of the survey. All study procedures and materials were approved by the home institution’s Office of Research Protections.

All full-time faculty members in the targeted programs in the 40-institution sample were invited to participate. Of the 3,303 faculty members who constituted the target sample for this study, 1,272 (42%) responded. Faculty respondents with more than 20% missing data were excluded from all analyses; for all other cases, missing data were imputed (Allison 2001). Where chi-square goodness-of-fit tests indicated some response bias, weights were developed to adjust for any unrepresentativeness. Weighting adjustments for minor response bias were made for the characteristics of respondents’ institutions (size, type of control, wealth, and participation in an NSF coalition). No adjustments were made for respondents’ personal characteristics on the assumption (and some supporting evidence) that those individual characteristics are among the factors that shape a discipline’s environment, values, practices, and policies (see Schuster and Finkelstein 2006, for evidence of variations in teaching practices by gender and ethnicity). Adjustments were also made for variations in faculty response rates across the 39 institutions that provided faculty contact lists (one institution did not).
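
As a minimal sketch of the weighting logic described above (with hypothetical counts standing in for the study’s data), a chi-square goodness-of-fit test can flag response bias, and post-stratification weights can then be formed as the ratio of population to respondent shares:

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical counts by institutional stratum (stand-ins, not study data).
population_counts = np.array([600, 300, 100])  # composition of the population
respondent_counts = np.array([260, 100, 40])   # composition of the respondents

# Goodness-of-fit test: do respondents mirror the population composition?
expected = population_counts / population_counts.sum() * respondent_counts.sum()
stat, p = chisquare(f_obs=respondent_counts, f_exp=expected)

# If the test signals bias, weight each respondent by the ratio of the
# population share to the respondent share of their stratum.
weights = (population_counts / population_counts.sum()) / (
    respondent_counts / respondent_counts.sum())
print(f"p = {p:.3f}, weights = {np.round(weights, 2)}")
```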

Instrument and Variables

Using procedures described elsewhere (Lattuca et al. 2006), staff developed the faculty survey. The questionnaire begins with a brief section that collects background characteristics, such as years of teaching experience, institutional tenure, and engineering discipline, along with demographic data (for example, respondents’ gender, race/ethnicity, and educational background).

Part I of the instrument focuses on faculty members’ teaching and asks respondents to think about a particular undergraduate course they teach more or less regularly. After providing information on the characteristics of the focal course (such as its level, enrollment, and whether the course is an engineering design course, required or elective/optional, or a capstone course), faculty members were asked, keeping that course in mind, to report any changes they had made since first teaching it in the emphasis they give to 14 curricular topics associated with the EC2000 learning outcomes. Respondents used a five-point scale, where 1 = “significant decrease,” 2 = “some decrease,” 3 = “no change,” 4 = “some increase,” and 5 = “significant increase”; a “not applicable” option was also available. Faculty members then indicated the extent to which each of a dozen possible factors influenced any changes in course emphasis they may have made. Respondents also reported on any changes in their use of ten teaching methods (such as groups, design projects, open-ended problems, case studies, lectures, and text-based problems).

Part II of the survey requested information on faculty members’ current professional development activities and any changes in those activities over the past 5 years, perceptions of changes in the faculty reward system over the past decade, and their perceptions of current curriculum planning practices in their program. Finally, faculty members indicated their level of support for and participation in assessment activities, their knowledge of EC2000, and whether EC2000 had any influence on their knowledge of their program’s strengths and weaknesses.

Based on Holland’s typology and recommended environmental classifications (Smart et al. 2000), electrical and mechanical engineering faculty were categorized as working in Realistic environments; aerospace, chemical, and civil engineering faculty as Investigative; and industrial engineering faculty as Enterprising. Computer engineering was excluded because its proper classification was ambiguous. These three environmental clusters constituted the groups (the dependent variable) whose similarities and differences were assessed using the statistical procedures described below.
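
The discipline-to-environment coding is simple to express directly. The sketch below follows the classifications stated above; the column name and the sample data values are hypothetical.

```python
import pandas as pd

# Holland environment assignments used in this study; computer
# engineering is dropped because its classification was ambiguous.
HOLLAND_TYPE = {
    "electrical": "Realistic",
    "mechanical": "Realistic",
    "aerospace": "Investigative",
    "chemical": "Investigative",
    "civil": "Investigative",
    "industrial": "Enterprising",
}

faculty = pd.DataFrame({"discipline": ["civil", "industrial", "computer"]})
faculty["holland"] = faculty["discipline"].map(HOLLAND_TYPE)
faculty = faculty.dropna(subset=["holland"])  # excludes computer engineering
print(faculty)
```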

The analytical variables in this study fell into two groups—control (covariates) and independent variables. To remove potentially confounding effects related to the characteristics of the institutions that were home to the engineering disciplines and faculty members under study, the analyses controlled for institutional size, type of control, wealth, Carnegie classification, and participation in an NSF coalition. The independent variables included seven factorially derived (principal components analysis) scales related to curricular focus and instructional methods, and a single-item measure of faculty engagement in professional development activities to enhance content knowledge. The internal consistency (alpha) reliabilities for the scales range from .49 to .85; five of the seven exceed .72. The scales reflect faculty reports of change in their course emphases on basic math and science; experimental skills; the social, economic, and environmental implications of engineering designs; standards and ethics; and the use of instructional approaches such as presentations and group work. Table 1 provides more complete descriptive, operational, and psychometric information on each of these variables.
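
For reference, a minimal implementation of the internal consistency (Cronbach’s alpha) statistic reported here; the item responses below are random stand-ins, not study data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    for an (n_respondents x k_items) array."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Stand-in responses: 200 faculty x 4 items on the 1-5 change scale.
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(200, 4)).astype(float)
print(round(cronbach_alpha(demo), 2))
```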

Table 1 Study variables

Analytical Procedures

Multiple-group discriminant function analysis (MDA) was used to assess differences among the three Holland environment groups in the changes faculty members reported, using the eight measures of curricular focus, instructional techniques, and professional development. As described above, the groups were Realistic (electrical and mechanical engineering faculty), Investigative (aerospace, chemical, and civil engineering faculty), and Enterprising (industrial engineering faculty); computer engineering faculty were excluded because their proper classification was ambiguous.

A sequential discriminant function analysis was used to assess the contribution of the independent variables to explaining group separation over and above that of the control variables. In the first run, the control variables (size, type of control, wealth, Carnegie classification, and participation in an NSF coalition) were entered alone; the eight independent variables were then added to the model. The canonical correlation (similar to the multiple R in regression) indexes the amount of variance a criterion measure shares with its predictor variables and thus the explanatory power of the model. The change in the canonical R indicates the unique explanatory power contributed by the second set of variables.
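
A sketch of this sequential logic follows, assuming hypothetical data: canonical correlations are recovered from the eigenvalues of W^{-1}B (the pooled within-group and between-group scatter matrices), first for the controls alone and then for the full variable set; the gain in canonical R is the unique contribution of the added block.

```python
import numpy as np
from scipy.linalg import eig

def canonical_correlations(X, y):
    """Canonical correlations of a multiple-group discriminant analysis,
    recovered from the eigenvalues of W^{-1}B."""
    classes = np.unique(y)
    p = X.shape[1]
    grand_mean = X.mean(axis=0)
    W = np.zeros((p, p))   # pooled within-group scatter
    B = np.zeros((p, p))   # between-group scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        W += (Xc - mc).T @ (Xc - mc)
        d = (mc - grand_mean).reshape(-1, 1)
        B += len(Xc) * (d @ d.T)
    lam = np.real(eig(np.linalg.solve(W, B))[0])
    lam = np.sort(np.clip(lam, 0, None))[::-1][:len(classes) - 1]
    return np.sqrt(lam / (1 + lam))   # r_i = sqrt(lambda_i / (1 + lambda_i))

# Hypothetical data: 300 faculty in three Holland groups (0, 1, 2).
rng = np.random.default_rng(1)
y = rng.integers(0, 3, 300)
X_ctrl = rng.normal(size=(300, 5))                    # controls only
X_add = rng.normal(size=(300, 8)) + 0.3 * y[:, None]  # eight change measures
X_full = np.hstack([X_ctrl, X_add])

# Sequential entry: the gain from controls-only to the full model is the
# unique contribution of the added block.
print("controls only:", canonical_correlations(X_ctrl, y).round(2))
print("full model:   ", canonical_correlations(X_full, y).round(2))
```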

Interpretation of findings relied on the standardized canonical discriminant function coefficients (analogous to beta weights in a multiple regression), the pooled within-group correlations between the independent variables and each standardized discriminant function, and the differentiation among group centroids in two-dimensional space (Tabachnick and Fidell 2001). A classification analysis (with equal prior probabilities for all three groups) was used to assess the extent to which the two functions could successfully assign faculty members to their known Holland group.
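
A classification analysis of this kind can be sketched with scikit-learn’s LinearDiscriminantAnalysis, setting equal priors for the three groups; the data below are hypothetical stand-ins for the eight change measures.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical stand-ins: 300 faculty, three groups, eight measures.
rng = np.random.default_rng(1)
y = rng.integers(0, 3, 300)
X = rng.normal(size=(300, 8)) + 0.3 * y[:, None]

# Classification analysis with equal prior probabilities for the three
# Holland groups, scored against known group membership.
lda = LinearDiscriminantAnalysis(priors=[1 / 3, 1 / 3, 1 / 3]).fit(X, y)
hit_rate = (lda.predict(X) == y).mean()
print(f"{hit_rate:.1%} correctly classified (chance = 33.3%)")
```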

Finally, to assess the validity of the discriminant functions, a double cross-validation analysis was performed by randomly assigning cases to two independent groups: (a) a calibration sample and (b) a validation sample. In the first cross-validation, the calibration sample is used to develop the discriminant function coefficients, which are then applied to members of the validation sample in a classification analysis that tests the ability of the discriminant functions to correctly classify cases into their known Holland types. The double cross-validation then reverses the roles of the two samples: the first-round validation sample (Group 2) is used to generate the function coefficients, and the first-round calibration sample (Group 1) serves as the validation sample in the classification analysis.
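
The double cross-validation reduces to a two-iteration loop: calibrate on one half, classify the other, then swap. A minimal sketch with stand-in data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical stand-in data: 400 faculty, three groups, eight measures.
rng = np.random.default_rng(2)
y = rng.integers(0, 3, 400)
X = rng.normal(size=(400, 8)) + 0.3 * y[:, None]

# Random split into two independent halves (Group 1 and Group 2).
idx = rng.permutation(len(y))
g1, g2 = idx[:200], idx[200:]

# Double cross-validation: calibrate on one half, classify the other,
# then swap the roles of the two samples.
for cal, val, label in [(g1, g2, "calibrate G1 -> validate G2"),
                        (g2, g1, "calibrate G2 -> validate G1")]:
    lda = LinearDiscriminantAnalysis(priors=[1 / 3] * 3).fit(X[cal], y[cal])
    acc = (lda.predict(X[val]) == y[val]).mean()
    print(f"{label}: {acc:.1%} correct")
```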

All findings were evaluated in light of the hypothesis that faculty changes in curricula, instruction, and professional development activities would be consistent with expectations based on Holland’s theory.

Findings

The full discriminant model produced two statistically significant discriminant functions with canonical correlations of .27 and .20, respectively. The first function accounts for 64% of the total power of the model for separating the Holland types; the second accounts for the remaining 36%. Each function is statistically significant at p < .001.
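
These percentages follow from the eigenvalues implied by the canonical correlations. Using the rounded values reported above (so the result matches only approximately):

```latex
% Eigenvalue implied by canonical correlation r_i, and the share of
% discriminating power attributable to function i:
\lambda_i = \frac{r_i^2}{1 - r_i^2}, \qquad
\text{share}_i = \frac{\lambda_i}{\sum_j \lambda_j}
% With r_1 = .27 and r_2 = .20:
\lambda_1 \approx 0.079, \quad \lambda_2 \approx 0.042, \quad
\text{share}_1 \approx \frac{0.079}{0.079 + 0.042} \approx 0.65
```

The small difference from the reported 64% reflects rounding of the published correlations.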

Comparing the canonical correlation coefficients of a model containing only the control variables with those of a model containing both control and discriminant variables, and contrasting the percentages of cases correctly classified into their known Holland groups, indicates (with one exception) that adding the discriminant variables substantially increases the power to predict Holland type membership. The exception is the Investigative faculty group (Table 2). These findings suggest that faculty members in different engineering subdisciplines differ in meaningful ways in the changes they report making in their curriculum, pedagogy, and professional development activities.

Table 2 Comparison of the control-only and the full discriminant function models

Table 3 gives the group means, standard deviations, and the MANOVA F-ratio for each of the discriminant variables for the three groups. At least two of the three groups differed significantly on three of the eight change variables. The statistical significance of the Wilks’ lambdas associated with the two discriminant functions (.890 and .959, p < .001 for both) supports the hypothesis that differences exist among the three Holland types.
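
As a consistency check, the reported lambdas can be reconstructed from the same eigenvalues, since Wilks’ lambda for functions k through m is the reciprocal product of (1 + λ):

```latex
\Lambda_k = \prod_{i=k}^{m} \frac{1}{1 + \lambda_i}, \qquad
\Lambda_1 = \frac{1}{(1 + 0.079)(1 + 0.042)} \approx 0.890, \quad
\Lambda_2 = \frac{1}{1 + 0.042} \approx 0.960
```

These values match the reported .890 and .959 up to rounding.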

Table 3 Univariate statistics for the Holland groups on the curriculum, pedagogy, and professional development scales

The standardized canonical discriminant function coefficients in Table 4 indicate the relative importance of the variables in predicting Holland group membership. Table 4 also includes the structure coefficients, which give the correlation between each variable and each discriminant function. These values may be interpreted like factor loadings in a factor analysis and help define the nature of the underlying construct that separates the groups on each function.

Table 4 Standardized canonical discriminant function coefficients and pooled within-group correlations between discriminating variables and standardized canonical discriminant functions

The first function, labeled “Engineering in Context,” taps increases in the emphasis faculty members reported giving to professional ethics, professional responsibility, and understandings of the societal implications of engineering solutions; increases in the use of active and collaborative pedagogies (such as design projects, group work, and hands-on experiences); and reductions in the use of traditional instructional approaches (lectures and textbook problems). The second function, labeled “Organizational Features,” is defined largely by features of the institutions where faculty members teach and suggests the role that institutional characteristics play in distinguishing faculty members’ Holland type.

Table 5 contains the centroids (means in multidimensional space) for each group on each discriminant function. Departments with Enterprising and Investigative environments scored significantly higher on the first function (Engineering in Context) than Realistic departments: Enterprising environments scored somewhat higher than Investigative environments, which, in turn, scored higher than Realistic environments. On the second function (Organizational Features), Enterprising environments scored substantially higher than Realistic environments, which scored slightly higher than Investigative environments. As Fig. 2 shows, each environmental cluster occupies a distinct quadrant when the two discriminant functions are juxtaposed orthogonally, and the locations of the group centroids are consistent with expectations based on Holland’s theory.

Table 5 Group centroids on two discriminant functions
Fig. 2 Centroids of the three Holland groups on the two discriminant functions

Table 6 reports the Holland group sizes within each sample when the total sample was randomly split into two halves for the double cross-validation procedures. With Group 1 as the calibration sample, the all-variables model identified two statistically significant discriminant functions with canonical correlations of .27 and .20, respectively. With Group 2 as the calibration sample, the discriminant analysis again produced two statistically significant functions, with canonical correlations of .31 and .25, both slightly higher than those produced with Group 1.

Table 6 Group sizes for cross-validation

Tables 7 and 8 indicate that the labels given to the discriminant functions are apt: the first function correlates primarily with the faculty predictor variables and the second with the institutional variables. The relative contribution to Engineering in Context (Function 1) of faculty members’ reported changes in the emphasis they give to the implications of engineering solutions for society, professional ethics, and professional responsibility, as well as reported changes in the use of active and collaborative pedagogies, remained consistent across the two calibration samples. The traditional pedagogies scale, which measures changes in the emphasis on lectures and textbook problems, was less consistent.

Table 7 Standardized discriminant function coefficients in cross-validation analyses
Table 8 Structure coefficients in cross-validation analyses

Table 9 compares the results of the cross-validations of the discriminant functions for the Calibration Sample (top half of the table) and the Validation Sample (bottom half). The results indicate that, for both groups, the two functions raise the classification accuracy, in most cases, well above the 33% one might expect by chance alone. In addition, the modest drops in the percentages of correct classifications when classifying Validation Sample members indicate considerable stability in the functions and their validity for classifying faculty members into their known Holland types. With one exception, the drop in the classification percentages in the first cross-validation is less than 10%, providing evidence that the discriminant functions are valid for classifying faculty members into Holland types. The contrasts are more mixed in the second cross-validation.

Table 9 Percentage of correct classifications in double cross-validation analyses

Limitations

Like all studies, this one has its limitations. First, the faculty data are cross-sectional rather than longitudinal. They are, moreover, retrospective: Faculty members had to rely on their recollections of the curricular emphases and teaching practices they used when first teaching their focal course. Any subsequent changes may have been initiated several years prior to the survey.

Second, findings are generalizable only to faculty members in the disciplines studied. Although these fields award more than 83% of all undergraduate engineering degrees annually, faculty members in other disciplines may or may not have made curricular and pedagogical changes similar to those reported by respondents in this study. Indeed, based on Holland’s theory and given the findings reported here, one might reasonably expect variations from this study’s findings in other engineering subfields.

Third, two of the scales (“Applied Skills” and “Traditional Pedagogies”) have internal consistency (alpha) reliabilities (.52 and .49, respectively) notably below the conventional standard of .70. While such internal consistency problems certainly reduce the predictive power of a scale, these two measures nonetheless had statistically significant structure coefficients (Table 8), helping to validate the separation of the groups on both functions. One might reasonably argue that such contributions constitute prima facie evidence that the scales have predictive value. Because low reliability attenuates observed relationships, the contributions of these two measures are most likely understated in this study.
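
The attenuation point can be made concrete with Spearman’s classical correction, in which an observed correlation is deflated by the square root of the measures’ reliabilities. The numbers below are illustrative only, not calculations performed in the study:

```latex
r_{\text{corrected}} = \frac{r_{\text{observed}}}{\sqrt{r_{xx}\, r_{yy}}},
\qquad \text{e.g.}\quad \frac{0.30}{\sqrt{0.49 \times 1.00}} \approx 0.43
```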

Fourth, respondents to the Engineering Change faculty survey reported on curricular and pedagogical changes (or lack thereof) in an upper-division course that they teach regularly. Taken together in a study such as this one, those courses provide a reasonable window on engineering education nationally. It is possible, however, that faculty members might report more change if asked to comment on all their courses.

Finally, the areas of change on which the Engineering Change study focused may only partially reflect the course-related changes faculty members made. Curricular and pedagogical choices are complicated matters in both content and process, and the instructional approaches and curricular topics covered in the survey may not capture the full range of changes that faculty made in their engineering courses. Similarly, the items and scales used in this study may only partially reflect the complexity or degree of change in the areas studied.

Discussion

A number of scholars contend that faculty members’ academic field is among the strongest influences on their professional attitudes and behaviors (Braxton and Hargens 1996; Smart et al. 2000; Lattuca and Stark 1994; Lattuca and Stark 2009). The findings presented here add to that literature and support this assertion. The findings also provide, however, more nuanced understandings of disciplinary and institutional influences on faculty attitudes and behaviors.

Distinguishing Faculty at the Subdiscipline Level

Smart et al. (2000) suggest that higher education research should employ finer-grained categorizations of academic fields than those currently in use. Encompassing categories (such as the humanities or the social sciences), and even the clustering of subdisciplines within a diverse domain of knowledge (e.g., physics or agriculture) into a single group, can mask important differences among faculty. By subdividing engineering faculty in a nationally representative database into three Holland types, which aggregate faculty based on similarities and differences in motivations, knowledge, personality, and abilities, this study provides moderate evidence supporting the use of Holland’s classification scheme for distinguishing among college and university faculty within the same broad academic field, and thus the argument for more nuanced faculty classification schemes.

Engineering faculty members are not homogeneous across the subfields of their profession. The discriminant function analyses revealed variations (sometimes large, sometimes small) in the emphasis faculty in the three Holland types placed on professional and social contexts in their courses and in their use of active learning pedagogies in their classrooms. These distinguishing features are particularly noteworthy in light of recently mandated changes in undergraduate engineering education. The new EC2000 accreditation criteria for undergraduate engineering programs, phased in between 1997 and 2001, were expected to result in widespread curricular and instructional changes as engineering programs reorganized courses and pedagogies to focus on the development of 11 learning outcomes (Prados et al. 2005). The Engineering Change study, which evaluated the state of undergraduate engineering education before and after the implementation of the EC2000 criteria, demonstrated that the new standards contributed to significant changes in engineering programs, student learning experiences, and student learning outcomes (Lattuca et al. 2006). The findings also revealed differences among engineering subfields (such as mechanical and computer engineering), but small numbers of respondents in some of the seven subdisciplines studied precluded strong conclusions about their responses to the accreditation criteria; further research is needed.

The application of Holland’s typology permitted the aggregation of engineering faculty in six different subfields into three Holland types: Realistic (electrical and mechanical engineers), Investigative (aerospace, chemical, and civil engineers), and Enterprising (industrial engineers). This aggregation allowed further analyses of differences at the subdiscipline level. The differences found suggest that faculty in these subdisciplines, despite pressures to respond to the EC2000 mandates, did so to varying degrees—and the variations in their responses are consistent with expectations based on their Holland type.

Enterprising and Investigative engineers reported the greatest increases in emphasis on professional and ethical responsibilities, and on the societal and global implications of engineering solutions, in their courses. This finding is consistent with Holland’s (1997) description of individuals in Enterprising fields as tending to view problems in terms of their social influence and to value the development of leadership qualities. Investigative engineers reported somewhat less change than Enterprising engineers, but more than Realistic engineers, who reported the least change in their emphasis on professional and contextual knowledge and abilities. Holland’s theory predicts this ordering: a focus on professional and social issues is less characteristic of Investigative individuals, who place greater value on scientific and scholarly approaches to problems, and of Realistic individuals, who prefer structured solutions to problems.

The “engineering in context” function also distinguished engineers in different Holland types by their use of active learning techniques in the classroom. The EC2000 accreditation criteria stressed the development of students’ skills in teamwork, communication, and design. The development of such skills presumably requires changes in instructional techniques as faculty offer students in-class opportunities to develop and practice working in teams, making oral and written presentations and reports, and designing systems, processes, or components. Again, the Enterprising engineers were most responsive to the new accreditation mandates, reporting greater levels of emphasis on active learning in comparison to their colleagues in Realistic and Investigative fields. Holland’s typology notes that individuals in Realistic fields place a high value on technical and mechanical knowledge and skills, while Investigative individuals stress analytical and thinking skills. Such commitments might make Realistic and Investigative faculty less willing or perhaps less sure of how to engage students actively in class discussions, presentations, group work, and applications of technical knowledge.

Limitations in the dataset, which was designed specifically to measure the effects of EC2000’s implementation, may have constrained the ability to identify other important distinguishing features of engineering subfields. Nonetheless, the findings of this study are consistent with Holland’s predictions and lend support to the claim that neglect of differences among subfields may result in under- or overestimating faculty members’ willingness to accept undergraduate education reforms, even those mandated for professional accreditation.

The Role of Institutional Context

Institutional features are often used as control variables in studies of faculty work and student learning. In the present analyses, we followed this pattern, including a set of institutional characteristics (size, wealth, control, Carnegie classification, and NSF Coalition participation) as covariates in the multiple discriminant function analyses. We did not expect those institutional characteristics to largely define the second of the two discriminant functions identified. The control of the institution (whether it is public or private) and institutional type (based on the Carnegie classification) were the dominant coefficients on that function. This finding may reflect the nature of engineering education in the US, which is dominated by large, public universities, and suggests that conventional descriptors of institutional differences may be more influential in shaping departmental environments within an institution than some have suggested (see, for example, Pascarella and Terenzini 2005).

The identification of the institutional characteristics function, however, is also consistent with theory and research. Austin (1991), Smart and Ethington (1995), and Lattuca and Stark (1994) all argue that while the culture of academic fields is a strong influence on faculty work, institutional contexts mediate disciplinary preferences. Faculty members, in other words, modify their teaching and research activities to suit local conditions. Lattuca and Stark (1994) identified this tendency to foreground local contexts among national, discipline-based faculty task forces charged with reforming the undergraduate major. In an analysis of educational goals, Smart and Ethington (1995) substantiated the influence of both institutional type and academic field on goal preferences among college and university faculty. This finding is not particularly unexpected and it is somewhat reassuring. Although strongly influenced by their academic training and continuing involvement in their field of study, college and university faculty members are also cognizant of local needs and contexts and adjust their courses and curricula accordingly.

Conclusions

The findings of this study support several conclusions. They augment the growing body of research indicating that disciplinary environments are important factors in shaping faculty members’ attitudes and behaviors, while also suggesting the value of analysis at the subdiscipline level whenever possible. The findings, moreover, provide moderate support for the concurrent and construct validity of Holland’s theory of occupational careers and its utility in studying organizational behavior. These findings support Smart et al.’s (2000) proposition that the more-aggregated disciplinary categories typically used in educational research may fail to capture some of the subtleties of within-field variations in values, customs, and dispositions relating to curriculum, pedagogical practices, and change. Our analyses of faculty within engineering (which is often treated as a single academic field) reveal variations across subdisciplines in faculty inclinations to focus on the intersections of engineering practice with societal and ethical issues and in their use of active learning techniques.

These findings may be particularly important for two reasons. First, they suggest potentially important philosophical differences (and their curricular and pedagogical manifestations) within a field that plays a significant role in domestic and international economic, environmental, and quality-of-life issues. How these differences play out in engineering education over the next decade is a matter of some consequence. Second, the findings suggest that resistance (or openness) to particular educational reforms has disciplinary roots. In addition to the relatively uniform values faculty from all disciplines attach to such matters as autonomy in instructional and research decision-making, or to the importance and value of collegial decision-making, this study’s findings suggest that other values are also at work. The evidence indicates that a full appreciation of the dynamics of organizational change may require finer-grained theories and research than those currently available or in widespread use.

Practically, the findings are useful to those seeking to encourage educational innovation, who will need to tailor strategies to particular groups of faculty with particular predilections. Enterprising engineers, this study suggests, were more open than their colleagues in Realistic fields to the engineering accreditation agency’s recommendation that they increase curricular emphasis on professional, ethical, and social issues. An armchair analysis of the preferences of different groups of faculty based on Holland type could translate into the use of different formal and informal communications about desired curricular and instructional changes for distinctive groups of faculty. As important, the findings suggest that different groups of faculty will require different levels and kinds of assistance to develop and implement the desired curricular and instructional changes. Deeply embedded values and ways of thinking must be surfaced, reflected upon, and addressed if curricular and instructional change is to occur and be sustained.