Introduction

Faculty positions in Communication are not created equal. Course load, service responsibilities, and research expectations are but three factors that can vary for any given appointment. The Carnegie Commission (www.carnegiefoundation.org/classifications) classifies universities and colleges along a number of criteria; one such classification is whether a university awards doctoral degrees in arts, sciences, engineering, law, and/or medicine. The current paper examines the publication rates and position prestige of over 1,500 faculty members in Communication whose primary appointment is in a doctoral-granting program in the United States. The current analysis assumes faculty members affiliated with doctoral programs are expected to publish refereed journal articles in their area of expertise as well as to advise graduate students, many of whom eventually search for academic jobs.

Previous research on faculty productivity (e.g., Stephen & Geel 2007) is informative but casts a wide net when counting publications for Communication scholars, including individuals whose primary responsibilities are teaching and undergraduate advisement. The field of Communication would benefit from a more fine-grained examination of publication numbers for individuals whose positions require research productivity and doctoral-level advisement. Tenure, promotion, and hiring committees could then learn the distribution of publication rates for research faculty overall, by years since earning the doctorate, and by rank.

Ranking prolific scholarship

The practice of ranking scholars and counting publications in Communication is hardly new, and early efforts date back at least 20 years (e.g., Hickson, Stacks, & Amsbary 1989). A recent article by Stephen (2008) discusses the slippery science of ratings in Communication and encourages raters to consider several important issues when attempting to rate individual faculty members, universities, or programs. A central consideration is the criterion factor of ratings: What exactly should be counted? If only journal articles are counted (and not other scholarship), which journals and publishing years are included in the analysis? What counts as a publication (e.g., book reviews vs. research articles), and what database is referenced? Consider three recent examples of quantifying scholarship in Communication. Bunz (2005) counted publications across a five-and-one-half-year period (1999–2004) for eight journals published by the National Communication Association (NCA) and the International Communication Association (ICA) and listed the most prolific authors by rank for this time period. Hickson, Bodon, and Turner (2004) analyzed authorship in 24 Communication journals from 1915 to 2001. A recent study by Stephen and Geel (2007) represents the most comprehensive attempt to assess publication numbers, including 87 journals during publishing years 1915 through 2005. Thus, Bunz's analysis is the most restrictive in scope, while Stephen and Geel's (2007) is the most comprehensive; each of the three analyses, in its defense, had a distinct purpose. The Stephen and Geel (2007) study does not "name names" by listing prolific authors and instead examines normative publication productivity by year of degree completion. By contrast, the Hickson et al. article ranks the top 100 scholars in the field and the top departments that host prolific authors.

The current paper represents the first attempt to limit such an analysis to research faculty in Communication. There are certainly prolific scholars affiliated with programs that do not offer the doctoral degree, but it is our contention these individuals are exceptional, as faculty members with a heavy teaching load, a characteristic of these programs, are unlikely to have the time and resources to publish on a regular basis. Thus, previous efforts (Funkhouser 1996; Stephen & Geel 2007) may underestimate normative publication rates for research faculty by including the population of individuals who are members of the National Communication Association or all those who have published at least one article in the journals or periodicals included in the analysis (Stephen & Geel 2007). The current study will measure publication rates of individuals affiliated with one of the nearly 100 doctoral programs in Communication (see Barnett, Danowski, Feeley & Stalker 2010), and findings therefore generalize to the subset of over 2,000 faculty members whose primary responsibility is to produce original scholarship. The following research question is posed:

RQ1a: What are normative publication rates for research faculty in Communication?

Stephen and Geel (2007) further subdivide analyses by year of degree completion. This practice is warranted and will be replicated in the current study. It stands to reason that one who earned his or her Ph.D. in 1982 is likely to have published more articles than one who graduated in 1996, especially considering the length of time required to move an article through the editorial process and into print. The following research question relates to time since degree completion and rank:

RQ1b: What are normative publication rates for research faculty in Communication by degree cohort and faculty rank?

Publishing regularly in high-profile journals is still the currency whereby individuals and departments are evaluated.

Ranking programs by centrality in the hiring network

Higher education has long been interested in ratings and rankings for programs of study. The many attempts at rating programs have included various metrics, and the field of Communication is no exception. For example, in 2004 the NCA (www.natcom.org) used faculty members' rankings of programs across the discipline to rate programs in nine areas of scholarship (e.g., interpersonal, organizational, health). Thus, the NCA program, like many rating systems, relied on reputation to rank programs. A second example is Neuendorf, Skalski, Atkin, Kogler-Hill and Perloff (2007), who asked faculty members (n = 1,264) and department chairs (n = 248) to list the top three Communication programs offering a doctoral degree. Program nominations were weighted, and the 27 programs that received votes were ranked, with the University of Wisconsin–Madison earning the top position, followed by the University of Texas at Austin. A third method of program ranking, recently employed by Barnett et al. (2010), uses faculty hiring and placement data as the criterion factor. The authors suggest placement of new Ph.D.s in top-tier programs and hiring from top-tier programs should be the gold standard in rating graduate programs in the field. In summary, methods of ranking graduate programs can typically be sorted into two categories (see Stephen 2008): (1) reputational or subjective appraisals (see also Edwards & Barker 1979; Edwards, Watson, & Barker 1987) or (2) assessment through performance indicators, such as publications, grants, or books (Hickson et al. 2004).

The current study uses centrality in the faculty hiring network as the criterion measure for program quality. Although the use of faculty hiring as a proxy for prestige is new to Communication, it has been a measurement staple in several social science fields, such as Sociology (Burris 2004), Political Science (Masuoka, Grofman, & Feld 2007), and Economics (Pieper & Willis 1999). It is the primary purpose of a doctoral program to prepare students and place them in research positions so they may conduct social scientific research (Barnett et al. 2010). Barnett et al. argue,

A more fundamental component in a given doctoral program is the quality of graduate education in the form of courses, independent research, and preparation for careers as Communication scholars. Thus, a legitimate measure of department quality is its ability to place and have its graduates promoted to high quality (i.e., highly ranked) departments and universities (parentheses added, p. 390).

Measuring the centrality of a program relies on network analysis; a department is considered prominent if it is particularly visible to other departments in the network (Knoke & Burt 1983). A particular form of prominence is prestige. A prestigious program can be defined as one that is the recipient of extensive ties (Wasserman & Faust 1994). The tie, in this case, is the hiring of a faculty member with a doctorate from one institution by another. Prestige is a measure of centrality where the direction of the relationship is known, such that graduates sent from one program can be distinguished from graduates received by another. In the case of centrality in the hiring network, highly prestigious programs are chosen more often and at the same time choose other prestigious programs. Thus, a program with high in-degree prestige hires from high-profile programs, whereas high out-degree prestige indicates a program places many of its graduates into quality programs.
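To make the directional logic concrete, the following sketch builds a toy hiring network in Python using the networkx library. The programs and ties are fabricated for illustration and are not drawn from the study's data; an edge (A, B) records that program B hired a graduate of program A, so out-degree counts placements and in-degree counts hires.

```python
# Toy hiring network; all programs and ties are hypothetical.
import networkx as nx

G = nx.DiGraph()
# Edge (sender, receiver): the receiver hired a graduate of the sender.
G.add_edges_from([
    ("Program A", "Program B"),
    ("Program A", "Program C"),
    ("Program B", "Program C"),
    ("Program C", "Program A"),
])

for program in G.nodes:
    # out-degree = graduates placed; in-degree = faculty hired from elsewhere
    print(program, "placed:", G.out_degree(program),
          "hired:", G.in_degree(program))
```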

Doctoral advising

Any analysis of the placement of doctoral students into faculty positions would be incomplete without consideration of the advising process: the advisor mentors graduate students and often publishes with them, and an advisor's reputation is certainly a relevant factor for search committees when considering candidates for employment. Advising in doctoral education has not been investigated in relation to student placement. The lacuna of research in this area is noticeable when considering the importance of the advisor in graduate education (Barnes & Austin 2009; Zhao, Golde, & McCormick 2007). At its best, the advisor-student relationship represents the ideal mentor-protégé arrangement. However, the advising relationship can be uneven, and there are numerous examples of poor advising. Recent research documents advising behaviors that affect doctoral student satisfaction (Schlosser & Gelso 2005; Zhao et al. 2007). Based upon the previous discussion, the following hypothesis is advanced for publication rates and position centrality in doctoral programs in Communication:

H1: There is a positive relationship between number of publications and position centrality in the hiring network.

It is predicted that students who study under more published advisors will be placed in more prestigious academic jobs, for at least three reasons. First, if hypothesis one is supported, a student of a highly published advisor is by that fact more likely to be graduating from a higher-profile department. Second, students learn from modeling of behaviors, and prolific authors likely demonstrate publishing drive and success. Third, an advisor who publishes in the field's premier journals is likely highly respected and well known, and this will advantage the graduating student in the job search. Hypothesis two is proposed:

H2: There is a positive relationship between number of advisor publications and position centrality in the hiring network.

Considering various factors related to faculty position, the following research question is advanced:

RQ2: Do advisor publications, individual publications, and centrality of doctoral program predict position centrality among the network of Ph.D. granting programs in Communication?

Both the scope and focus of the current analysis are unprecedented in Communication. An attempt is made to study the census of faculty members affiliated with the 98 doctoral-granting programs in Communication. Care will also be taken to document: (1) year of one's doctoral degree, (2) information on one's advisor, (3) publication counts from two databases, and (4) information related to one's doctoral degree program.

Method

Establishing database of scholars

The population of interest is faculty members with earned doctorates employed in tenured or tenure-track positions in doctoral degree programs in Communication. The final sample of scholars was taken from a recent study by Barnett et al. (2010), and the reader is directed to this study for greater detail on establishing the database of scholars. Data were initially collected on 2,194 faculty members at 102 programs housed at 74 different institutions in the United States. Due to either incomplete information or indication that a faculty member did not have an earned doctorate, the final database includes information on 1,581 faculty members from 98 doctoral programs in 74 unique institutions. This represents an estimated 72% of the population of research faculty in the field. Analysis of program websites and emails sent to 1,488 individuals (71% response rate to email requests) was used to establish the data fields for subsequent analyses. An attempt was made to gather the following information for each faculty member: (1) year of degree, (2) degree program, (3) name of advisor, (4) rank, (5) number of publications for individual and advisor, and (6) centrality of current position in the hiring network. The sample size of each analysis varies by completeness of data for a given factor, and degrees of freedom are reported to reflect this variability.

Measures

Publications

Communication and Mass Media Complete (CMMC; www.ebscohost.com) and the Communication Institute for Online Scholarship (CIOS; www.cios.org) were used to count publications by individual faculty members, advisors, and programs. Publications in both databases were restricted to refereed journal articles. After establishing reliability in coding and recording publication information (e.g., disambiguating common last names, distinguishing journal articles from books), both databases were searched for the number of publications for all faculty members in March and April of 2009. CIOS covers 118 journals in Communication; CMMC is more expansive, covering 612 peer-reviewed journals. For regression analyses, a decision was made to use the number of CIOS publications for individuals rather than the number of CMMC publications. This decision was based on the collinearity between CMMC and CIOS counts (r = .80, n = 1,578) and the fact that CIOS limits coverage to journals specific to the field of Communication. Thus, CIOS publications potentially represent greater visibility of a scholar in the field of Communication.
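As a hedged illustration of the collinearity check described above, one could compute the correlation between the two databases' publication counts as follows. The counts below are fabricated, and the variable names are ours, not the study's.

```python
# Collinearity check between two publication-count measures (fabricated data).
import numpy as np
from scipy.stats import pearsonr

cmmc = np.array([12, 0, 3, 25, 7, 1, 9, 14])  # hypothetical CMMC counts
cios = np.array([8, 0, 2, 18, 5, 0, 7, 10])   # hypothetical CIOS counts

r, p = pearsonr(cmmc, cios)
print(f"r = {r:.2f}, p = {p:.3f}")  # a high r signals redundant predictors
```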

Position centrality

Eigenvector centrality was used to measure position centrality (see Barnett et al. 2010, for a full description) in the Communication hiring network. It is an ideal measure for the hiring network because eigenvector centrality weighs the strength of ties between pairs of programs, rather than the mere presence or absence of a relationship. Thus, the total number of ties between programs is considered, rather than a dichotomous measure of whether a program has or has not hired from another program. The computation of eigenvector centrality also considers indirect ties, as well as the strength of ties to more central programs. Eigenvector centrality (EC) determines how central a program is based upon its loading on the largest eigenvector of the network's socio-matrix (Bonacich 1972). Barnett et al.'s (2010) findings indicate the measure correlates with Neuendorf et al.'s (2007) rankings from faculty members of 28 programs in Communication (r = .75, P < .001). The EC scores for all programs are reported in Barnett et al. (2010). Centrality scores using the eigenvector measure were computed both for current faculty program affiliation and for the program where one earned his or her doctorate. Pedigree is used to label the centrality of one's doctoral program in the hiring network. A significant number of individuals did not earn doctorates in Communication, and thus pedigree scores were not computed for these individuals; analyses involving pedigree are therefore based on a reduced number of data points.
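The following minimal sketch shows the idea behind the Bonacich (1972) measure: a program's score is its loading on the principal eigenvector of the network's socio-matrix. The 4 × 4 matrix is fabricated, and symmetrizing the matrix before decomposition is our simplifying assumption, not necessarily the procedure used by Barnett et al. (2010).

```python
# Eigenvector centrality from a hypothetical socio-matrix (Bonacich 1972).
import numpy as np

# Entry [i, j] counts graduates of program i hired by program j (fabricated).
A = np.array([[0, 3, 1, 0],
              [1, 0, 2, 1],
              [0, 1, 0, 2],
              [1, 0, 1, 0]], dtype=float)

S = A + A.T                            # simplifying assumption: set direction aside
vals, vecs = np.linalg.eigh(S)         # eigendecomposition of the symmetric matrix
ec = np.abs(vecs[:, np.argmax(vals)])  # loadings on the largest eigenvector
print(ec / ec.max())                   # normalized so the top program scores 1.0
```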

Results

Analysis plan

Raw data are reported for descriptive statistics. For Pearson correlations and multiple regression analyses, a square root transformation was applied to CMMC and CIOS publication data in an attempt to normalize the distribution (Cohen & Cohen 1983). The square root transformation achieved its purpose in significantly reducing the skewness in both data distributions (e.g., CIOS skewness was reduced from 2.24 to .74).
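A minimal sketch of this transformation, assuming fabricated publication counts in place of the study's data, is shown below; scipy's skew function quantifies the reduction in asymmetry.

```python
# Square-root transformation to reduce positive skew (fabricated counts).
import numpy as np
from scipy.stats import skew

pubs = np.array([0, 0, 0, 1, 2, 2, 3, 5, 8, 13, 21, 40], dtype=float)
print("raw skewness: ", round(skew(pubs), 2))
print("sqrt skewness:", round(skew(np.sqrt(pubs)), 2))
```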

Research questions 1a and 1b: normative publishing rates

The data on publishing numbers were positively skewed, with zero publications representing the modal number of articles. For CMMC data, 19% of the sample had zero publications, whereas 29% of faculty members had zero publications in the CIOS database. Compare these estimates with Stephen and Geel's (2007) findings, which indicated 39% of individuals had zero publications. The average faculty member affiliated with a doctoral program in Communication has published 9 articles in CMMC (M = 9.27, SD = 14.52) and 6 articles in CIOS (M = 6.41, SD = 9.20). The standard deviations indicate a wide distribution in publishing rates, and the range of articles published in CIOS was 57. Table 1 shows the number of publications by faculty rank; as one would expect, higher rank is related to a greater number of publications, with the largest increase shown between associate and full professors. Examining the data by degree cohort, it is clear the number of years since earning the doctorate is positively related to number of articles published. Not reported in Table 1 is the number of publications from professors with "other ranks" (e.g., Emeritus; M = 8.18, SD = 11.51, N = 99). Table 2 reports publishing data by degree cohort with direct comparison to Stephen and Geel's (2007) publishing means in CIOS. These data support the contention that research faculty publish significantly more than scholars overall. For example, a research faculty member earning his or her doctorate in 1975 or earlier has published 12 times as many articles as scholars overall. This difference would likely be even larger if Stephen and Geel's data were recalculated with research faculty members removed (see footnote 1).

Table 1 Publications by faculty rank
Table 2 Publications in CIOS journals by degree cohort

Hypotheses 1 and 2: predicting position centrality

Hypothesis 1 predicted number of publications would be positively related to centrality in the hiring network. Table 3 presents Pearson correlations between pairs of study factors; the data indicate statistically significant relationships. Results for CIOS (r = .24) and CMMC (r = .21) publications support hypothesis 1. The centrality of one's doctoral program in the hiring network was also positively related to current position centrality (r = .22). Hypothesis 2 proposed a positive relationship between number of advisor publications and position centrality. The data indicate a small but statistically significant relationship between advisor publications and position centrality in the hiring network (r = .06; see footnote 2).

Table 3 Zero-order correlations for key study factors

Research question 2: multivariate analysis of predictors

Table 4 presents the results of four multiple regression analyses predicting network centrality. Overall, publications, advisor publications, and centrality of doctoral degree together predict centrality, F(3, 997) = 32.78, R² = .09. Analysis of the standardized beta coefficients indicates that number of publications and doctoral degree centrality significantly predict centrality when controlling for all factors. Interestingly, degree centrality (pedigree) fails to uniquely predict centrality for assistant professors, while the largest beta coefficient for pedigree is found among associate professors (β = .23). Across all ranks, number of advisor publications failed to predict program centrality.
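A sketch of a regression in the spirit of RQ2 appears below. It is not the authors' code: the data are simulated, and the variable names are hypothetical stand-ins for the study's measures (square-root-transformed own publications, advisor publications, and pedigree predicting position centrality).

```python
# Simulated multiple regression predicting position centrality.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1001                                   # mimics the df reported above (3, 997)
own_pubs = np.sqrt(rng.poisson(6, n))      # sqrt-transformed publication counts
advisor_pubs = np.sqrt(rng.poisson(10, n))
pedigree = rng.random(n)                   # centrality of doctoral program
centrality = 0.3 * own_pubs + 0.2 * pedigree + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([own_pubs, advisor_pubs, pedigree]))
print(sm.OLS(centrality, X).fit().summary())  # F, R-squared, coefficients
```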

Table 4 Results of multivariate regression predicting position centrality

Supplemental analyses at program level

Supplemental analyses were undertaken to examine publishing rates in CIOS-indexed journals by doctoral program. It should be noted that all analyses were undertaken at the program level, not at the institutional level, as many institutions host multiple doctoral programs (e.g., University of Texas-Austin, Michigan State University, Indiana University). Previous attempts ranked publishing at the institution level and used only publication frequency among scholars in the Top 100 list (Hickson, Stacks, & Bodon 1999) as well as a restricted number of journals (e.g., 24 in Hickson et al. 1999).

Two measures were used to assess publication rates at the program level: the mean and the number of publishing stars. The mean simply calculates the average number of publications per faculty member. Due to the variability of publishing rates and the high percentage of faculty who have published zero articles, a publishing-stars measure was developed. A publishing star is defined as an individual who has published at least one standard deviation above the mean number of articles for research faculty. Stars in this analysis are the 11% of faculty members (N = 201) who had 16 or more articles in CIOS as of spring 2009.
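As a sketch under stated assumptions (the data frame below uses fabricated counts and made-up program labels), the star rule and the two program-level measures might be computed as follows.

```python
# Flag "publishing stars" (>= mean + 1 SD) and aggregate by program.
import pandas as pd

df = pd.DataFrame({
    "program":   ["A", "A", "B", "B", "B", "C"],
    "cios_pubs": [2, 20, 0, 5, 30, 7],    # fabricated publication counts
})

threshold = df["cios_pubs"].mean() + df["cios_pubs"].std()
df["star"] = df["cios_pubs"] >= threshold

by_program = df.groupby("program").agg(
    mean_pubs=("cios_pubs", "mean"),      # program-level mean
    stars=("star", "sum"),                # number of publishing stars
)
print(by_program.sort_values("mean_pubs", ascending=False))
```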

Tables 5 and 6 list the top departments by mean number of publications and by the number (and percentage) of publishing stars on the faculty. Also listed is each program's eigenvector centrality rank (out of 97) as reported by Barnett et al. (2010). As evident in the tables, Michigan State (MSU; Communication) and the University of California-Santa Barbara (UCSB) top both lists in terms of CIOS publications. It is worth noting the average faculty member in Communication at MSU, UCSB, and West Virginia publishes one standard deviation above the average research faculty member in Communication. In addition, the two doctoral-granting programs at MSU rank in the top three in terms of publishing stars.

Table 5 Top programs by publishing rate
Table 6 Top programs by publishing stars

Discussion

This study indicates that two factors, pedigree and publishing, significantly predict the prestige of one's position among research programs in Communication. Centrality was used as a proxy measure for the prestige of an academic job, as is common practice in the social sciences. When study findings were broken down by faculty rank, it appears that pedigree matters more for associate professors (β = .23) than for assistant professors (β = .08) or full professors (β = .13), while the unique effects of publishing remain consistent across ranks. The current data prohibit a clear explanation for this finding. It could be speculated that as a highly published scholar becomes senior at the professor rank, pedigree may matter less in promotion and/or mobility to a more prestigious program. With respect to the predictive role of pedigree at the assistant rank, it appears to influence position centrality significantly less than number of publications. The take-home lesson may be that one seeking a faculty position at a prestigious Communication program should maintain an active and continuous presence in the field's journals.

Perhaps the most unexpected finding is the nonsignificant relationship between number of advisor publications and position centrality in the multiple regression equations. The bivariate relationship was small (r = .06, P < .05), and when controlling for pedigree and publications, the relationship all but disappears (β = −.01).

Four caveats should be considered when interpreting the current study's findings. First, the number of articles published by an advisor may be of limited value, and future research should consider an advisor's reputation or impact in terms of citation counts or textbook authorship. Second, because Communication faculty typically produce numerous types of scholarship other than journal articles, such as books, films, videos, and computer software, it is likely that journal articles alone do not accurately capture the range of academic activity of faculty at research institutions. This may be especially true for the high percentage of faculty who received their degrees outside of the field or the United States and may publish in alternative outlets. Third, there is likely a bleeding effect of university reputation on program reputation. For example, Barnett et al.'s (2010) findings indicate Stanford's program is ranked fifth in the field in placing its graduates in doctoral-granting departments (n = 36) despite having one of the smallest faculties in the country (n = 8). Fourth, 25% of research faculty in Communication earned doctorates in fields other than Communication. As the field matures, this percentage is likely to decrease, but certainly one's doctoral training may affect where she or he publishes.

Future research would go far to explain the nearly 91% of variance in faculty position centrality that remains unexplained. Perhaps one's area of expertise or preferred methods of research may explain position in the hiring network. It may also be worthwhile to consider co-authorship networks in relation to position centrality. Does one publish alone or in small groups of authors? Does one continue to publish with his or her advisor or with departmental colleagues? Related is the possibility that certain high-profile departments may create (or demand) a culture of productivity, particularly among junior faculty members. Perhaps social learning is taking place (Bandura 1977), with doctoral students observing faculty in addition to their advisor and determining that publishing in refereed journals is the ticket to a faculty position in a doctoral-granting degree program.

Another consideration is one's area of research, as publishing trends may reflect popular or faddish hiring areas. For example, a content analysis of NCA job ads (as of 10/22/09) using our content codes indicates 16% of jobs are in mass communication, 13% in organizational communication, 12% in new media, and 9% in rhetoric. Thus, one studying interpersonal, health, or small group communication may have fewer options for faculty positions.

The pattern of findings related to normative publishing rates among research faculty indicates two trends. First, almost 30% of the sample has never published an article in the field of Communication. As these individuals are tenure-track or tenured at research programs, it is unclear whether they publish at all or publish in journals outside of Communication; alternatively, some scholars may author books or monographs in their area of study. Second, the data show great variance in publishing rates at both the faculty-member and program levels. An individual seeking to earn a doctorate in Communication and a position at a research university would do well to be admitted to a high-profile program whose faculty members publish routinely in the field's academic journals. In many ways, the current data and the data from Barnett et al. (2010) mirror findings in the other social sciences (e.g., Burris 2004; Fowler, Grofman, & Masuoka 2007; Masuoka, Grofman, & Feld 2007) indicating that the handful of programs at the top are tightly interconnected in the hiring network and that top programs often hire their own graduates over time.

Noticeably absent from the current study is any attempt to measure the impact or quality of faculty publications. It is understood that all publications are not created equal, and many programs require faculty to focus their publishing on certain journals based upon impact factors. The journal impact factor is an imperfect measure (Pendlebury 2009), although data for Communication show consistent impact trends over time for journals indexed by ISI's Journal Citation Reports (Feeley 2008).

One implication of the current data concerns the use and interpretation of publishing data for tenure, promotion, or hiring cases at research programs in the field of Communication. The question of how many publications a research professor should have will likely continue, and the current study does little to quell this debate. However, it is now known how many publications in CIOS-indexed periodicals faculty members do have, by rank and by degree cohort. The current study uses refereed journals and does not include books, grants, and other common forms of scholarship (e.g., convention papers, conference proceedings). Responsible use of the current findings in faculty evaluation would also consider the substance or quality of one's research, one's service and teaching record (including advisement), and the local, national, and international impact of a scholar's work.