Introduction

It has been an honor to serve as this association's president. While I have several critiques of the American higher education system that are the focus of my address, I look at the Southern Criminal Justice Association (SCJA) as an example of what is truly right with academia. This association is dedicated to strong research, collaboration, and student mentorship. We empower students by offering constructive feedback and working together to yield quality research and an outstanding journal. My career is stronger because of the mentorship of experienced SCJA leaders like Dr. Mitch Miller and Dr. Marv Krohn. It was a special honor to directly follow Marv as SCJA president. His outstanding academic record, exemplary teaching style, and unique perspective have led several in this association to model our efforts after his own. He also, unknowingly, through his selection of thematic panels and his own address in 2018, laid a solid foundation and rationale for the address I was envisioning for this conference.

Marv aptly asked us to consider whether we, as individuals and a field, have made the positive impact on the criminal justice system that we desire. More simply, he asked: Are we making a difference? (Krohn, 2019). He recruited a bevy of respected and talented academics to detail their thoughts on subsets of our field's progress. Though the works of Waldo, Blomberg, Lizotte, and others (Blomberg, 2019; Lizotte & Hendrix, 2019; Waldo & Myers, 2019) were quite nuanced, Krohn's (2019) address provided a compelling "big picture." He largely concluded that criminology and criminal justice research had made a difference, but that difference was limited. At the very least, we haven't made as meaningful a difference as possible. Perhaps more worrisome, Marv cautioned that making a difference was infrequently the central motivation for our ongoing work (Krohn, 2019). The thoughtful critique in that address changed my framing of this one from "what is wrong with academia?" or "structural barriers to efficiency in research" to "how can we address the problems within academia to facilitate a more meaningful impact?"

I broadly titled this conference "Improving the Efficiency and Quality of Criminology Research and Education" as I want to focus on the systematic practices that have slowed our research progress or hindered social reform based on criminal justice research. It is imperative to identify structural, political, and organizational roadblocks that we may need to circumvent in order to be more efficient, more effective, and more motivated as criminologists. While having to focus on logistical nuance in university systems may be tedious, it is not without merit. Minor changes to the environment that facilitates our research may have a major impact on our productivity in the same way that minor changes to a child's diet, social interactions, or discipline structure may influence their adolescent behavior.

“Academia Shrugging” Destroys our Motivation

Ayn Rand's complex and pivotal work Atlas Shrugged has long been seen as an insightful commentary on philosophy and economic practices (Rand, 1957). Regardless of your views on objectivism, reason, self-interest, or capitalism, the work presents a perspective that assists us in understanding how seemingly humanitarian policies can serve to alienate individuals and diminish productivity. Of course, that's quite the oversimplification of the numerous interwoven themes that comprise Rand's work. Here's the challenge I offer: Reread Atlas Shrugged and envision her words and characters as representative of academic roles, conflicts, and our so-called victories. I'll avoid offering examples so as not to cloud your thoughts if you entertain this exercise. Afterwards, and perhaps even before this address, I expect that you'll agree that the field of criminology hasn't fulfilled its potential in making a difference partially because we have created a system that weakens or redirects motivation.

One of the central comparisons I draw between Rand's work and academia is that differential performance does not yield proportionally differential rewards (Christensen, Manley, & Laurence, 2011). We obviously perform different tasks and are differentially successful in our research endeavors, yet we are subjected to a system that often fails to recognize or reward successful performance over that simply deemed adequate (Lewis, 2018). We endure policy and treatment from a hierarchy not unlike Rand's State Science Institute that robs us of our motivation to excel (Bugeja, 1994). Our annual review processes and evaluations are often a sham. They allow groupthink to justify equal ratings and equal treatment for all. Better performance should be rewarded, but as a field we interpret heightened performance as the obligation of those individuals. How can such a system not encourage a strike of the mind like Galt's? Many of you know this example hits close to home for me and that I've described certain environments as a microcosm of the dystopian society Rand describes.

Obviously, I haven't sought out Galt's Gulch or followed through with my jokes about trying to make it to the big leagues as an umpire. I largely identify with Henry Rearden, who experiences a rough situation but continues to fight due to pride in his creations and a strong sense of duty (Rand, 1957). I expect that the people in this room do as well. Each study you complete is your own "Rearden Metal"; you want to be rewarded, but you enjoy the process of creating something worthwhile. You know that there is little reward for success or extra work (unless you move), but your integrity doesn't allow you to settle for the standard. Yes, we're probably all Reardens here at this conference, but that doesn't mean the field isn't losing out on productivity with each Francisco or Galt who achieves less than their best or switches fields because they aren't rewarded. Even Hank eventually succumbs (Rand, 1957). In order to motivate academics to produce strong research and great works that make a difference in society, the system of rewarding progress must be renewed. Higher education is facing near-unprecedented budget challenges, but it is necessary to discover a way to champion success. We need to find a way to endorse equality but be accepting of unequal success.

Avoiding the potential equal outcome bias may take work. We have to be willing to fall short ourselves and to critique others. We need to avoid the tendency to program towards equal outcomes by misappropriating resources. For example, there may be a tendency to assign committee and teaching work disproportionately to yield apparently equal outcomes in research yields; more research gets you more service. There may even be a hiring bias towards not the most productive candidates but the candidates that best mirror the productivity of the search committee. We also have to hold our expectations of authorship constant regardless of the overall productivity or status of colleagues. One author's productivity doesn't entitle a colleague to an easy ride-along on a manuscript.

Redundancies, Bureaucracies, and Distractions

Since the turn of the century, university positions for administrators and managers have increased at an alarming rate. The number of individuals employed to manage universities is growing 50% faster than the number of those in teaching and research roles. Belkin and Thurm (2012) point out that the growth rate in management or administrative positions is twice that of the growth in student population. Labeled by critics as "administrative blight" or "administrative bloat," this trend certainly affects university finances, passing increased costs to student consumers and limiting funds available for research and teaching (Zywicki & Koopman, 2017). Of course, the university may benefit from these hires via increased rankings or enrollment growth, but I was unable to identify a single study indicating that significant increases in administrative staffing yielded subsequent growth sufficient to restore the preceding faculty-to-administrator ratio. Countless new "deanlets" and "deanlings," as Ginsberg (2011) labels the excessive roles in The Fall of the Faculty, don't necessarily result in a better university experience.

Perhaps universities simply have to keep pace with growing governmental bureaucracies. A Vanderbilt University (2015) study of over a dozen schools estimated that large universities now spend 4 to 11% of their budgets fulfilling federal regulatory compliance. That extrapolates to $27 billion per year across the country. An amenities arms race also contributes as campuses build specialty centers, lazy rivers, climbing walls, and anything with a fancy name that might lead to more student dollars or bigger donations. Each requires a new administrator.

I venture the truth lies closer to C. Northcote Parkinson's assessment of bureaucracies in the British Navy. He noted that the admiralty nearly doubled between 1914 and 1928 and quadrupled prior to 1950 despite the number of ships, men, and officers being a mere fraction of what they were decades earlier (Parkinson & Lancaster, 1958). His work, both humorous and mathematical, summarizes how officials make work for one another rather than simply cooperating to complete tasks. They attempt to multiply subordinates, regardless of need, insulating themselves from rivals. Their group's work fills the time and staffing available regardless of challenges. He concludes that these forces yield a 5–7% annual growth in the size of bureaucracy "irrespective of any variation in the amount of work (if any) to be done" (Parkinson & Lancaster, 1958, p. 7).

To illustrate how Parkinson's tenets may function in academia, I'll draw from my graduate assistant experience (outside of criminology). I was working in a role that required tedious and redundant paperwork. It clearly was inefficient. I offered suggestions to my supervisor on how to eliminate duplicate paperwork and volunteered to format a database that would auto-populate the different documents after an initial data entry. I was quickly cut off and told, "no, if we did that we wouldn't need as big of a staff." She discarded the suggestion in order to continue to justify staffing. I have no doubt other meaningless work was created to justify later growth and that promotions were "earned" by meeting goals related to those meaningless tasks.

This process eventually leaves our universities with countless offices and programs that overlap more than they diverge. It seems redundant to have distinct specialty centers for students who take nighttime classes, those who take summer classes, non-traditional students, and left-handed students. There are so many advising and career centers that students at some universities likely need advice on where to get advice. So many groups and programs function to promote undergraduate research that students may need to complete a study to decide which one to approach first. Kidding aside, we must realize that these redundancies are inefficient, and they inevitably draw financial resources away from teaching and research.

More directly, we have allowed these excesses to draw our time away from developing course materials and conducting research. Departments are forced to nominate representatives to needless centers and committees. We have to read papers, shuffle reports, and respond to emails that mean little to teaching or the university experience. Instructors are required to submit unnecessary paperwork. Recently, I received an email because my courses were selected for a new program meant to identify students at risk in introductory coursework due to problematic attendance or poor early grades. Students identified as having missed class or earned poor grades would receive an email providing that information. Of course, the university already had a system in place for mid-semester grade reports of that style. Both seem unproductive—students already have access to their grades online and should know whether they have attended class. I was tempted to fill in each line with a warning that the student had attended only two lectures (accurate given the timing of the request), but I doubted the group overseeing the project would recognize my commentary as derision for reporting required before an instructor could have any meaningful feedback on 120 different students.

Faculty Input Is Not Always a Good Thing

The redundancies in bureaucracies create work of negligible import for faculty, but it may be more concerning that we ourselves often create work that distracts from our research in the effort to maintain faculty ownership of departmental and college activities. As professors, we are well-trained researchers and instructors. Rarely do our credentials include business administration degrees or business experience, yet we fight to maintain our stake in every small administrative decision (Jones, 2011; Kaufman-Osborn, 2017).

At most universities, the faculty has a voice in selecting its leadership after hearing about candidates' strategic plans and management styles. That faculty selects a leader and then often fails to adequately entrust him or her with tasks that someone in that position is capable of accomplishing unilaterally (Jones, 2011). In many instances, it becomes more work for that individual to present options to the group than to make a decision. As scientists, we are trained to make informed decisions; we take the time to thoughtfully consider any issue on which we are asked to offer an opinion. Doing anything less runs counter to the underlying tenets of our profession. The time spent researching options and offering debate necessarily hinders criminology research productivity. I would hate to see an assessment of how much time is wasted in preparation for faculty meetings, reading and researching administrative issues that are no more than minutiae. I propose we increase our focus on research by truly empowering those selected as administrators to make decisions of minor and moderate importance. Perhaps this change might even reduce the need for so many administrators.

Make no mistake, I believe faculty ownership of curricula and the education system is critical; however, faculty input can be reserved for major decisions with substantial consequences. A department administrator is capable of independently performing annual reviews, making decisions related to advising structure, hearing student complaints, planning department initiatives, and certainly selecting the wording of most documents. We need to loosen the grip; save our insight for major decisions like tenure, hiring, program modification, and limiting intrusions from bureaucratic redundancies. This is also likely to result in more, not less, efficiency for the administrator. For the minor tasks and minutiae I reference, it is likely that the presentation of viewpoints, gathering of opinions, and hearing of faculty concerns exhausts more effort than unilateral decision making.

I'm sure this topic triggers a memory of a small-stakes issue that demanded much of your time simply because you were expected to offer an opinion and debate the topic. Perhaps we should share some of those stories at tonight's reception for a few laughs. I'll tell you about the hours Georgia Southern's faculty spent weighing "Criminology and Criminal Justice" versus "Criminal Justice and Criminology" as its moniker. While humorous in retrospect, these minor debates can have significant consequences and create a political minefield, particularly for untenured faculty. I initially avoided debates on issues that I believed did not require the chair to seek faculty input. I cast abstaining votes to indicate my opinion that the matter should have been decided unilaterally by a department chair. A more senior colleague pointed out that other faculty members held equal contempt for abstentions and for votes against their positions. She was right; I was accosted by a faculty member offended by my lack of endorsement for an issue they championed that had little effect on our department. In our present system, junior faculty are forced to spend time weighing in on minutiae to evidence departmental investment—a process that can both distract from our goals and create conflicts.

Committee work can be viewed similarly. Tasks can be assigned to capable individuals rather than capable committees. If we entrust a colleague with ownership of courses and chairing student work, we can surely allow them to unilaterally plan a department awards ceremony, oversee honors admissions, or accomplish any other similar task. Why do we exchange countless emails about centerpiece selection for a small awards gathering when one person could have planned the entire event in less time were they not expected to gather input from others on every detail? We could accomplish so much more in terms of research and teaching if we were not obligated to offer opinions on issues that do not require them. Unfortunately, we'll likely continue this process until a collective change is made. If the faculty generally desires to maintain that tight grip on each detailed decision at the committee, department, or college level, we remain forced to offer insight to avoid negative performance reviews. We debate the minutiae because we must.

Accreditation Is a Sham; Worse, It’s Wasted Time

I have no doubt that the early days of university, college, and program accreditation were driven by good intentions and a desire to ensure students received a quality education. A hundred years ago, a potential student needed reassurance that a program would serve their needs—school comparisons were largely confined to libraries, and teens lacked access to the plethora of online information we have today. At some point, however, accreditation began to lose its meaning due to "accreditation mills" (Hallak & Poisson, 2007). It no longer informed. Even legitimate efforts at maintaining programmatic consistency have languished. Accreditation became more about meeting strict guidelines than delivering a quality educational program (Harvey, 2004; Winter et al., 2009). I argue it is time to push aside the accreditation bureaucracy and allow creativity back into education.

It is reasonable to ask whether accreditation has any meaning to us; does it have some intrinsic value that unifies the field or empowers instructors? I tend to agree with Hubner (2017), who says, "If we're honest, we'd say that institutions seek accreditation because they'll be punished if they don't." Perhaps some of that lack of value is tied to the dichotomy of outcomes. Accreditation fails to indicate whether an institution is good, or even great; it simply deems it adequate for approval. It should be safe to assume that schools don't derive collective pride from adequacy. From a logistical standpoint, accreditation fees are generally paid by the institution, and thus the accrediting agency has a financial interest in schools achieving, or at least being judged to achieve, that level of adequacy. This may devalue the accreditation further if we see it as schools simply "getting what they pay for."

In addition to the lack of true value, the process does little to improve or even assess academic rigor. It becomes an exercise in pushing paper, albeit a time-consuming one. Accrediting agencies are actually motivated to add complexity to the process and create needless requirements; both justify additional funding and resources. Thus, countless faculty hours are dedicated to self-studies of their programs and universities. Committees meet, gather information, and produce hundreds of pages of redundant documents to meet accreditation standards. They create strategic plans that "check boxes," regardless of whether those plans are logistically or financially feasible. After all, much of accreditation is about whether the school has plans that meet assessment criteria as opposed to whether previous visions were realized.

Using strict assessment criteria also creates the potential for political misuse; academia can be held to an accrediting agency's agenda. Curricula can be dictated, and resources redirected, through alterations to accreditations. I recall an instance where the fear of losing program accreditation forced administration to allocate new hires for a program where students already outnumbered faculty (perhaps directing resources from a blossoming field like criminal justice with large enrollments). Strict protocols also restrict diversity in program format; ideally, each of our departments should provide a distinct program that matches the needs of our students. This diversity is particularly important in graduate education, where students benefit from programs that are strong in corrections, sentencing, juvenile justice, or some other subtopic that matches their interests and aspirations. Each additional box to check leaves fewer credit hours for innovative coursework. If an accreditation group required all criminal justice programs to have coursework specializing in crimes committed by left-handed individuals, I assure you, "left-handed crimes" would replace a meaningful specialty course.

I applaud ACJS for initiating a moratorium on its accreditation program. It no longer matched the needs of the field; however, there remains a need to eliminate or overhaul the overall university accreditation processes. Each hour that they require us to pursue meaningless documentation is an hour not spent on meaningful teaching or research. As accreditation is currently tied to certain loan availability, a vetting of programs remains necessary, but a reformatted indicator tied to graduates' employment rates would both be a better tool and require less faculty time.

Having “Talking Points” and Making a Broad Difference Are Not Equivalent

My support for increased administrator autonomy may seem incongruent with criticisms of accreditation and management evaluation practices. I assure you it is not; administrators should be entrusted with decision-making authority. A concern is that position renewal, promotion to other positions, financial compensation, or hiring at a different university often appears tied to a small number of initiatives. These initiatives, labeled goals, effectively become talking points and a central focus, perhaps to the detriment of other dimensions of the position and institution. Maintaining positive growth in all aspects of the university mission should be the goal and the manner of evaluation, but the complexity of positions allows for a limited number of factors to dominate the narrative. We know space in letters is limited; we know recurring reports must be concise. As a result, administrators emphasize the handful of action items that will create talking points.

Fulfilling goals and creating narratives to describe our successes should be applauded, but it is still necessary to focus on the totality of results as opposed to a limited number of outcomes. While the emphasis on creating strong talking points may detract from other ventures, I also worry that the creation of compelling talking points becomes the rationale behind goal creation. Do we participate in a system whereby new administrations assess and respond to the needs of the institution, or do we foster the creation of goals that first serve the purpose of creating interesting talking points for career advancement? The latter might certainly explain some of the aforementioned redundancies in our universities, with each subsequent administrator establishing new centers, initiatives, or practices. After all, describing a new program yields a more compelling narrative than improvements to existing practices. Strengthening our institutional fortes may take a backseat to improvements in pursuits more tangential to our mission. Clearly, it is easier to make notable advances in our areas of weakness than in those of strength.

Such a system sets up a revolving process whereby administrators focus on select talking points and neglect other areas; successors then address those formerly neglected areas in order to create their own persuasive talking points while ignoring those of their predecessors. This is akin to squeezing a balloon in different places to move the air around—the balloon hasn't grown; we have simply emphasized something different. Faculty members are distracted from their research and teaching efforts to assist with these new directives, soon to be discarded and replaced by those emphasized under subsequent management teams. We currently endorse the perpetuation of a system that creates fleeting successes and recurring failures; however, if we begin to assess administrator success on broader outcomes, we eliminate the motivation for this practice, redirecting our time and that of administration to efforts that are likely to make a long-term difference in the quality of our institutions.

“Research Agenda” Should Be Eliminated from Our Lexicon

I have long lamented our field's use of the term research agenda. Meant to be synonymous with research area, it carries an implication that scholars have crafted ideological goals for future study outcomes as opposed to allowing data to provide answers to complex questions. True, "agenda" also has the innocuous meaning of a plan or timetable, but that is not how we use it. Scholars discuss their future plans with phrases like "I want to demonstrate" rather than "I want to explore" or "I want to know if." Our field cannot be viewed as a legitimate science if one outcome is favored over another. We have to start with curiosity, apply the scientific method, allow data to drive our interpretations, and publish those results.

Let's start with the least controversial example of how we favor some research findings over others. Significant results get published. We become zealots in "Sir Ronald Fisher's International Church of p < .05" and will generally do anything to demonstrate our faith. Krohn (2019) aptly pointed to our tweaking of methodology in order "to allow the important variables to do their thing in an unfettered way." If we are rewarded, via publications and the associated prestige, nearly exclusively for significant findings, it is no wonder that we find ways to "uncover" those significant relationships, whether through methodological tweaking, atheoretical data mining, or flat-out data falsification. Without substantial evidence to the contrary, I am inclined to believe the last of these is rare in criminology; still, tweaking and atheoretical digging for p < .05 are inconsistent with the scientific method and jeopardize the integrity of our field. We must create hypotheses, design methodology, test hypotheses using those criteria, and publish results regardless of findings. If less is accepted, the expanse of criminological research is less legitimate—and a research field viewed with questionable legitimacy is unlikely to influence policy or to make a difference.
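The arithmetic behind this publication bias can be made concrete with a minimal simulation (a sketch in plain Python; the two-group design, sample size, and significance threshold are illustrative assumptions, not criminological data). It generates many studies in which the true effect is exactly zero and then "publishes" only those reaching p < .05: the published record consists entirely of false positives, and their apparent effect sizes are inflated relative to the full set of studies.

```python
import random
import statistics

random.seed(42)

def run_study(n=50):
    """Simulate one two-group study where the true effect is zero."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = diff / se
    significant = abs(z) > 1.96  # roughly p < .05, two-tailed
    return diff, significant

results = [run_study() for _ in range(2000)]
# Selective publication: only significant results enter the literature.
published = [d for d, sig in results if sig]

print(f"Share of studies reaching p < .05: {len(published) / len(results):.3f}")
print(f"Mean |effect| across all studies:  {statistics.mean(abs(d) for d, _ in results):.3f}")
print(f"Mean |effect| among 'published':   {statistics.mean(abs(d) for d in published):.3f}")
```

Roughly five percent of the null studies clear the threshold by chance alone, and the "published" subset shows a mean effect several times larger than the honest average across all studies—exactly the distortion that registered designs and publishing non-significant findings are meant to correct.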

My advice to graduate students hasn't been that idealistic. I generally warn them to make sure that their envisioned studies will be interesting and publishable regardless of whether results are positive, negative, or non-significant. I counsel them to determine whether there is a way to pitch each outcome as interesting. If not, perhaps they should pursue that question only if the analysis is straightforward and the time commitment minimal. I recently discouraged a student from pursuing a thesis topic because I anticipated non-significant results were the most likely outcome of a lengthy process and unlikely to be published. The goal of a thesis is to develop familiarity with scientific research and create knowledge—the direction of findings should not matter! Of course, since I had this epiphany, that student has become excited about an alternate topic that meets my directive of making sure each possible result would be publishable.

Several steps are necessary to create a literature base representative of the findings from the studies we initially envision. Criminologists must resist the urge to tweak methodology or mine data absent informed hypotheses. Non-significant findings should be given similar weight to significant ones—don't we learn about behavior both from relationships that exist and from those that appear not to? Those non-significant findings warrant documentation rather than being discarded. Peer reviewers should consider the absence of an effect meaningful. Perhaps we truly need the Criminological Journal of Non-Significant Findings that Dr. John Boman has joked about. An alternative solution could be adapted from the medical field; the International Committee of Medical Journal Editors (ICMJE, 2019) requires registration of clinical trials before data collection in order to prevent selective publication and selective reporting of research outcomes. Non-significant findings might be inferred from registered studies that do not result in journal publication.

I generally endorse the Registered Report model initiated by Cortex six years ago (Chambers, 2013). It is becoming more pervasive in medicine and psychology but has yet to infiltrate criminology. Under this model, works are reviewed and conditionally accepted prior to data collection or analysis. Reviewers assess the introduction, literature review, and detailed methodology to determine whether the study would be worthy of publication if carried to fruition. Works granted an in-principle acceptance in this way are later assessed by peer reviewers only for whether they followed study protocols and completed the necessary analyses (Chambers, 2013). This method may help to prevent selective reporting and remedy the p < .05 bias if incorporated into criminology. Admittedly, our field relies largely on secondary data analysis, and this methodology is rendered ineffective if academics game the system by submitting registered reports when they already know the findings to be non-significant and unlikely to be published elsewhere.

The agendas of criminological researchers are also likely informed by political ideology. Academia and criminology are overwhelmingly liberal, and if those political leanings motivate research agendas, summaries of research in the field will clearly suffer from bias (Inbar & Lammers, 2012; Yancey, 2018). Even if individual pieces of research are all accurate and free of statistical tweaking, the net picture of criminology research could have a slant due to the selection of studies and methodologies likely to substantiate the preexisting assumptions of authors. Potential studies that match the liberal lean may be more likely to receive funding, as fellow researchers are tasked with evaluating funding proposals. Authors are similarly aware that those peer-reviewing their works are overwhelmingly liberal and likely to reward them, whether consciously or not, for works that align with their policy agenda. How might we evaluate a journal submission reporting that the death penalty would have a deterrent effect if extended to crimes other than homicide? It certainly would be a tough sell. If it found the reverse, would it experience the same degree of scrutiny in the review process? I would venture we apply more stringent criteria to findings supportive of conservative policy.

There is little denying political bias in the social sciences (Inbar & Lammers, 2012). States like Colorado have even found the need to include political affiliation in their university system's nondiscrimination policy. At my own university, former administrators have used the derogatory "'Pub Friends" to refer to conservatives and openly disparaged conservative voters at events specifically focused on inclusion. Rothman and Lichter's (2008) description of a glass ceiling for conservative academics is a bit sensational, but their empirical assessment of discrimination and divergent outcomes for equivalent records is compelling. If academics are pushed down and out of the field due to political ideology, it becomes unreasonable to see the field as legitimate and worthy of making a difference in terms of policy. It is no wonder that conservatives view our work skeptically (deBoer, 2017).

I am particularly bothered by the effect of ideological bias on our students. Binder and Wood (2014) found that conservative students tend to hide their viewpoints from professors in an attempt to earn higher grades. Woessner reports that, controlling for SAT scores, high school grades, and the like, conservative students end up with lower grades than their liberal peers (Woessner, Maranto, & Thompson, 2019). The ideological bias on campus is so inescapable that students are likely judged if their instructors see them patronizing the Chick-fil-A. I’m sure that faculty members are also socially and politically penalized for that behavior, but sometimes I decide a chicken sandwich is worth it. If it wasn’t readily apparent from the title and structure of this address, as well as the harm reduction approach championed in my drug policy works, I identify as Libertarian and escape most ideological discrimination. As one study summarizing professors’ views of non-Democrat peers put it: “Libertarians are tolerated” (Woessner et al., 2019).

For criminological research to matter, it cannot be biased. For it to be unbiased, the field must accept, retain, and promote a diverse array of faculty members. We must become cognizant of our implicit biases and hire the best candidates. Yancey’s (2018) work demonstrates that conservative identification, NRA membership, and the like lessen the likelihood that equally qualified candidates will obtain an interview or be offered a position. Numerous training hours are spent addressing potential biases to ensure demographics don’t affect hiring practices. It seems time that political biases are added to those workshops to encourage the expansion of ideological diversity among criminological faculty. Further, instructors must embrace political diversity in social science classes so that potential conservative criminology scholars aren’t pushed toward fields they find more welcoming and accessible. Our classes need to be without agenda in the same way that we need to remove agenda from the description of our research. We have to stop trampling the independence and intellectual curiosity of our students as we race to the left; only then can we dissuade students from the notion that there is no one more closed-minded than a liberal professor and eventually enjoy classrooms filled with receptive students openly discussing ideas without fear of reprisal.

More Isn’t Always Better in Terms of Research

Academic programs in criminology and criminal justice have increased substantially in recent decades, with the production of research growing at an even faster pace. Simply put, we are each producing more studies than our predecessors did twenty years ago. An argument can be made that data management tools and statistical software play the greatest role in our productivity, but Parkinson’s Law might also apply to publication opportunities: the field produces enough research to fill the journal space available. When more journals exist, a larger number of lower-quality works are created to fill the space. The incentive for great, meaningful pieces is diminished when any piece will meet at least one journal’s publication threshold. Our field has great scholars, many in this room, who continue to contribute insightful, creative, and methodologically rigorous studies, but those works can disappear in a sea of mediocrity that includes piecemeal studies, extreme niche topics, and meaningless analyses. A handful of criminal justice professionals may be able to sift through this haystack to find the metaphorical needles that should be affecting policy, but we generally lose the ability to reach outside the discipline due to information oversaturation.

Much of the blight in this area can be attributed to pseudo-journals, or predatory journals, that advertise peer review but set deliberately low standards in order to attract authors and collect publication fees. These new journals rarely fill a needed niche; they simply exist to collect fees. While we generally avoid them, they dilute the attention of readers and, since they may not face legitimate scrutiny before publication, can reach inaccurate conclusions. Meaningful research is lost in the weeds of pseudo-journals. I will admit that colleagues and I once published in what I would label a pseudo-journal; since we found the name entertaining, we provided fifteen pages, but not a dime, to publish a study in Beverages (Stogner, Baldwin, Brown, & Chick, 2015).

The types of academics who fill this room likely avoid those journals (entertainment purposes notwithstanding), but we can take action: curtail their growth by ceasing to recognize pseudo-journals in the annual review and tenure processes. Challenge our colleagues to ignore them so that they wither, die, and leave a more streamlined, high-quality literature base. Then our meaningful research will more easily reach practice.

We must also reject the impulse to publish our own studies piecemeal. Publishing slight twists of the same topic in numerous outlets does less to convince readers than a single high-quality work. The findings of complete works are more compelling to readers outside of the field and thus more likely to make a difference. Good research makes a difference whereas excessive research only creates chaos.

We should applaud the American Journal of Criminal Justice in this regard. While the number of studies it accepts has admittedly increased in recent years, quality has not wavered. If anything, our growing metrics and rankings indicate the reverse. Under the leadership of recent editors, the works appearing in AJCJ have become more complete, more complex, and more meaningful. Quality journals with rigorous peer review publishing a reasonable number of articles can present a clearer picture of what we know and better inform policy than a near-infinite number of “pay to play” journals.

Student Evaluations of Teaching Effectiveness Foster Ineffective Teaching

My next point focuses on the quality of our teaching rather than the efficiency of our research practices. I have no doubt that you as members of SCJA entered academia with an interest in conveying knowledge to others. Perhaps you place an emphasis in your teaching practices on fostering curiosity or developing problem-solving skills. Regardless, our emphasis should center on creating courses that help our students grow the most intellectually: not just learning something, but learning as much and developing as much as possible. I don’t believe our current framework for evaluating teaching facilitates that goal.

When designing course materials, faculty members must weigh two divergent interests: the desire to educate students in a way that fosters curiosity, critical thinking skills, and the retention of substantive material; and their personal investment in the student evaluations of teaching that may be used for review and promotion purposes. On the surface, these goals seem congruent; those who prepare their students well for the workforce should receive the best ratings at the end of each semester. Yet this is indisputably not the case (Braga, Paccagnella, & Pellizzari, 2011; Carrell & West, 2008). Often the pedagogies that lead to the most measurable learning successes hurt our evaluation scores because they are more challenging and time-consuming for the student, while those that students find more appealing may not lead to equal or better outcomes (Stogner & Al-Zoubi, 2020).

I have had the privilege of serving as a Department Diversity Liaison and on the College’s Diversity Board for several years. That group was responsible for creating a document that summarized research on student evaluations of teaching in order to inform annual review and tenure decisions. While the initial draft (CLAS, 2012) was created prior to my joining the committee, our consideration of revisions led me to explore numerous works that empirically demonstrated the negative toll that overuse of these student evaluations may take on minority faculty members. Prior to reading the research on the topic, I was dissatisfied with the document’s vagueness. Surely, I thought, there must be a correction factor that can be added to account for discrimination; there must be some way to keep using the system until the underlying problem is addressed. After delving into the literature, I found no mathematical adjustment, but rather realized that the entire system of using student evaluations of teaching is both pointless and problematic. It likely does more to harm the quality of education than to help it.

Student evaluations of teaching are biased by more than racial and gender discrimination. Studies suggest, among other things, that younger students give lower scores on average, that the hard sciences and math score lower, and that faculty benefit from teaching elective as opposed to required courses (Spooren, Brockx, & Mortelmans, 2013; Ting, 2000). This understanding may push talented faculty away from the areas where they can do the most good (such as large required courses) toward niche elective classes where they encounter the subset of students most likely to rate them highly. Studies indicate that professors are rewarded when they grade leniently, inflate grades, follow the book, and simply “teach the test” (Crumbley, Flinn, & Reichelt, 2010; Griffin, 2004; Remedios & Lieberman, 2008).

An often-discussed study by Ambady and Rosenthal (1993) shows that ratings of attractiveness derived from a 30-second video without sound have a .32 correlation with student evaluation scores. Riniolo, Johnson, Sherman, and Misso (2006) demonstrate that instructors deemed attractive score about eight-tenths of a point higher than those not earning the elusive “chili pepper” on Rate My Professor. If our looks matter in terms of the scores we receive, then those scores shouldn’t matter to us. Anecdotally, I looked back at my teaching scores and saw a large jump around the time I lost a good bit of weight; perhaps I should credit CrossFit Vitality with my higher scores rather than alterations to my courses. In the future, my wife, who worked my poor fashion sense into her vows this spring, could reasonably take credit for evaluation score improvements by suggesting my increasingly stylish wardrobe is responsible.

Joking aside, the focus on student evaluations of teaching has a problematic impact on course design and content delivery. It leads to an environment less conducive to student learning. Take the example of what I’ve called the PowerPoint puzzle (Stogner & Al-Zoubi, 2020): faculty members often believe that constant use of slideshows during course sessions impedes creativity, stifles discussion, and facilitates memorization rather than understanding (Hill, Umland, Litke, & Kapitula, 2012); however, they also understand that research demonstrates that using PowerPoint slides as the basis for lectures is positively related to student evaluations of teaching (Gabriel & Gabriel, 2008; Hill et al., 2012). For better or worse, the ease of PowerPoint, and perhaps its relationship with teaching evaluations, has led to its near ubiquity in criminology classrooms. We are understandably reluctant to distance ourselves from practices that facilitate our own advancement, and thus we are stuck with PowerPoint.

We encounter a similar conundrum when deciding whether to provide students digital copies of those aforementioned slides. There are far more empirically supported pedagogical reasons for withholding slides from students than for providing them. Instructors who provide digital copies of lecture slides in advance see meaningful drops in attendance, which is generally one of the top predictors of success. Students provided slides also feel free to let their minds wander during lecture and perceive less importance in maintaining focus. More importantly, studies often indicate that classes provided with digital copies of slides perform worse than those that are not (Grabe, Christopherson, & Douglas, 2005; Worthington & Levasseur, 2015).

We each must also decide how much rigor to design into our coursework. I firmly believe students can rise to meet reasonable challenges and that instructors do them a disservice by grading leniently, simplifying projects, or limiting the depth of study. If it is our job to prepare the next generation of academics, criminal justice professionals, and other leaders, why would we consider developing courses that are not rigorous and challenging? A simple explanation is that we are actually rewarded for doing the opposite. Faculty members are rated on teaching largely through student assessments, which in turn affect our annual ratings and subsequent raises. Courses lower in rigor yield higher evaluation scores (Olivares, 2001; Remedios, Lieberman, & Benton, 2000; Spooren et al., 2013).

As Braga et al. (2011) argue, “students are myopic and evaluate better teachers from which they derive higher utility in a static framework.” Braga’s team was able to obtain data from students randomly assigned to different teachers of the same course. Students completing the course with lower-scoring teachers actually performed better in follow-up courses than those who had a higher-scoring teacher. In other words, students rate instructors lower when they learn more. In Braga’s words, “teachers who are associated with better subsequent performance receive worst evaluations from their students.” Carrell and West (2008) similarly indicate that student evaluations fail to act as a positive predictor of future performance when students are randomly assigned to courses. Self-selection of students into different instructors’ courses generally prevents assessments like these, but the findings are remarkable.

We must decide whether our goal is to maximize our own scores or the future potential of our students. This is not unlike the issue Marv raised in his address regarding whether to pursue the most publishable or citable study over the one that has the greatest potential to make a difference. Just as it seems clear in the preceding case that we should focus on the tangible impacts of research, we must focus on courses that will result in the most educated students, not students who are simply content enough to provide high scores. Yet as long as we continue to rely on student evaluations of teaching, we will return to that ever-present puzzle of choosing between teaching in the manner that best facilitates learning and the manner that maximizes student evaluations of teaching.

In maintaining rigor in my courses and challenging students, I feel I have maintained my own and the university’s integrity. I’m willing to accept scores other than 5’s to do so. But what if maintaining rigor begins to consistently yield scores lower than a 4 out of 5? Would I hold fast to my convictions or succumb to what Spooren et al. (2013) call “the tyranny of the evaluation form”? I’d hope it would be the former, but at some point, self-preservation would require the latter. The solution is likely to eliminate the system entirely and replace it with another: judge instructors of introductory courses based on how well those students do in subsequent courses, evaluate teachers directly through administrator observation, or use job placement and performance indicators from alumni studies as a tool. Surely the time and money spent on collecting, filing, and interpreting student evaluations of teaching can be redirected to an alternative assessment structure that measures learning outcomes rather than grading leniency, instructor likeability, and the ever-important “chili pepper.”

This Is John Stogner Speaking

It seems only apt to conclude this address by paraphrasing the compelling 7th chapter of its namesake: Galt revealing himself over the radio and resolving mysteries associated with the strike. While my message to this group may not have the reach of Galt’s fictional uninterrupted three-hour broadcast to the country, I hope my words inspire some within this room to address the impediments to efficiency and effectiveness within academia. Those efforts will facilitate a scholarly criminal justice and criminology community better able to make the difference Marv demanded one year ago (Krohn, 2019).

For the last hour, you have been asking: Who is John Stogner? This is John Stogner speaking. I am the man who loves his field. I am the man who does not sacrifice the integrity of his teaching or the scientific method in his research. You have heard me claim today that this is the age of academia’s crisis. You have likely even said it yourself, half in fear, half in hope that the words had no meaning. You have cried that bureaucracy’s sins are destroying the field; that criminology isn’t making a difference. Scholars of intelligence, frustrated by academia, strike in protest and despair, not knowing any course of action other than to spend their years in the safe obscurity of redundant tasks, needless committees, and piecemeal research, functioning at a fraction of their capacity. All these disasters that have wrecked your world, all the hassles you have ever endured, came from your own attempt to evade the battle.

The enemies we seek to defeat are bureaucracy, inefficiency, and apathy: they permit us no productivity, nor do they honor merit. We are chained and commanded to produce with our hands tied and our eyes distracted by meaningless tasks, by a conspiracy, an evil, that is without leader or direction. Whoever is now within reach of my voice, I am speaking at the deathbed of our field, at the brink of that darkness in which we’re drowning, and if there still remains the power to struggle to hold on to those fading sparks which had been our drive to make a true difference, we must use it now. There comes a point when our own consent is needed for evil to win; thus, we begin our battle by pronouncing a single word. The word is ‘No.’ ‘No’ to the tyranny of the evaluation form; ‘no’ to administrative bloat; ‘no’ to meaningless accreditations; ‘no’ to political bias; and ‘no’ to valuing research quantity over quality (portions adapted from Rand, 1957).