Introduction

This chapter provides an analysis of institutional research and evaluation in higher education, drawing on the principles of QuantCrit (Gillborn et al. 2018). It begins with an overview of this approach and its specific application to higher education (HE) before dissecting various normalised data collection approaches found in the contemporary university. The assumption made by the author is that, through a lack of critique of racial biases – in decision making (cognition), methodology and ethics – higher education institutional research and evaluation is at risk of supporting White supremacy in the academe, i.e. the structural processes which allow White people to claim and sustain positions of privilege at the expense of People of Colour. Mirroring the sentiments of Cross (2018), this chapter suggests that all those involved in institutional research and evaluation “should engage in critical self-reflection to avoid perpetuating racist narratives through data” (Cross 2018, p. 268). This includes the critique, not rejection, of quantitative methodologies and the exploration of qualitative methodologies which seek to amplify counter storytelling as an anti-racist approach.

The author will use a typology of institutional research and evaluation (Austen 2018) to discuss the normalised data collection approaches within the institution. Digital storytelling will then be presented as one alternative approach available to institutional researchers, illustrated with case studies. The focus of this chapter is primarily on student stories. However, the unknown voices of staff will also be acknowledged and discussed using reflections from research conducted by the author.

QuantCrit and Other Biases in Higher Education

Researchers and evaluators regularly consider bias in their work, using concepts of validity and reliability, most recently coupled with trustworthiness and authenticity (Lincoln and Guba 1985; Guba and Lincoln 1989). Too often, bias is deliberated at the end of a project and cemented within a discussion of limitations and considerations for future work. The challenge set by this chapter is to prioritise the consideration of bias during design (to scope methodologies which can address bias), ethical review (to position ethics as political in addition to methodological), and at the point of data collection and analysis. This challenge is delivered to researchers and evaluators who are working for institutions—either in defined job roles, or merging inquiry into existing teaching, research, or administrative roles—and who may be unaware or uncritical of the racial biases inherent in their work.

QuantCrit and Higher Education

Gillborn et al. (2018) describe QuantCrit as the application of critical race theory (CRT) to the collection and analysis of statistical data. Both concepts have been discussed at length in this book. To briefly reaffirm: the principles of QuantCrit are based on the notion that numbers are not neutral or any more factual than any other form of data (Gillborn et al. 2018). Historically, positivism and quantification have been dominant within data hierarchies. Reference to neutrality moves this debate beyond epistemological paradigms. Crawford (2019, p. 424) states, “racism is deeply entrenched in the fabric of a nation’s institutions, and by association, within its official reports, statistics and dominant truth claims”. Furthermore, as “all data is manufactured and all analysis is driven by human decisions” (Gillborn et al. 2018, p. 167), racial bias manifests at the point of quantitative data collection and during analysis. This can provide an opportunity for the promotion of “white supremacist ideologies” (Cross 2018, p. 268), where Whiteness is a racial position and not a biological fact (Trimboli 2018).

The implication of the dominance of inherently biased research and statistics, Gillborn et al. argue, is that numbers shape inequality. If these implications were discussed and acknowledged at the outset of research and evaluation in higher education, this work could go beyond simply documenting inequalities (Cross 2018), and move to eradicate them.

Applying Biases to Discussions About Race

To extend Gillborn et al.’s discussion, consider the statistical and cognitive biases which apply to the use of data in research and evaluation. The application of a race lens highlights the risk of compound bias during design, review, data collection, and analysis. The most well known and widely applied concept is implicit bias (often referred to as unconscious bias). Although thriving within the race equality discourse in higher education, implicit bias training has been shown to be ineffective (Atewologun et al. 2018), specifically in work to narrow the degree awarding gap. But consider less explicit layers of bias. Confirmation bias describes the use of data to confirm an existing hypothesis or belief system without the consideration of alternatives. Could the damaging proliferation of deficit assumptions about Students of Colour (Smit 2012) in higher education be sustained by only selecting data which confirms these assumptions? The McNamara Fallacy is closely aligned to QuantCrit and warns of an overreliance on metrics. Consider the Pro Vice Chancellor Student Experience who uncritically exclaims, “we only need one more Black student to get a first class degree to eradicate the gap!” Publication and reporting bias refer to the likelihood of publication and reporting of research findings; higher education is known to over-report success and under-report failure (Dawson and Dawson 2018). In light of equality legislation and the regulatory gaze over degree awarding gaps, how likely is the overt publication of inaction or failure? Finally, the representativeness heuristic describes the use of “short cut” categories (similarities) to explain a situation, and the availability heuristic describes the ease with which an idea is brought to mind. As a result of the dominance of Whiteness within higher education (Bhopal 2018), and without the elevation of counter stories from Students and Staff of Colour, racial bias can and will perpetuate.
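To make the mechanics of confirmation bias concrete, consider a minimal illustrative sketch (in Python, with synthetic data and invented group labels; it represents no real cohort). Module-level outcomes are simulated for two groups drawn from the same distribution, so no true gap exists; an analyst who inspects only the modules that appear to confirm a deficit assumption will nevertheless “find” one.

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic illustration: module-level mean marks for two groups drawn,
    # by construction, from the SAME distribution (no true gap exists).
    modules = 40
    group_a = rng.normal(loc=60, scale=5, size=modules)
    group_b = rng.normal(loc=60, scale=5, size=modules)
    gap = group_a - group_b

    # An even-handed analysis reports the average gap across all modules (~0).
    print(f"All modules:        mean gap = {gap.mean():+.2f} marks")

    # A confirmation-biased analysis keeps only the modules that already
    # support the deficit assumption (gap > 0) and reports the 'evidence'.
    confirming = gap[gap > 0]
    print(f"Confirming modules: mean gap = {confirming.mean():+.2f} marks "
          f"({confirming.size} of {modules} modules selected)")

The selected subset reports a gap of several marks despite none existing in the full data; the same selection logic operates whenever data are chosen to fit a prior belief.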

This chapter will foreground digital storytelling as one qualitative method which can be used to elevate counter stories. This methodology has been used by the author as part of institutional research across the higher education sector. Before exploring the detail, the scope of institutional research and evaluation will be outlined, drawing on the QuantCrit perspective.

How Do You Know Your Students? Exploring Existing Institutional Research and Evaluation (IRE)

Institutional research refers to “a broad set of activities that collect, transform, analyse, and use data to generate evidence to support institutional planning, policy formation, quality enhancement, and decision making” (Woodfield 2015, p. 88). Institutional researchers (including the author of this chapter) are often differentiated from researchers of education by their focus on research BY the institution, FOR the institution. They specifically evidence decision making in policy and practice, often led by the external demands and regulation of the HE market. Institutional research and evaluation (IRE) can be appropriately categorised as “insider research” (Atkins and Wallace 2012) and the associated reflective challenges apply in this context. The scope of student focused IRE is sizeable and includes, as examples: the generation and analysis of student data; sector benchmarked metrics from institutional surveys; self-report evaluations of teaching practice; process and impact evaluations of funded interventions; students researching students for academic credit; and scholarly research of staff for personal and professional development. IRE is most often limited to studies within one higher education provider (HEP) and regularly occurs without external research funding.

These methodologies are employed to “know” students, led by institutional researchers/evaluators who are asked to explore a current strategic imperative by including student voices. In some institutions, this research is being conducted by very experienced researchers and evaluators who are employed for this purpose. Nonetheless, evidence shows that, in research training and development, cognitive biases “are not always given the attention they deserve” and “may be viewed as falling outside of the domain of what is typically considered ‘research methods’” (Stapleton 2019, pp. 579–580). Moreover, the scope of IRE is creeping into a wider variety of institutional roles as the demand for evidence of impact deepens. Capacity building which relates anti-racist research methodologies to the scrutiny of cognitive biases in a race context is clearly needed to ensure that IRE work is not contributing to racial inequality.

IRE and Ethics

In addition to the consideration of bias, this area of work must consider the level of ethical scrutiny applied to IRE to fully explore the risk of supporting White supremacy in the academe. In other contexts, a Code of Conduct for Institutional Research exists (see Association of Institutional Research – US, and Australasian Association for Institutional Research). British Educational Research Association (BERA) guidelines (2018) are the most applicable to IRE work in the UK, but there is a varied application of these guidelines in practice. The reasons for this are multi-faceted and include the aims/objectives of the research, the experience of the researcher and the alignment with disciplinary ethical approval processes. Notably, not all IRE in HE is carried out by those with methodological expertise; IRE can be conducted by academics and professional services staff with varied research experiences.

Not all IRE projects which should seek formal ethical approval via an Ethics Committee will do so. However, a lack of formal ethical approval does not mean that the work should be assumed to be unethical. Rather, questions need to be asked about the governance and monitoring of this type of work when there is little common ground connecting the foundations of inquiry. It is therefore vital to ensure that all IRE conducted within HEPs is ethically sound, recognises bias, and specifically limits the risk of harm for participants, particularly students. All ethical decision making – including consent; transparency; right to withdraw; incentives; harm arising from participation in research; privacy and data storage; and disclosure relating to student participants – can have a disproportionate impact, which is why definitions of vulnerability specifically consider the impact on marginalised groups.

Whilst contextual and relational aspects of marginalisation may be considered by scrutinising a proposed sample, it is unclear whether ethical approval mechanisms consider the principles of QuantCrit when reviews are conducted. Research ethics is not discussed within the foundational papers of this concept (Gillborn et al. 2018), although QuantCrit advocates López et al. (2018) make explicit statements about their ethical principles and challenges to researchers:

We believe it is our ethical responsibility not to contribute to statistical analysis projects that regardless of intent erase or trivialize the lives of marginalized individuals and communities. (p. 190)

Are you collecting rigorous, reliable, and value-added race, gender, class, LGBTQ, disability and other data that are informed by critical race theory and intersectional knowledge projects for social justice? (p. 202)

Huber (2009) furthers this point by challenging notions of authenticity (which may be a marker of a robust methodology) as a Eurocentric perspective which legitimises the contestation of truth. Although CRT suggests, “all scholarship is political” (López et al. 2018, p. 182), there is no predefined acknowledgement of this during ethical review. As IRE may not even be seen as scholarship (something which the author disputes), this may be an area for further exploration.

To further argue this point, there is evidence that not all those working as institutional researchers/evaluators (and ethical reviewers) have a grasp of the Equality Act 2010 and the boundaries of positive action. Stevenson et al. (2019) found that targeted interventions were most commonly used in access and outreach activity. This is one institutional area where research and evaluation will be transparent, expectations for robust methods are high, and job roles will be assigned. Stevenson et al. (2019) also found that very few targeted interventions were being implemented to enhance retention, success, and progression. Institutional obstacles to designing and implementing targeted interventions included the belief that targeting and/or positive action is illegal, which reinforces the findings of Mountford-Zimdars et al. (2015) of support for universal and indirect approaches. Stevenson et al. (2019, p. 46) also found that there was a lack of ethical guidance supporting these processes. Furthermore, these unchallenged practices were reinforcing racist biases, as one of their stakeholder responses noted:

White people are in charge of designing research and interventions about attainment gaps and employability issues. Invariably, this leads to students of colour being labelled as deficient or difficult – they are objectified as research studies.

There is a need to develop and expand the skills, knowledge and criticality (to confront bias) of those who are tasked with undertaking institutional research and evaluation. The scope and scale of this work are outlined below.

IRE and Knowledge Apartheid

Based on the suggestion that methodological decision making is subject to epistemological racism which furthers “knowledge apartheid” (Delgado Bernal and Villalpando 2002, in Huber 2009), the IRE approaches which claim to listen to the voices of students will now be discussed. Austen (2020) categorises IRE work as follows (Fig. 14.1):

Fig. 14.1 Categories of Institutional Research and Evaluation (IRE). [A diagram of approaches to student voices – student learning analytics, surveys and evaluations, reflections and pilot studies, evaluations of impacts, and student and staff research – with the associated methodologies listed alongside.]

Student Learning Analytics

Learning Analytics, Student Surveys and Student Evaluations are dominated by quantitative data and currently receive the lowest level of ethical and methodological scrutiny, favouring the General Data Protection Regulation (2018) as a regulator of good practice. The aim of this measurement is to ensure standards and compliance. Gillborn et al. (2018) reflect specifically on the collection and analysis of attainment data, which is included as one analytical “big data” measure of student learning. This data consistently shows an awarding gap between UK domiciled BAME and White students (nationally in 2019, 80.9% of White students received a first/2:1 compared with 67.7% of BAME students, a gap of 13.2 percentage points, AdvanceHE 2019a, b), leading the authors to remark:

what is happening in British higher education when the ethnic group that is least likely to go to university nevertheless enjoys the best chance of achieving the top grade. Were this a minoritised group there might be headlines about ‘scandals’ and shocks but, since the group in question is White, their high attainment fits with the basic expectations of a White supremacist media and polity and so the pattern goes entirely unremarked. (p. 165)

Whilst this is evidence of data collection, reporting, and publication bias, there are more worrying practices emerging from the exploration of learning analytics and big data algorithms. Williams et al. (2018) discuss the discriminatory practices of algorithms which piece together small data into large data sets to harness monitoring (at best) and predictive features (at worst), remarking:

These data relationships may link a person’s traits, past actions, social contacts, and social categories to people who were good or bad risks in the past. This process can replicate past discrimination or make assumptions about an individual based on group membership. (p. 110)

Learning Analytics, described as “the measurement, collection, analysis and reporting of data about the progress of learners and the contexts in which learning takes place” (Sclater et al. 2016, p. 4), uses existing data such as attendance, library usage and assessment grades to monitor and predict risks to student success. In higher education, discussions about the ethical issues of learning analytics separate the methodological from the political, with generic reference to risks of stereotyping, prejudice, and bias (Slade and Prinsloo 2013). At the heart of this concern is the racial literacy (applied to methods, ethics and biases) of the programmer, analyst and end user. Few studies specifically reference the possibility of racial bias as Ahern (2018) does, using Bhopal’s (2018) statistics and framing of White privilege, and aligning these practices under the gaze of QuantCrit. To the marginalised student, the risk of learning analytics is that identification and prediction will be based on an ostensibly objective assessment of meritocracy, and that culturally deficit models will be used to explain low educational outcomes (Ekowo and Palmer 2016; Huber 2009). The voices and stories behind the algorithmically linked current and historical data will be unknown.
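To illustrate where such bias can enter, the sketch below (a hypothetical pipeline in Python; the data are synthetic, the column names invented, and it represents no provider’s actual system) mirrors the kind of risk prediction described above. Ethnicity never appears as a feature, yet if attendance or library use are shaped by racialised circumstances such as commuting, caring or paid work, the risk score can still encode racial bias.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500

    # Synthetic engagement proxies of the kind named in the text.
    records = pd.DataFrame({
        "attendance_pct": rng.uniform(40, 100, n),
        "library_visits": rng.poisson(5, n),
        "mean_grade": rng.normal(60, 10, n),
    })
    # Synthetic training label: past withdrawal, loosely tied to attendance.
    records["withdrew"] = (rng.uniform(0, 100, n) > records["attendance_pct"]).astype(int)

    features = ["attendance_pct", "library_visits", "mean_grade"]

    # Every human decision here (feature choice, label definition, threshold)
    # is a point at which bias can be built into the model.
    model = LogisticRegression().fit(records[features], records["withdrew"])
    records["risk_score"] = model.predict_proba(records[features])[:, 1]

    # The highest-scoring students would be flagged for intervention.
    print(records.sort_values("risk_score", ascending=False).head())

The point is not the model itself but the chain of human choices it encodes; the story behind each flagged row remains invisible to the algorithm.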

Student Surveys

Moving beyond analytics, the institution administers and analyses data generated within student surveys. Student surveys have been subject to significant methodological scrutiny but remain the dominant and normalised method of data collection in higher education. The National Student Survey (NSS), as one example, has attracted criticism for its limited impact on course enhancement (Buckley 2012), its weakness as a proxy for teaching excellence (Gunn 2018), and its dominance as a “fact-totem” within institutional decision making (Sabri 2013). The NSS has not been critiqued using QuantCrit, or explicitly scrutinised for any ethical concerns or bias apparent in data collection, which may mean that the experiences of White students become the normative standard.

The NSS collects personal information regarding ethnicity, as self-declared on Higher Education Statistics Agency (HESA) databases or institutional student records, and uses this to benchmark providers against sector averages. This can provide useful distinctions in the appraisal of overall student satisfaction, which is known to be higher for White students than for all other ethnicities (Mountford-Zimdars et al. 2015). This disaggregated data also becomes part of the analysis of split metrics in the Teaching Excellence Framework (TEF). The aim of the split metrics is to present each provider’s core metric by sub-groups reflecting widening participation priorities (Department for Education 2017). These split metrics can be scrutinised alongside attainment data (which show national and institutional degree awarding gaps) and engagement data (which nationally show that BAME students are engaged but not attaining comparably good outcomes, Neves 2019). Disaggregation of student satisfaction and association with other measures is an acknowledgment that student experiences may differ, but the emphasis on improvement within marketised measures and KPIs overshadows an emphasis on social justice. Furthermore, focusing on homogenised categorisation (BAME) without disaggregation avoids a critical framework for analysis.
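A minimal sketch (in Python, with invented satisfaction responses) of why homogenised categorisation matters: collapsing all non-White respondents into “BAME” reports a single middling figure, while disaggregation reveals sharply different experiences between the groups that the label contains.

    import pandas as pd

    # Invented survey responses, for illustration only.
    responses = pd.DataFrame({
        "ethnicity": ["White", "Black", "Asian", "Mixed", "White", "Black",
                      "Asian", "Mixed", "White", "Black", "Asian", "Mixed"],
        "satisfied": [1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1],
    })

    # Homogenised comparison: collapse all non-White groups into 'BAME'.
    responses["group"] = responses["ethnicity"].where(
        responses["ethnicity"] == "White", "BAME")
    print(responses.groupby("group")["satisfied"].mean())

    # Disaggregated split metric: differences between Black, Asian and Mixed
    # respondents reappear once the homogenised label is removed.
    print(responses.groupby("ethnicity")["satisfied"].mean())

In this invented data the aggregated “BAME” satisfaction rate (0.67) sits between a much lower rate for Black respondents (0.33) and a higher rate for Asian respondents (1.0); the single label hides exactly the variation that a split metric is meant to expose.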

As QuantCrit principles suggest, race categories are not fixed or innate and, therefore, first-hand knowledge of minority experiences should also be explored. Whilst the TEF guidance requires providers to triangulate qualitative analysis alongside split metrics, numerical data dominates, and the regulator does not suggest that the quantitative data itself is prone to bias. Uncritical participation in the TEF becomes another opportunity for bias, as narratives overwhelmingly include positive and confirmatory examples of success. For example, it is unlikely that institutions will acknowledge evidence linking increases in student satisfaction to the proportion of academic staff in a department who identified as White (Bell and Brooks 2019).

Student Evaluations

Whilst large institutional surveys have methodological issues which may hide marginalised students’ voices, local evaluations of teaching performance also deserve criticism for their impact on minoritised staff. Internationally, it is well documented that racial bias exists in student evaluations of teaching (for example, see Fan et al. 2019; Smith and Hawkins 2011; Reid 2010), although there has been more emphasis on gender bias. Student evaluations which platform student voices are also subject to misuse within HE management strategies (Jones-Devitt and Lebihan 2017). Baker (2019) specifically applies CRT and QuantCrit to the online teacher evaluation “Rate My Professor”. Discussing the evidence of inherent methodological issues in online evaluations of this nature, they conclude, “using a quantitative CRT approach, the results support that minority faculty are given lower teaching quality scores and higher difficulty of course scores than are non-minorities” (p. 18).

Some of the problems with student evaluations include a lack of methodological knowledge informing design, and gaps in data confidence and analytical capacity which can inhibit interpretation. Rodriguez et al. (2018) suggest that HEPs are sites of a “racialized and gendered regime of power-knowledge” (p. 2) and highlight a gap in the literature on student evaluations of teaching (SET) and on the underlying epistemologies that inform their deployment.

Other Methodologies

Quantitative data collection and use is not restricted to learning analytics, surveys, and student evaluations. Local quality assurance processes may report a variety of data sources for validation and during annual review for continuous improvement. One of the most prevalent issues in current HE, evident in the data analysed at module and course annual review, is the degree awarding gap. What follows is a myriad of intervention- and initiative-led responses to address racial inequity in degree attainment/degree awards, but few are robustly evaluated (Mountford-Zimdars et al. 2015). Even fewer overtly challenge White supremacy, a central component of critical race theory. Austen et al. (2017) reflect explicitly on a case study which struggled to implement strategies to enhance BAME student experiences and successes and cite “institutional readiness” as a multi-faceted explanation, noting specifically that “further research should look to focus on a structural (including institutional) analysis of critical Whiteness” (p. 156).

Effective evaluation of impact requires data confidence and highlights a need to invest in the development of an evaluative mindset. There is an increasing expectation across HE for staff to be data literate or methodologically savvy, to critique and question, and to gather additional data to support or defend against dominant quantitative conclusions aligned to postpositivist evaluations. The development of qualitative evaluation literacy has the potential to critique racial biases at the point of design, and this chapter actively promotes this development. This capacity building must include knowledge of a range of methodologies and measures, for example: participatory evaluation (Parker 2004, who advocates for the use of storytelling in evaluations); decolonised methodologies which are culturally flexible (Gobo 2011); quantitative counter-storytelling in survey methodologies (Sablan 2019); and the use of community cultural wealth as a more appropriate measure of cultural capital (Stevenson et al. 2019). It is also important to explore the counterfactual and unintended outcomes for all stakeholders, moving away from the assumption that all outcomes can be foretold by the evaluator.

Building on this range of methodologies and measures, research conducted by staff and students which seeks to help the institution move towards its strategic aims also has potential to develop anti-racist approaches. Solórzano and Yosso (2002) outline a framework for CRT-aligned research which includes positioning race and experiential knowledge as central to the research, challenging positivist dominance, and including explicit objectives around social justice. These principles could be adopted for IRE in higher education. However, with small proportions of BAME students in some universities, granular quantitative analysis or qualitative data must also consider the ethics of identification and assurances of anonymity.

Do You Really Know Your Students? Elevating Counter Stories

Box 14.1: Elevating Counter Story 1

Aaliyah’s digital story recounts her journey into higher education. Her audio narration begins by positioning herself as the ‘first generation to go to university’. She describes higher education as ‘not a priority’ and ‘not a requirement’ whilst the visuals juxtapose the British and Pakistani flags. There is an implicit tension here, located within the family, which has clearly been eased by following in her sister’s footsteps through education. But this journey is about self-identity and personal courage. Failure and doubt were overcome by individual (‘I’ and ‘My’) decisions to ‘educate myself’ and go back to school. The very last image is the only personal photograph of Aaliyah who, we are told, is looking forward to her forthcoming graduation. (Length, Adobe Spark)

Even a brief analysis of IRE highlights some of the issues in the epistemologies, methods and samples used and reinforced by institutional and sector policies and practices. To counter the dominance of numerical data, metrics, measurements, compliance and homogenised collations of the singular “student voice”, this chapter now introduces digital storytelling as qualitative data which can provide contextual counter narratives as stand-alone artefacts or as supporting evidence.

The danger of a single story (Adichie 2009), a normalising story, the loudest story, is that diverse lived experiences can remain unknown and master narratives of racial privilege prevail. Amplifying unknown counter stories is a destabilising effect of critical race theory in action. Advocates of this approach note “potential moral and epistemological gains” which “challenge comforting stock stories and can thus be helpful in critiquing the beliefs of those in dominant groups who benefit from white privilege” (Delgado 1989, in Rolón-Dow 2011, p. 161). One technique used within this context is the use of “Chronicles” in which evidence is fictionalised into written vignettes and “presented in a novel form that challenges common assumptions and makes the work more accessible to people outside academe” (Delgado 1993, in Gillborn 2010, p. 254). More recently, digital storytelling—the process of developing a digital personal narrative—has developed as an accessible approach and has been specifically used to amplify hidden voices. Austen et al. (2019, p. 27) suggest that “the most common feature of recent approaches is the agency of the storyteller as editor, and the use of software which enables this.”

The effectiveness of digital stories, as a distinct mode of storytelling, is detailed by Lovvorn as: (a) mobile, accessible and sharable through a range of web-based platforms; (b) personal and grounded in “stories of the indistinctive voices”; and (c) connective and connected, creating bonds between the authors, the viewer, and society (2011, in Trimboli 2018, p. 48). These stories result in a short video, approximately three minutes in length, with linguistic, visual, audio, gestural, and spatial meanings (Gachago et al. 2014). Through this multimodal digital storytelling, we begin to hear personal accounts of resistance against dominant discourses (Gachago et al. 2014).

Using Digital Stories to Address Racial Biases

The use of digital storytelling to challenge/deconstruct normative assumptions and meta-narratives about Whiteness in education, or to provide a platform for unknown voices, is not uncommon, but has gained less traction within the UK. Rolón-Dow (2011) specifically proposes that digital stories are a useful tool for exploring critical race theory in pre-higher education, concluding that “the digital storytelling medium, combined with a CRT framework, can be a valuable tool for initiating conversations about the raced experiences of youth and can provide valuable knowledge for those working towards greater racial justice within educational contexts” (p. 159). Matias and Grosland (2016) used digital storytelling as a pedagogical strategy for the emotional deconstruction of Whiteness. This was employed as a task for teacher candidates in one US higher education institution (HEI), on the basis of evidence suggesting a privileging of hegemonic White identities throughout the primary and secondary teaching field. Digital storytelling provided space for reflection and created a repository which “prolongs courageous conversations of race beyond minor discomfort” (p. 162) whilst providing a mechanism “to withstand the discomfort with self-interrogating Whiteness” (p. 163). Similarly, Stewart and Ivala (2017) used this method as a reflective tool with student teachers in South Africa. This approach created highly personal stories, especially for marginalised students, who were able to discuss identity (including race and White privilege) in a liberating way. Mills and Unsworth (2018), in their analysis of multimodal literacies and critical race theory in Australian education, also found that these alternative forms of text offered a counter narrative to prevailing normative assumptions.

Digital Storytelling Student Voices in Practice

Student digital storytelling has developed into a methodology used by institutional researchers/evaluators and practitioners at Sheffield Hallam University (see https://blogs.shu.ac.uk/steer/digital-storytelling-shu/). When the then Director of Fair Access, Chris Millward, visited Hallam in 2018, digital stories of Widening Participation Student Ambassadors were sent to him in advance to provide institutional context. These powerful student stories detailed complex journeys to enrolment and the barriers to engagement for some students—mental health, first generation access, care experience, care giving, and disability were examples of some of the emotive content the students narrated. Student digital stories have also been viewed by senior leaders to add context and knowledge to strategic discussions.

Digital storytelling has also been adopted within the curriculum and stories have been created by whole module cohorts as reflective assessments (Austen 2020)—transition and “becoming” (Gale and Parker 2014) were some of the emerging cohort themes, alongside reflections on the curriculum and pedagogy of the course. The analysis provided evidence that digital stories were effective as reflective tools and this has relevance for knowing your students both within and beyond the curriculum.

Box 14.2: Elevating Counter Story 2

Hassan audio narrates his own journey to university and makes links between attending university and his faith. He makes explicit reference to the wishes of his parents to avoid “massive debt” and “act in accordance with my religion”, a decision which he deliberated and discussed with the researcher during the production of this story. He decides not to use any personal imagery, choosing instead a range of stock photographs of students who are racially diverse. After a period of deep reflection and time spent on an apprenticeship that his parents encouraged, he was able to conclude that “my faith did not prevent me from studying at university”. This stated ‘reflection’ masks the detail of going against parental wishes. (Length, Adobe Spark)

What About the Staff? Triangulating Stories Across the Institution

Differential student outcomes have been attributed (through association, not causation) to a lack of belonging faced by Students of Colour and a lack of staff diversity in UK HEPs (Mountford-Zimdars et al. 2015). Whilst there has been an increase in staff categorised as BAME, there are inequities in contract type, salary band and subject area (AdvanceHE 2019a, b) which have had a specific impact on the diversity of senior institutional leaders, including Professorial positions. It is important, therefore, to triangulate marginalised staff and student voices within an institution (and beyond) to realise behavioural and organisational change. Digital storytelling within IRE has the potential to amplify counter stories across an institution. Previous literature has noted the importance of storytelling in changing the activity and culture of an organisation (Boje 1991). Stories can be about other people, the work, or the organisation itself, and can be told as a process of social bonding or as direct or indirect signifiers (Prusak et al. 2012). In higher education, the process of reflection is embedded within the personal and professional development of both staff and students. A coherent (and evidenced) story is an important component of a professorial application and a covering letter for graduate employment. However, for some, stories can be exposing “in ways that can be embarrassing, revealing some of their own anxieties, failures and prejudices” (Gabriel 2013, p. 118).

There is a rich history of storytelling within health organisations. Patient stories are used to improve the quality of care and staff stories are used to augment working practices (including www.patientvoices.org.uk) and provide an outlet for unknown voices through participatory engagement in marginalised communities (Briant et al. 2016). In higher education, one recent project (Austen and Jones-Devitt 2018) tested the use of digital storytelling in several ways: as an intervention for engaging in difficult conversations about positive cultural and behavioural change (a digital story was viewed in a focus group); as a method of data collection (this digital story was discussed in the focus group); and as an innovative way of sharing evidence and expertise (a personal digital story was produced by some of the focus group participants). The focus was on discussing Whiteness as an overlooked factor in actions to address the degree awarding gap (Jones-Devitt et al. 2017). Digital storytelling provided an effective mechanism for facilitating difficult conversations; however, there was still a sense that these stories were contributing to awareness raising, but not necessarily behavioural change. The authors used Stacey’s (1996) model of organisational dynamics to highlight how levels of perceived comfort and neutralisation can interact as barriers and enablers for meaningful change. This is an important consideration for the producers of counter stories, the intended audience, leaders of the change initiatives, and the wider organisation.

Box 14.3: Elevating Counter Story 3

One storyteller titles their story “Labels”. They begin by asking the question “Am I black or white?” and show an image we assume is their own hand. They use a range of imagery which challenges the viewer to question whether the photographs are personal or stock. They remain anonymous, choosing to use textual annotations to tell this story, but make references to specific places and spaces which personalise the content. Their voice is strong, and the negative voices of others appear in speech bubbles throughout to signify real events. The preservation of the anonymity of the storyteller, and of those implicated in the story, is clearly important, and various techniques are cleverly employed to this end.

This storyteller reflects on their childhood experiences, university experiences, relationships and experiences within the workplace. There are varied examples of racism, discrimination and micro-aggressions outlined for the viewer. They use these experiences to challenge institutional culture – binaries and stereotypes – and choose buttons to symbolise diversity and homogeneity. (Length, Powerpoint)

Methodological and Ethical Considerations of Digital Storytelling in Practice

Digital storytelling is a multimodal methodology which draws on approaches within visual qualitative methods to frame data collection and analysis. In addition to their use within focus groups, digital stories have also been used within applied theatre practice to capture spontaneous stories (Flagler 2018). The creation of stories can be supported individually within and beyond a workshop activity, or collectively using a story circle approach (see storycenter.org). Each story is treated as a data artefact. Sampling can be targeted to elevate counter stories. Using the principles of positive action – evidence of disadvantage and proportionate activity – alongside methodological justifications provides a clear defence against criticisms of bias.

The analysis of digital stories is complex and can apply techniques such as grounded theory (Austen 2020) or discourse analysis (Jewitt and Oyama 2001). This analysis may also require further exploration of the storytellers’ meaning and intent through additional research methods (Gachago et al. 2014).

Ethically, voluntary informed consent to discuss, create, publish, and analyse digital stories should be continuously sought. There should be an open and honest discussion with storytellers about anonymity, which, if necessary, can be assured via digital techniques (stock images, no audio narration). Withdrawal should be an option that is not restricted to a time period, and the benefits of participation without publication should be acknowledged and respected. It is important that ethical reviewers are aware of these intricacies. Facilitators should be trained to support the storytelling process, and this should include the exploration of racial bias in grand narratives and the positioning of counter stories as other.

The risks of storytelling within this context – practical, emotional, reputational – should be explored with all storytellers. Trimboli (2018) warns that there is a risk that digital stories reinforce cultural norms and otherness (her reference is Whiteness in Australia). She suggests “digital stories are often celebratory, prescriptive, sentimental or nostalgic, and not always productive in engaging with the borders of the culturally diverse experience” (p. 55). The stories of People of Colour can be seen as heroic and politicised such that the impact on change is minimal.

Storytelling also risks appropriation by the privileged. Huber (2009, p. 650) warns that:

when adapted in educational research and pedagogical practice, it is important to recognize testimonio as a tool for the oppressed, and not the oppressor. Testimonio should not function as a tool for elite academics to ‘diversify’ their research agendas or document their personal stories.

Huber’s (2009) experience of defending her participants (Communities of Colour), her framework (LatCrit), her method (testimonio) and her epistemology (Chicana feminist) as academically robust is an important lesson for those embarking on storytelling approaches within organisations of higher education. These risks are not specific to digital storytelling; any discussion about Whiteness with members of the White majority risks privileging this discourse, and can offer an unpacking or off-loading of guilt without an obligation to positively change behaviour (Margolin 2015). This conclusion was reinforced during the aforementioned project (Austen and Jones-Devitt 2018).

Conclusion

Counter storytelling, in its various forms, has the potential to impact positively on higher education institutions and act as an anti-racist methodology. Qualitative digital storytelling can address the data privilege and methodological biases that exist in educational policy-making and practice, and challenge the dominance of quantitative data by seeking out unknown voices. This chapter has challenged those collecting data in HEPs, whom the author has termed institutional researchers and evaluators, to critically reflect on the racial biases which exist in their methodological and ethical practices. Furthermore, developing knowledge of a broader range of methodologies through evaluation literacy would recognise counter stories as valid (authentic and trustworthy) in their creation and use.