Introduction

The notion of students having a voice and giving feedback on their teachers and units is not new. Early references indicate that students were expressing their views as far back as ancient Antioch in the time of Socrates (Marsh 1987) and in the medieval period (Knapper 2001). Traditionally, evaluation systems were used primarily to inform the improvement of teaching and learning. However, the establishment of external quality assurance bodies (particularly in the UK and in Australia), and the ever-increasing requirement for quality assurance and public accountability, has seen a shift in the use of evaluation systems, including their use for performance funding and as evidence for promotions and teaching awards (Meade and Woodhouse 2000; Nilsson and Wahlén 2000; Lecky and Neill 2001; Massy and French 2001; Hendry and Dean 2002; Scott and Hawke 2003; Chalmers 2007; Barrie et al. 2008; Arthur 2009; Shah and Nair 2012).

A plethora of literature has been published on student rating systems, student evaluation instruments (focusing on their dimensionality, reliability, validity and usefulness), the dimensions of teaching effectiveness, student and teacher bias in questionnaire responses, and the identification of excellent teaching and teachers (Marsh 1982, 2007; Abrami et al. 2007; Theall and Feldman 2007). Much of the research has been conducted in the US, Australia and Europe (Hirschberg et al. 2011), and numerous reviews provide a synthesis and critical appraisal of the literature (for recent reviews see Richardson 2005; Perry and Smart 2007; Hirschberg et al. 2011; Alderman et al. 2012; Benton and Cashin 2012; Spooren 2012). However, there is a lack of published research world-wide on the quality of student feedback or on what students actually say. Studies investigating the correlations between written comments in open-ended items and quantitative items of student evaluation questionnaires have reported mixed findings. Braskamp, Ory and Pieper (1981), in a study of 14 classes, found high correlations (0.9 and higher) between Likert-scale ratings on student questionnaire items and written student comments. In contrast, Alhija and Fresko (2009), in a study of 3,067 questionnaires collected from 198 undergraduate units, found that students provided feedback on topics not captured by the questions posed, reporting correlations between 0.2 and 0.5.

Whilst nearly all universities collect vast amounts of student feedback using evaluation instruments, including student comments, there are limited tools available for analysing the qualitative feedback. New research is emerging on the role of text mining, or educational analytics, for analysing qualitative data in student evaluations (Campbell et al. 2007; Chen and Chen 2009; Jordan 2011). Text mining is an automated process that identifies relationships within text to reveal patterns and frequencies and to predict the probability of relationships with other words. A tool called CEQuery, developed to analyse comments from the graduate Course Experience Questionnaire (CEQ) (Scott 2005), is currently being used by some Australian universities to analyse student evaluation comments (Oliver et al. 2006). CEQuery automatically classifies comments into five main domains (Outcomes, Teacher, Course Design, Assessment, and Support) and 26 subdomains. Others report using SPSS Text Analysis for Surveys to analyse qualitative data (Oliver et al. 2006; Pan et al. 2009). The CEQuery dictionary has been modified to suit the analysis of student comments at the unit level (Oliver et al. 2006). As only a few tools are available for efficiently analysing comments, this may partly account for the limited research in the field.
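
To illustrate the kind of dictionary-driven classification that tools such as CEQuery perform, the following is a minimal sketch in Python. The keyword lists, domain labels and example comment are hypothetical simplifications for illustration only; they are not the CEQuery dictionary itself.

# Minimal sketch of dictionary-based classification of student comments.
# The domains mirror those named above, but the keyword lists are
# illustrative only, not the actual CEQuery dictionary.
DICTIONARY = {
    "Outcomes":      ["skill", "knowledge", "understanding", "career"],
    "Teacher":       ["lecturer", "tutor", "teacher", "enthusias"],
    "Course Design": ["structure", "workload", "lecture", "online"],
    "Assessment":    ["assignment", "exam", "feedback", "marking"],
    "Support":       ["library", "resources", "blackboard", "help"],
}

def classify(comment: str) -> list[str]:
    """Return the domains whose keywords appear in the comment."""
    text = comment.lower()
    return [domain
            for domain, keywords in DICTIONARY.items()
            if any(keyword in text for keyword in keywords)]

if __name__ == "__main__":
    example = "The tutor gave prompt feedback on every assignment."
    print(classify(example))   # ['Teacher', 'Assessment']

In practice, a real dictionary such as CEQuery's is far larger and maps matches onto subdomains as well as domains, but the underlying keyword-matching principle is the same.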

Student evaluation instruments typically include one to five qualitative items (most often three) asking students about the most positive aspects of a unit and how it might be improved (Abbott et al. 1990; Sheehan and DuPrey 1999; Abrami et al. 2007; Oliver et al. 2008). This type of question is designed to assist in the quality improvement of teaching and learning in a unit. More recently, there has been a shift in the focus of student evaluations, and some suggestions have been made to change the wording of qualitative items to ask students about the process of their learning and intellectual development (Hodges and Stanton 2007).

Student comments provide valuable insights into the student experience (Braskamp et al. 1981; Lewis 2001; Zimmaro et al. 2006; Oliver et al. 2007; Alhija and Fresko 2009). It is generally accepted that, whilst online questionnaires generate lower response rates, students write more and longer comments in this medium than in paper-based questionnaires (Sorenson and Reiner 2003). Moreover, students tend to write more positive comments than negative comments in unit evaluation questionnaires (Braskamp et al. 1981; Sheehan and DuPrey 1999; Hardy 2003; Zimmaro et al. 2006; Oliver et al. 2007; Jordan 2011). Zimmaro and colleagues (2006) noted that negative comments were more specific, focusing on particular aspects of the unit, whereas positive comments were more general in nature.

Despite the frequency of positive comments, student feedback continues to be a source of anxiety for some academics, especially when they perceive the comments to be unjustified, unconstructive or cruel (Jordan 2011). Student feedback often reveals intellectual challenges faced by students in their learning, which provide insights into scholarly teaching (Hodges and Stanton 2007). Limitations of qualitative feedback include irrelevant statements, hurtful remarks and small numbers of unhelpful comments (Oliver et al. 2007; Jordan 2011). Jones et al. (2012) outlined the legal issues potentially associated with student comments, including defamation; breach of duty of care, trust and confidence; and breach of the right to privacy. In addition, student comments, if used improperly, may result in the dismissal of academics or, at worst, harm their reputation (Jones et al. 2012). In Australia, students can give feedback on their teaching and learning experiences using unit and teaching evaluation instruments, which are either separate questionnaires or combined (Barrie et al. 2008). This study, of an Australian university, provides insight into the extent to which student comments are offensive and unprofessional, and outlines the practices employed to educate students on giving appropriate feedback and to manage the publication of student comments.

Background information

Curtin University, a large Western Australian university of over 47,000 students operating out of 16 locations, has used eVALUate, an online student evaluation system for gathering student feedback on their learning, since 2006 (Tucker 2013a, b). The system uses two separate validated and reliable instruments: a unit survey and a teaching survey (Oliver et al. 2008; Tucker et al. 2012). The eVALUate unit survey comprises 11 quantitative items asking students about their perceptions of what is helping them achieve the learning outcomes, what they bring to their learning in terms of motivation and engagement, and their overall satisfaction. The unit survey also includes two qualitative items asking students what the most helpful aspects of the unit were and how the unit might be improved. The quantitative results of the unit survey are available online to all students and staff in the university; however, the student comments are available online only to the unit coordinator and the head of school (Tucker et al. 2013). These reports become available online immediately after student results are ratified and released to students (around 2 weeks after the evaluation period). A separate questionnaire, the eVALUate teaching survey, is used to collect student perceptions of the teacher and their teaching. The feedback from the teaching survey is available only to the teacher who requested the survey.

Within the online eVALUate survey system, students are prompted to give professional feedback and are given examples of how to be constructive. The advice to students is:

Be precise: Provide focused feedback that will allow us to determine what is working well for you, as well as what is not working so well.

Be specific: Wherever possible, try to provide examples to clarify your comments. Explain clearly why you are being complimentary or critical and be constructive by providing suggestions for how you think a unit could be improved.

Be professional: Remember that your feedback will be used to improve your course. Provide feedback on how the unit designers can better help you achieve the learning outcomes. In keeping with the University’s Guiding Ethical Principles, comments which could be considered offensive, abusive, racist or sexist are unacceptable.

In 2006, an analysis of all student comments (n = 29,491) collected from the unit survey instrument in a Semester 1 evaluation event revealed ten abusive comments, that is, comments containing offensive language or personal insults (0.03 % of the sample). At that time, the University Student Charter and Guiding Ethical Principles guided student and staff behaviour. While the ten comments identified in 2006 were neither acceptable nor condoned, the University Teaching and Learning Committee concluded that there were too few abusive comments to warrant removing students’ anonymity in the system. There was, and remains, a widespread belief that loss of anonymity would be a major disincentive for student participation and that response rates would decline.

More recently, and following the release of evaluation reports in 2010, anecdotal evidence suggested that there had been an increase in abusive comments and that this was not in keeping with the expected behaviours described in the University’s Code of Conduct. This Code requires that all parties, including all academic and general staff; visiting, honorary and adjunct academics; contractors; and volunteers, perform their duties professionally, with respect, integrity, fairness and care, and without harassment, bullying or discrimination. The Code is explained in a ‘Guide’, which outlines the following statements in relation to the roles and responsibilities of staff in communicating professionally (by extension, it is fair to expect the same of students):

  • do not use electronic messaging in an unprofessional manner;

  • social networking—do not use inflammatory, racist or offensive language, and never upload offensive or explicit written, audio or video content; and

  • discrimination and harassment—under this theme, the following examples of harassment were provided—insulting or threatening language or gestures; phone calls, letters or messages on electronic mail or computer networks that are threatening, abusive or offensive.

However, there is no explicit Code of Conduct for students. In its place, the Student Charter, developed in partnership between the University and the Student Guild, sets out the expectations and responsibilities of students. Students are expected to:

  • inform themselves of, and comply with, all relevant laws, University Statutes, rules, by-laws, the University’s Guiding Ethical Principles, policies and procedures relating to their rights as a student;

  • behave in an appropriate manner within the learning environment, showing respect for both staff and fellow students at all times; and

  • embrace and recognise diversity.

In 2010, the University also published a document titled ‘Student Conduct: your rights and responsibilities’ explaining students’ rights and responsibilities and the University’s Values (integrity, respect, fairness and care). On communicating using computers, students are told “You must not: stalk, bully or harass others”, and on cyber bullying the document refers to “sending cruel text or email messages”. Although this document is disseminated to all students, the extent to which students read and understand the information is unknown. Similarly, the extent to which students know about or read the Student Charter is unknown.

The aim of this study was to investigate the number of student comments identified as offensive or unprofessional in an online unit evaluation instrument at an Australian university.

Methods

To determine whether the number of offensive comments in eVALUate had increased since 2006, an analysis using the same methodology as the 2006 study was undertaken on one evaluation event in 2010 (the Semester 2 event). The total number of comments in the data was 44,876. As there were more student comments in 2010, a randomised sample of two-thirds of the comments (30,684, taken from 17,855 unit survey responses) was included in the analysis to be comparable to the 2006 sample. All comments were completely de-identified so that there was no way of knowing which student had made any comment. One person, an individual appointed at random, read all the comments and highlighted any comment containing a word considered to be offensive or unprofessional. This person signed a confidentiality agreement specifically related to the sensitive nature of the task, and ethics approval was granted by the University Ethics Committee. Comments were considered abusive if they were contrary to the spirit of Curtin’s Guiding Ethical Principles and contained offensive language (e.g. swear words); racist, sexist or personally abusive terms; or allegations of misconduct or criminal behaviour. In response to staff concerns, raised formally at university meetings and informally at professional development sessions for academics, a decision was also made to identify comments considered unprofessional (that is, language or terms which most would consider not abusive but inappropriate in a professional setting, e.g. ‘crap’, ‘damn’).
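
As a minimal sketch of how such a random two-thirds sample could be drawn and de-identified programmatically (the flagging of offensive words in this study was carried out by a human reader, not by software), assume the comments are held in a CSV file; the file name and column names below are hypothetical:

import csv
import random

# Sketch only: the file name and the "student_id" and "comment" columns are
# hypothetical. De-identification here simply drops the student identifier.
random.seed(2010)

with open("unit_survey_comments.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

sample_size = round(len(rows) * 2 / 3)      # two-thirds of all comments
sample = random.sample(rows, sample_size)   # simple random sample, no replacement

deidentified = [{"comment": row["comment"]} for row in sample]

with open("sampled_comments.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["comment"])
    writer.writeheader()
    writer.writerows(deidentified)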

Comments were categorised by level of inappropriateness (abusive or unprofessional) and by the intended target (e.g. teacher, unit, resource). In addition, a CEQuery analysis was undertaken to automatically classify the comments into five domains [Outcomes, Teacher (staff), Unit Design, Assessment, and Support] and 26 subdomains using a custom-tailored dictionary (as shown in Table 1). This analysis revealed the topics most commonly discussed by students overall and in response to each of the qualitative questions in the eVALUate unit survey: the Most helpful aspects of the unit and How the unit might be improved.

Table 1 The domains and subdomains within CEQuery

Results

The total number of completed evaluation questionnaires submitted in the evaluation event was 43,084, a university-wide response rate of 43.3 %. Overall, there was higher participation by female students (females = 47.5 %; males = 40.2 %), external students (external = 41.5 %; internal = 38.1 %) and Australian students (Australian = 45.3 %; international = 42.4 %). External students are enrolled in units requiring no face-to-face tuition (i.e. fully online units). Students aged 21–25 years were less likely to participate than students from any other age group (the age groups were 20 years and under; 21–25; 26–35; 36–45; and 46 years and over). The analyses also revealed that, for the quantitative items in the unit survey instrument, females and part-time students were more likely to agree with most items; these differences were only around 2 %. Greater differences (5–6 %) were reported in other groups: higher percentage agreement for items was reported by international students, students in older age groups, external students and students with higher semester weighted averages (that is, average grades). For the two qualitative items, 67.4 % included data in response to at least one qualitative item. Slightly more responses were made about the Most helpful aspects of the unit (55.5 %) than How the unit might be improved (51.5 %). Of the 41,906 unit survey responses, 64.1 % contained comments in response to at least one qualitative item. Students from Humanities and Health were more likely to provide comments (71.3 and 70.4 % respectively) than students from Science and Engineering (64.5 %) or Business (55.7 %). Table 2 shows the number of comments analysed from each faculty, the number of comments categorised as offensive or unprofessional, and the target of each comment.

Table 2 Comments categorised by level of inappropriateness and by the intended target

The table shows that, in all, 12 (0.04 %) abusive comments were identified in this sample. Five abusive comments were directed at teachers and seven were targeted at teaching and learning experiences. This suggests a very small, and arguably insignificant, increase in abusive comments in eVALUate since 2006. Forty-four comments (0.14 % of the sample) were identified as unprofessional. Of these, seven were directed at the teacher and 34 were about units (four of these were about the textbook).
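
For reference, these proportions follow from the sample of 30,684 comments described in the Methods: 12/30,684 ≈ 0.04 % for abusive comments and 44/30,684 ≈ 0.14 % for unprofessional comments.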

Analysis of the comments was not always straightforward: one comment, for example, referred to a teacher swearing in front of students:

“A reduction in the amount of times the tutor said [swear word removed] would aid in not distracting students from the tutorial topic being presented.”

Another comment used inappropriate language to report a positive experience with the unit:

“Its perfect. Don’t [swear word removed] with it.”

Both comments are included in Table 2.

The five topics students commented on most frequently were the methods of teaching and learning, learning resources, teacher quality and attitude, assessment standards and teacher accessibility. For the Most helpful aspects of units, students commented most frequently on the methods of learning and teaching in unit design, the learning resources, the quality and attitude of teachers, the structure and expectations in unit design, teacher accessibility and responsiveness, the standards and relevance of assessment, and teaching skills. For How units might be improved, students commented most frequently on the methods of learning and teaching, learning resources, the structure and expectations of units, assessment standards and the quality and attitude of teachers.

Discussion

The findings of this study are consistent with those of the study undertaken in 2006 (Oliver et al. 2007). Certain subgroups of students are more likely to provide feedback through unit evaluation questionnaires, particularly female students, international students, those in older age groups and students with a higher semester weighted average. The finding that females, part-time students, international students, students in older age groups, external students and students with higher semester weighted averages were more likely to agree with most items is also consistent. However, differences in student perceptions across subgroups are declining as student participation in evaluation questionnaires increases (Pegden and Tucker 2009).

Student comments are collected in large numbers by a university, but the number of people who read and review the comments is often restricted. For example, at the University, the unit evaluation questionnaire comments are available only to the unit coordinator and head of school. The comments, usually 138 characters in length (many of 350 characters) (Oliver et al. 2007), are used for course review and for research into specific topics of strategic interest to the university executive (Tucker 2013a, b). These comments are analysed using CEQuery and SPSS Text Analysis for Surveys and are reported only under the themes generated using these tools (Oliver et al. 2006; Tucker et al. 2012, 2013; Tucker 2013a, b).

The results of this study show that, overall, students’ comments are a rich source of feedback, with students commenting most frequently about the methods of teaching and learning. It is particularly notable that three of the seven subdomains most frequently mentioned as most helpful refer to teachers. Nevertheless, a very small number of comments made by students in unit evaluation questionnaires do contain words that are abusive or unprofessional. Abusive and unprofessional comments appear to be isolated and unsupported by other comments within the same unit. This suggests that the vast majority of students do not abuse the privilege of giving anonymous feedback. The findings of this study suggest that, on the whole, the current approach (on-going education on how to give professional feedback) is successful and that universities have a significant role in guiding the values and behaviours of staff and students. It is recommended that guidance on giving professional feedback be routinely outlined in student publications regarding their rights, responsibilities and conduct. Professional development for all staff and students on communicating appropriately and professionally should be on-going and relentless to minimise the impact such incidents may have, including the possible legal implications highlighted by Jones et al. (2012).

One comment revealed that the teacher was using abusive language in class, and hence the comment itself could be considered not to be abusive. This comment reveals that the Code of Conduct was not adhered to by the teacher. It also highlights that identifying abusive words alone, and possibly removing them, provides little insight into the problem faced by this student. This case highlights the importance of the student voice in understanding the student experience at university.

The unit coordinator and head of school are provided with guidelines, reinforced within the evaluation policy and procedures, on how they are expected to share the eVALUate results with their teaching staff and on their duty of care to their colleagues. These guidelines state that:

You may share comments which are general in nature (that is, in which staff are not identifiable) with all staff teaching in the unit. Some comments, however, could identify particular staff. It is appropriate to pass on those comments (both positive and negative) to the named staff member only. Any comments, in which staff are named, are confidential to that staff member (and to those charged with the coordination and management of the unit). Misuse of data from eVALUate reports will be dealt with according to [relevant staff agreements].

Student feedback which may be considered offensive or defamatory (this includes racist, sexist, personal and abusive comments, and allegations of criminal activity) may NOT be passed on to any staff member, or any student, by either the unit coordinator or head of school/faculty. Under no circumstances will a comment be tracked to identify any student.

The swear words identified in this analysis are of the kind frequently used in social swearing, as well as in the media; they do not refer to religious matters and are therefore examples of secular swearing. Nevertheless, although the number of abusive comments is comparatively small, such swearing is absolutely unacceptable. Staff and students should never be expected to have to deal with anonymous abuse. By the same token, using anonymity to abuse others is a form of human weakness, and evidence from student comments suggests that teachers occasionally demonstrate the same unacceptable behaviour when they talk to students and give feedback, anonymous or otherwise.

A number of suggestions are often made by academics to mitigate the number of abusive and unprofessional comments. First, it has been suggested that software be used to remove swear words prior to the release of reports to the relevant coordinator. To date, we have been unable to source software that can successfully remove words that are often spelt or annotated in creative ways. The manual reading and removal of comments is not feasible in such a large university, as it would be costly and would hinder the timely production of reports to teachers. A second suggestion has been to remove comments following the release of reports. This is a feasible option previously adopted within an evaluation system used within a school (Tucker et al. 2003, 2008). The following considerations, based on the experiences of the author, should be taken into account prior to removing comments after publication. A comment should only be removed following scrutiny by a group, such as a committee, comprising student representatives. Clear criteria should be used to ensure that words or comments are removed appropriately. It should also be recognised that the removal of words or comments post publication will not prevent the academic from seeing the comment or being offended by it.
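
As a minimal illustration of why simple automated filtering fails, consider the following naive keyword filter, a hypothetical sketch in which the placeholder word list and the misspelt variants are invented for illustration: it catches exact matches but misses creatively spelt or punctuated variants.

import re

# Hypothetical sketch of a naive swear-word filter. "badword" stands in for an
# actual offensive term; a real filter would hold a much longer list.
BLOCKLIST = ["badword"]

def censor(comment: str) -> str:
    """Replace exact blocklisted words with a placeholder."""
    for word in BLOCKLIST:
        comment = re.sub(rf"\b{re.escape(word)}\b", "[removed]", comment,
                         flags=re.IGNORECASE)
    return comment

print(censor("This unit is badword."))     # caught: "This unit is [removed]."
print(censor("This unit is b@dword."))     # missed: creative spelling slips through
print(censor("This unit is b a d word."))  # missed: spacing defeats the pattern

Even with normalisation or fuzzy matching, such filters cannot reliably anticipate every variant, which is consistent with the difficulty, reported above, of sourcing suitable software.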

Academics should be given strategies for interpreting reports, including the comments, as abusive or unprofessional comments are isolated and should be largely ignored. The academic should determine the proportion of negative to positive comments to assist in determining whether the comments are representative of the entire class or of a small minority of students (Lewis 2001). Support and appropriate mentoring by peers and the head of school are essential. Where a comment is found to be abusive, it may be appropriate to adopt an approach whereby a student who makes an abusive comment forfeits their anonymity and is identified. Such an approach would need to be thought through very carefully in terms of wording, the definition of abusive comments, and any unintended consequences.

In order for universities to decide on their approach, it is recommended that student comments be reviewed as part of the institution’s review processes (Chen and Chen 2010). One objective of a university education is to prepare students to evaluate their own education and make valid judgements (Jordan 2011). Educating students to give professional and constructive feedback on their experiences is essential in preparing students for self-reflection and evaluation.

Conclusion

This study of over 30,000 student evaluation comments revealed that students commented most frequently about the methods of teaching and learning. Teachers were frequently praised. A very small number of comments (0.04 %) contained swear words, some directed at teachers and more at the teaching and learning experiences. A slightly larger number of comments (0.14 % of the sample) were identified as unprofessional. It is recommended that universities adopt strategies to educate students and teachers in appropriate and professional ways of working together, to provide professional feedback that improves the student experience of teaching and learning, and to support and mentor teachers in their academic careers. Where student comments are unprofessional or abusive, academics should be provided with strategies for interpreting student evaluation reports so that these isolated comments do not detract from the rich feedback that students provide for improving their teaching and learning experience.