1 Assessment in Organisations

Assessment in public and private organisations is a formal process, or series of activities, concerning the planned activities of the organisation, group or individual. Its purpose is to reach an informed judgement based on research, data processing and the interpretation of verifiable information, and on communication and negotiation between the organisational actors involved in the process. In public and private companies, assessment is a process adopted in the management of:

  • systems of planning and monitoring (budgeting). The assessment consists of an evaluation of the efficient use of the resources assigned to the various organisational units in relation to management objectives laid down in advance;

  • systems for the assessment of individual performance. In this case the assessment is an evaluation of the achievement of individual objectives laid down in advance, linked to the allocation or withholding of pre-defined incentives.

An assessment may therefore be classified on the basis of the objectives that give rise to and justify it.Footnote 1

Organisational assessment pursues one of two alternative objectives: the development or the monitoring of the organisational behaviour of actors. Monitoring assessment is an evaluation of the performance delivered in relation to the expected level of performance, aimed at verifying compliance with the agreements, rules and responsibilities of the individual actors or organisational groups, and resulting in the allocation or withholding of resources, incentives or sanctions.Footnote 2 Training and development assessment, on the other hand, is an evaluation of the services provided that enables the individual undergoing assessment to gain insight into his or her shortcomings, with a view to improving performance in the future. Table 2.1 below provides an overview of the characteristics of the two types of assessment.

Table 2.1 Types of organisational assessment

The two types of assessment may be further distinguished by the type of contract relating to the objectives agreed between those performing and undergoing the assessment, the salient characteristics of which are shown in Table 2.2 below.

Table 2.2 Types of organisational contract

The organisational aims and the types of organisational contracts are the two elements that mark the distinction between monitoring assessment and training and development assessment. Monitoring assessment is imposed on the individual making the assessment by the organisation, as a hierarchical responsibility, and this is reflected in relations with the individual who is subject to the assessment. This individual may fear the assessment because the (uncertain) outcome has implications in terms of rewards and sanctions. This type of assessment may also be problematic for the person carrying out the assessment, who is required to play the part of the judge. In the case of a negative outcome, it may have an impact on relations with the person subject to the assessment: these relations may be ongoing and may have an impact on the performance of the organisational unit on which the person carrying out the assessment will subsequently be judged.

Training and development assessment, although initiated by the organisation, is not imposed, but left to the discretion of the actors involved. Such assessment is seen as desirable by the person subject to it, who is prepared to accept well-formulated criticism from the person carrying out the assessment in order to improve his or her performance and to acquire new skills. The assessment is also accepted by the person performing it because, in relations with the person subject to the assessment, his or her role is that of a mentor (providing support and assistance), protecting and improving relations with the person subject to the assessment and therefore also his or her contribution to the performance of the organisational unit.

The two types of assessment are distinct and alternative, but at the same time, they may take place in parallel. Training and development assessment is, albeit only in part, a performance assessment (the judgement expressed concerns individual merit); performance assessment is necessarily also a form of training (the evaluation is useful for learning and improvement). The element that distinguishes the assessment and legitimates its alternative function is the underlying organisational objective (why it is carried out) that characterises all the remaining organisational variables.

2 Assessment by Students in Italian Universities

Assessment by students presents certain specific characteristics that are worth examining. The procedure takes the form of a monitoring assessment. It is required of all Italian Universities by Act no. 370/1999: Article 1(1) requires each University to set up an internal system for the assessment of its teaching programmes, and Article 1(2) requires the assessment unit to carry out a periodic survey of the opinions of students about the teaching programmes and to submit a report to the Ministry of Education, Higher Education and Research and the national assessment unit (CNVSU) no later than 30 April each year. The Ministry uses student evaluations for decision-making in relation to two matters: the setting up of courses and the allocation of funds (the 3-year planning fund). In particular, student evaluations are used as:

  • a quality assurance instrument, specifically as an indicator of effectiveness (the level of satisfaction of the students in relation to specific courses, pursuant to Article 1(2) of Act no. 370 of 19 October 1999), for the approval of courses to be implemented (Ministerial Decree no. 244/2007, on the necessary requisites for the setting up and implementation of courses);

  • a quality indicator (Article 11(3): the percentage of courses in which the evaluation of the students is above the national average, in relation to the faculty groupings defined in Annex A.2 of Ministerial Decree no. 362/2007) for the (ex post) evaluation of the results of the implementation of the University programme for the 3-year period 2007–2009 (Ministerial Decree no. 506/2007, Indicators for the assessment of the results of the 3-year programme 2007–2009).

The assessment by the students concerns the quality of the service provided by the lecturer, the Faculty Council and the Academic Senate. It seems appropriate and useful to refer to the concept of service (and to the models proposed by the related scientific research) since teaching is a set of intangible activities, utilised by the client (the student) who pays for the service in a regulated market (in which the academic qualification has a legal value).Footnote 3 As a result the judgement expressed by the student is necessarily subjective, but no less reliable for this reason. The service consists of the performance of certain activities supplied by organised actors. In the specific case, the evaluation by the students concerns the package of services supplied by a range of actors, as individuals and groups.

To be precise:

  • the individual lecturer is subject to assessment with regard to the delivery of the principal service consisting of teaching (for example, with regard to the planning of the course (contents and teaching methods), the amount and quality of the teaching materials, the clarity of the explanation, the level of interest aroused in the students, supplementary teaching activities, availability);

  • the faculty members in the Faculty Council with regard to teaching resources, the use of which is of central importance in the individual Faculties (teaching programme, lesson timetable, number of examinations, tutorial services, accessibility of the library, and so on);

  • the academic and administrative staff who serve on the Academic Senate, with regard to auxiliary resources: the teaching facilities (the number and quality of lecture rooms, laboratories, computer workstations and libraries), the services providing support for students (bursaries), and the administrative facilities (student registration offices, placement, career guidance, and so on). The quality of these auxiliary services, together with that of the central services, is intended to induce the student to choose a particular university rather than those with which it is in competition;

  • the group of academic and non-academic staff who make up the internal assessment unit, which, pursuant to the legal provisions, is responsible for the quality of the services for collecting, processing and disseminating the student evaluations, in relation to the student body, the University, and the Ministry.

In short, the assessment by the students is an organisational process that is intended to evaluate the services provided by multiple actors in the University (individually and in groups).

3 Assessment by Students as an Organisational Process

The assessment by the students, considered as an organisational process, may be examined from three analytical perspectives: the measurement, the cognitive, and the strategic perspective.Footnote 4 The priority given to one of the three perspectives has implications for the quality of the assessment, the planning of the process, the organisation of data collection, the processing of the assessment data, and the dissemination of the results of the assessment.

The measurement perspective conceives of the assessment as individual decision-making by the person performing the assessment (the student), who is required to formulate an accurate assessment with an adequate instrument of measurement, i.e. the assessment form or questionnaire. In other words, the assessment is a problem of measurement, concerning in particular the rating scale to be used for the purposes of reliability (with regard to the stability of the assessments made by the students), as well as the type and number of questions and the methodology for processing the data collected. This perspective, based on the assumption that the assessments carried out are technically reliable, was found to be of limited interest when it was shown that the format does not influence the quality of the evaluation in a consistent manner, and that no particular format is significantly better than the others.Footnote 5

The cognitive perspective places the emphasis on the cognitive processes of the person carrying out the assessment, because the quality of the assessment depends on these processes. The student perceives the teaching environment (the teachers on the individual courses, the other students on the course, the teaching rooms, and so on) and memorises these experiences in the form of cognitive structures (schemes, scripts, cognitive maps, prototypes, examples), which are then utilised in perceiving the stimuli transmitted by the lecturer. The assessment is the result of the codification, processing and interpretation of the stimuli transmitted by the lecturer that the student commits to memory. In a cognitive perspective, the limited information about the performance of the lecturer (for example, the preliminary stages in which the teaching material is planned and prepared) and the limited reasoning powers of the student are overcome by the cognitive strategies adopted. These consist in the use of heuristics in decision-making (those pertinent to evaluation are availability, representativeness and anchoring): cognitive short-cuts that allow problems to be solved in a simplified form, without access to all the necessary information or the computational ability needed to process it. The use of heuristics in decision-making may give rise to bias in the assessment (in this specific case the effects are leniency, strictness and proximity).Footnote 6 In order to reduce or prevent the assessment errors arising from bias, the cognitive perspective proposes two measures to improve the reliability of the evaluation: improving the capacity for judgement, and making more efficient use of the information held in the memory of the person carrying out the assessment. The first measure can be implemented by means of training courses aimed at enhancing the understanding of the scope of the service to be assessed and the proper use of the assessment scale. The second consists in the keeping of a diary by the person carrying out the assessment, in order to make more effective use of the information available about the service to be assessed.

The cognitive perspective gives priority to the evaluation of performance in employment relations between managers and subordinate employees, whereas assessment by students presents particular characteristics distinguishing it from this situation, as noted above. With regard to assessment error, the research by Schein and Hall on assessment data collected from two groups of students attending undergraduate courses (with no work experience) and master’s courses in management (with previous work experience) suggests that assessment by students is subject to limited bias, and as a result the quality of the evaluation is not undermined.Footnote 7 The two groups carrying out the assessment, differentiated in terms of experience and therefore also memory, provided convergent judgements on the qualities distinguishing a good lecturer from a bad one, that is to say, intellectual and communicative ability, energy and personal enthusiasm, and the level of commitment and responsibility in performing the teaching role (providing support for learning). In particular, the assessments of good lecturers (those from whom the students had learned the most) were more extreme than those for the bad teachers.

The limit of the two perspectives outlined above is that they conceive of assessment by students in isolation from the organisational context in which it takes place. As a result the lack of reliability of the judgements expressed by the students may be explained by technical shortcomings relating to measurement (for the measurement perspective) or by cognitive limits, in particular decision-making bias (in the cognitive perspective). In other words, the quality of the assessment may be undermined by unintentional factors which the person performing the assessment is unaware of, according to these two research perspectives. In fact, as already noted, assessment concerns the performance of actors and groups of actors with interests that are partly shared and partly distinct with regard to the results of the assessment expressed by the students and their dissemination. In the logic of monitoring assessment, the judgements of the students can be and are utilised as a means of influence or power among the actors concerned.

The strategic perspective considers assessment in terms of organisational games, including issues of power.Footnote 8 This perspective conceives of any organisational system (including individual universities) as a political system of an indeterminate kind, never completely controlled or regulated, and this underlies its existence as a social system. It is a universe characterised by conflict, in which actors make rational use of the sources of power at their disposal. In the organisational system there are no common objectives, only shared ones, because the division of labour assigns to each actor or group a particular and limited objective. Each actor has an interest in presenting these limited objectives as general, in order to give greater value to their contribution to the survival and development of the organisation. Reference may be made in this connection to claims by faculty members about the superiority of their courses or area of study.

Organisational rules delimit the area of uncertainty of individual and group behaviour, but are never completely binding on individual actors, who always maintain a certain degree of freedom and the possibility to negotiate. The degree of freedom of individual actors is a source of uncertainty in relation to their behaviour in dealings with other actors and with the organisation as a whole. Every actor therefore has a degree of power over the other actors, which may be used to reduce the interdependence between the actor and the others. In this organisational context, on what conditions is it possible to achieve cooperation among actors who carry out interdependent activities, enjoy a degree of freedom and pursue divergent if not contradictory interests, in order to realise shared objectives? To achieve negotiated cooperation between the actors, the strategic perspective proposes the concept of the organisational game. This is the instrument devised by the actors to regulate cooperation between them, because it reconciles freedom and constraint. Players remain free, but in order to win are required to:

  • comply with the rules (because they ensure continuity of relations between the actors);

  • partially satisfy the expectations of others. Each actor exerts power over the others in a reciprocal manner, and allows the others to exert power over him or her; the other actors thus become a constraint;

  • adopt a rational strategy in relation to the nature of the game.

In conclusion, the game is always a matter of cooperation, and its outcome is the achievement of the shared objectives of the organisation. The rules of the game determine the possibility of winning or losing, delimiting the range of winning strategies that may be adopted by the actors. Each actor simultaneously seeks to constrain the other actors, taking advantage of the opportunities offered by the game to improve his or her own situation (offensive strategy), and to counter their attempts at constraint by widening his or her own margin of freedom and powers of action (defensive strategy). There is no irrational behaviour, only strategic behaviour that is stable and autonomous; the regularities of this behaviour need to be identified and observed empirically in relation to the organisational context.

The crucial factors of uncertainty for the organisational system relate to four sources of organisational power available to each actor, which may be defined in relation to the organisational structure of each University as follows:

  • the possession of a particular skill (relating to research and/or teaching), whether professional or contextual (i.e. concerning relations within the specific organisational structure of the University and the higher education system), that would be difficult to replace;

  • influence over relations between the University and its environment (local, ministerial, and so on), which determines the balance of power, owing to the indispensable role of the University as an intermediary and interpreter between different and at times conflicting agendas (suppliers, students and their families, businesses, institutions);

  • the control of communications and the flow of information, since the place occupied in a network of communications and the means of transmission of information (that may be delayed, filtered, or manipulated) has an impact on the ability of the recipient of the information to act. Communication may take place in return for safeguards and favours;

  • the existence and implementation of organisational rules. The normative framework limits the powers of those in a subordinate position, but also the arbitrary power of those at the upper end of the hierarchy, since they may not have the means to obtain from their subordinates any more than the rules provide for.

The game of assessment by students requires the involvement of various actors, both individual and collective, in the University. Each of them may count on sources of power giving rise to a degree of uncertainty in their relations with other actors, and to specific forms of behaviour (defensive or offensive) as summarised in Table 2.3.

Table 2.3 Actors, sources of power and strategic behaviour

For each actor the game of assessment by students represents an opportunity or a threat to their area of autonomy. The resulting organisational behaviour gives rise to alliances among those with common interests. The virtuous Faculties will attempt, in a unified fashion, to benefit from the allocation of resources, at the expense of the inefficient ones. The assessment unit should be able to count on the support of the Rector of the University and the student representatives to publish the results of the assessment, not just in aggregate terms but also course by course.

In practical terms, the strategic perspective conceives of assessment as a process of measurement and communication that is rooted in and characterises the organisational relation between those carrying out the assessment and those subject to it. It is based on the assumption that all the actors involved in the process are active participants who take part in the game and who, within the rules, pursue their personal agendas in a discretionary manner.

In the strategic perspective, the quality and reliability of the assessment by the students of the performance of their teachers reflect the conscious choice of those carrying out the assessment whether or not to express their secret knowledgeFootnote 9 about their teachers’ performance.

In other words, the strategic behaviour of the student in not expressing an opinion, or in supplying an unreliable one, should be seen as a conscious choice made as a matter of convenience (because it safeguards or enhances the relationship with the lecturer), rather than as the result of a lack of the skills or ability needed to carry out an assessment. The failure to provide an assessment, or the provision of an unreliable one, reflects the student’s position of conflict, which, if collective, gives rise to the need for organisational strategies that recognise it and make it explicit. The collection and dissemination of student assessments of teaching programmes may reduce this conflict and result in changes to the organisation.

4 The Content of Assessment by Students

Students are asked to carry out an assessment of the teaching services provided by the University at which they are enrolled. In order to understand the content of the assessment required of them, reference may be made to the hierarchical model proposed by Kirkpatrick,Footnote 10 which distinguishes four types of content to be evaluated by participants in training courses, corresponding to four different levels:

  • level 1: response

  • level 2: learning

  • level 3: organisational behaviour

  • level 4: organisational results

The model states that each change brought about by the training programme at one level in turn produces effects on the following level, on the basis of a cause-and-effect relation and a hierarchical order (from level 1 to level 4). Specifically, a positive response influences the motivation to learn; learning in turn gives rise to new behaviour, which leads to better results for the organisation.
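The hierarchical, cause-and-effect ordering of the levels can be illustrated with a short sketch. The following Python fragment is purely illustrative (the class and function names are ours, not part of Kirkpatrick's model); it encodes the four levels and the assumption that each level feeds the next:

```python
from enum import IntEnum
from typing import Optional

class KirkpatrickLevel(IntEnum):
    """The four hierarchically ordered levels of Kirkpatrick's model."""
    RESPONSE = 1   # satisfaction with the course ("reaction" in Kirkpatrick's terms)
    LEARNING = 2   # knowledge, skills and attitudes acquired
    BEHAVIOUR = 3  # transfer of what was learned to the workplace
    RESULTS = 4    # overall organisational performance

def next_level(level: KirkpatrickLevel) -> Optional[KirkpatrickLevel]:
    """Each level is assumed to produce effects on the level that follows it."""
    if level < KirkpatrickLevel.RESULTS:
        return KirkpatrickLevel(level + 1)
    return None

# The chain runs from level 1 to level 4: response -> learning -> behaviour -> results.
assert next_level(KirkpatrickLevel.RESPONSE) is KirkpatrickLevel.LEARNING
```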

The response reflects the degree of satisfaction of the participant with the experience of the course.Footnote 11 The response may be defined as the degree to which the participants liked the course. An evaluation of the response is similar to measuring the satisfaction of those taking part in a conference: it does not measure whether any learning has taken place. The level of satisfaction with the teaching programme therefore does not provide a guide to the effectiveness of the course. The response expresses an evaluation of different aspects of the course: the degree to which it meets the expectations and needs of the participants, the topics examined, the lecturer, the teaching material, the degree to which it was perceived to be a welcoming experience, the practical aspects (teaching rooms, laboratories, facilities), and the other participants on the course. The response may change over time in relation to the experience of the participant, and may be positive in terms of the topics dealt with even if these are considered to be of limited utility to the participant. Hence the need to evaluate the next level.

Learning may be considered in terms of the development, thanks to the course, of the knowledge of the participants (knowing), their skills (knowing how to do), and their attitudes (knowing how to be).

In the specific case of university courses it should be noted that the assessment of learning concerns in particular the evaluation of the acquisition of explicit knowledge, in other words objective knowledge (the result of scientific research) that is abstract and may be codified, formalised and therefore transferred and utilised by the participant.Footnote 12 An evaluation of learning is functional to understanding the effectiveness of the teaching methods utilised during the course. Learning does not necessarily lead to the automatic application of what the students have learned in class. Hence the need to evaluate the next level.

Behaviour consists of the transfer in the workplace of knowledge, skills (knowing how to do) and attitudes on the part of the students. Behaviour is situated in the workplace and not in the classroom; it is therefore influenced by the organisational context that can facilitate or inhibit the types of behaviour expected by the organisation. Behaviour is difficult to measure because it is hard to predict when and whether it will take place. However, it is important to verify it in order to monitor the effectiveness of training. Hence the need to evaluate the next level.

The results achieved by an organisation consist in its overall performance. In the specific case the results of the activities of the University are of two types:

  • efficiency in the use of resources (lecturers, lecture theatres, technical and administrative staff) in supplying teaching services under the supervision of the Ministry and the national assessment unit (CNVSU);

  • the contribution to the creation of value for the end-user (graduates, businesses, public bodies) by means of the quality of the knowledge transmitted to the students.

Each of these elements of training subject to assessment requires the use of specific monitoring instruments, as shown in Table 2.4 below.

Table 2.4 Elements and instruments of assessment in training

Kirkpatrick’s model has been extended by Hamblin,Footnote 13 who argues that the content evaluated at each level is useful insofar as it can be compared with a corresponding initial (teaching) objective. The initial objective, laid down in advance and relating to the response, determines certain choices in the teaching programme, which will elicit a response that can be measured and compared with that objective. In the specific case, in order to fully appreciate the student responses, the individual faculty members, the Faculty Council and the Senate should lay down the initial objectives in advance in terms of response, learning, behaviour and results, that is to say the teaching objectives or descriptors.

The positive aspects of the Kirkpatrick model may be summarised as follows:

  • responses are recorded at a low cost at the end of the courses, since they are based on pencil and paper questionnaires;

  • the evaluation of learning is feasible when it is a matter of assessing practical knowledge and abilities, as in the case of technical education;

  • the evaluation of organisational behaviour is possible at low cost, specifically when the expected types of behaviour, and the foreseeable exceptions in the interaction between persons and machines, are laid down in advance.

The limits of the model are constituted by:

  • the fact that the nature of the model is deterministic. It has not been proved scientifically that the positive outcome of an assessment at Level 1 determines the chances of success in the later levels;

  • the final evaluation of the training provided (in class or in the workplace) conceals any critical aspects of the initial phases (the analysis of educational needs and the design of the training) if these are not laid down at the planning stage of the teaching programme;

  • the evaluation of behaviour and of organisational results is demanding in terms of the resources available, the costs involved, and the time lag between classroom teaching and the workplace.Footnote 14

This overview of the positive aspects and the limits of the Kirkpatrick model makes it possible to reach an informed evaluation of the positive aspects and limits of the responses to the teaching programmes undergoing monitoring. First of all, it must be noted that, together with the learning results, the evaluation by students is the only formal assessment carried out in all Universities, which are not in a position (or do not deem it beneficial) to evaluate the other aspects of the teaching process. These two aspects are useful for an evaluation of the teaching provided and the learning that takes place. The assessment by students is also the most widely measured of the four levels, owing to its low cost, its ease of implementation, the speed of the feedback provided by the participants and, above all, the fact that it is an indicator of quality in a customer-satisfaction perspective. The assessment is a matter of perception, and therefore subjective: it reflects the experience of participants in a situation of cognitive dependence and concerns the aspects of the course that they know from direct experience, in other words the context, the how and, to a limited extent, the what (which they will be able to evaluate fully only after the course, in the workplace). It is a judgement limited to the relation between the faculty member and the student, necessarily confined to the processes taking place in the lecture room, and it is of value to both parties as they seek confirmation of their respective roles and behaviour.Footnote 15 In a service management perspective, assessment by students is a useful form of feedback for the lecturers, all the more so when the lecturers (and the organisation designing the questionnaire) state the objectives and purpose of the evaluation in advance.Footnote 16 Finally, it may be argued that a positive response will have a positive impact on the atmosphere in the lecture room and on the later stages of the programme, although there appears to be no scientific evidence in support of this claim.

Student responses are the aspect that has attracted least attention from academic researchers, who tend to focus on the other levels. This explains the current value and longevity of the Kirkpatrick model. The scientific literature has identified three key aspects to explain the response in terms of satisfaction by participants at the end of the course: the perceived effectiveness of the course; the perceived utility of the course; the perceived effectiveness of the performance of the lecturer.Footnote 17 These three determinants in turn are explained or include further specific items that have an impact on them.

The perceived effectiveness of the course includes the course facilities (accessibility, coffee-break facilities, suitability of the lecture rooms, air conditioning, acoustics, furnishings, teaching resources such as blackboards, whiteboards and simulators, the chance to communicate, Internet workstations, and so on); the organisation of the course (timetables, number of sessions, teaching load, total length of the course); and the quantity and quality of the teaching material.

The perceived utility of the course of study may be explained by the perception of acquiring the competences (knowledge and skills) necessary to perform one’s current work more effectively and/or to improve one’s role in the organisation (prestige, self-confidence, and so on); the perception of long-term personal growth or development, either within or beyond the organisation; and the perception of a proper balance between the theoretical and practical aspects of the course.

The perceived effectiveness of the performance of the lecturer depends on mastery and expertise in the topics examined; the teaching style adopted during lectures; a consistent and varied use of teaching methods (lessons, guided discussion, group work, role play, case studies, workshops) and effective time management (complying with the timetable).

A study by Giangreco, Sebastiano and Peccei set out to verify empirically the findings of the existing scientific research, attempting to answer the question: which of the three factors identified in the literature (the perceived effectiveness of the course of study, the perceived utility of the course of study, or the perceived effectiveness of the performance of the lecturer) has the greatest influence on the satisfaction of the course participants?

The study was based on 2,697 completed questionnaires out of the 3,698 distributed (a response rate of 72.9%), collected from participants in courses in the province of Varese funded by Fondimpresa, the bilateral inter-category fund set up by the social partners Confindustria and CGIL, CISL and UIL, in the context of the PISTE programme (process innovation, new technologies, development of management systems, marketing). The questionnaires were filled in by high-school and university graduates, blue- and white-collar workers, and middle managers in 208 undertakings of all sizes, from micro-enterprises (fewer than 10 employees) to large companies (more than 250 employees). The courses ran from March to December 2005, during which time 7,230 hours of training were provided across 307 training modules.

With regard to the research methodology, overall satisfaction with the course is the dependent variable, explained by three independent variables (the perceived effectiveness of the course of study, the perceived utility of the course of study, and the perceived effectiveness of the performance of the teacher). The end-of-course questionnaire consisted of 13 items: three related to the effectiveness of the course, five to the utility of the course, and five to the effectiveness of the performance of the teacher. The questions were based on a five-point Likert scale (from 1 = total disagreement to 5 = total agreement). The hypotheses were tested by means of standard deviations and multiple regression analysis.
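To make the methodology concrete, the following is a minimal sketch of a multiple regression of overall satisfaction on the three perceived factors. The data are synthetic and the variable names are ours (the original items and dataset are not reproduced here); the sketch assumes the numpy and statsmodels libraries:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2697  # completed questionnaires in the study (72.9% of the 3,698 distributed)

# Hypothetical 1-5 Likert-scale scores for the three perceived factors
# (the independent variables); in the study each factor was measured by
# several questionnaire items, reduced here to a single illustrative score.
effectiveness_course = rng.integers(1, 6, n).astype(float)
utility_course = rng.integers(1, 6, n).astype(float)
effectiveness_teacher = rng.integers(1, 6, n).astype(float)

# Synthetic dependent variable: overall satisfaction, loosely driven by the
# three predictors plus noise (the weights are purely illustrative).
satisfaction = np.clip(
    0.2 * effectiveness_course
    + 0.4 * utility_course
    + 0.3 * effectiveness_teacher
    + rng.normal(0.0, 0.5, n),
    1.0, 5.0,
)

# Ordinary least squares with an intercept: the relative size of the
# coefficients indicates which perceived factor best predicts satisfaction.
X = sm.add_constant(
    np.column_stack([effectiveness_course, utility_course, effectiveness_teacher])
)
model = sm.OLS(satisfaction, X).fit()
print(model.summary())
```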

The results of the research may be summarised as follows:

  • the three perceived factors (the independent variables), although interrelated, are distinct in influencing the overall satisfaction of the participant (the dependent variable);

  • the three perceived factors taken together have a significant impact on overall satisfaction;

  • the utility of the course is the strongest predictor of overall satisfaction, followed by the effectiveness of the teacher and the organisation of the course;

  • the performance of the teacher does not compensate for any shortcomings in terms of the content and organisation of the course; in the same way the quality of the contents and the organisation of the course do not offset any shortcomings in the performance of the teacher;

  • the level of satisfaction recorded among the participants was on average higher for the courses with “soft” (relational) contents compared to those with “hard” (technical) contents.

The research outlined above, albeit within the statistical limits pointed out by the authors, provides material for discussion about the use and utility of assessment by the course participants and the need to ascertain whether it presents similarities to the assessment by students.

5 The Case of the University of Sassari

The case examined in the present study is based on the personal experience of the author in his capacity as President of the Assessment Unit of the University of Sassari. The case is of particular interest in that the Assessment Unit introduced the publication of the results of the assessment by the students not just at aggregate level for Faculty courses, but also at the level of individual courses for each lecturer. This is not the first time that results have been published in this way: the University of Venice was the first to take this step, but the experiment was immediately terminated due to the opposition of faculty members.

The evaluation of the courses by the students was carried out by means of a questionnaire, extensively used at national level, replicating the evaluation of teaching programmes adopted by the national assessment unit (Comitato nazionale di valutazione) in 2002 (Document no. 09/2002), in order to safeguard the homogeneity and comparability of data at national level.Footnote 18 In the academic year 2006/2007, 1,360 university courses were subject to monitoring out of an estimated 1,659 courses activated, a coverage rate of 82%. The objective over the next two academic years is to bring this figure as close as possible to 100%. The questionnaires collected totalled 27,303, or 3.3 questionnaires for each active student. The result of the evaluation was that 90.71% of the university courses received a positive evaluation, whereas 9.29% were given a negative evaluation.
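As a quick consistency check, the figures just quoted can be reproduced with a few lines of arithmetic (the number of active students is implied by the ratio of questionnaires to students; it is not stated in the text):

```python
# All input figures are taken from the text above.
courses_monitored = 1_360
courses_activated = 1_659            # estimated
questionnaires_collected = 27_303
questionnaires_per_student = 3.3

coverage_rate = courses_monitored / courses_activated
implied_active_students = questionnaires_collected / questionnaires_per_student

print(f"coverage rate: {coverage_rate:.0%}")                       # -> 82%
print(f"implied active students: {implied_active_students:,.0f}")  # -> 8,274
```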

From the very beginning the results of the assessment were published and commented on at aggregate level for each Faculty and for the University, and reported to the faculty members responsible for the courses assessed, and to the Deans of the Faculties, in the form of disaggregated data. In this connection the guidelines relating to the teaching responsibilities of faculty members state that “The Faculties and teaching structures involved shall publish the results of the teaching activity carried out by the faculty members, as shown by the findings from the Internal Assessment Unit of the University and by other forms of evaluation carried out by the individual Faculties and teaching structures”.

Experience has shown that the potential of the questionnaire data has been exploited only to a limited extent. A survey carried out in recent years among Faculty Deans has shown that assessments by students are applied only in part. Further evidence in support of this claim is to be found in the repeated requests by student representatives on Faculty and University councils for more effective feedback in response to their observations.

The Assessment Unit, in response to the most recent requests put forward in a responsible manner by the student representative at the University Conference on Teaching Services, took the decision to make the assessments by the students for individual courses available in a transparent manner on an experimental basis. This decision was taken in order to make the exchange of information between faculty members and students more symmetrical, and to provide the University with reliable information for planning future teaching programmes, in order to develop the scientific community of faculty members and students in the various Faculties.

After informing the Rector and all the Faculty Deans, the Assessment Unit decided to go ahead with the publication of the results on the University website (showing the mean values recorded) in relation to the individual courses subject to assessment, starting from the academic year 2006–2007. Reflecting the experimental nature of the initiative, the Assessment Unit made provision, at least in this initial stage, for individual faculty members to be exempted from the publication of the results. On an experimental basis, access to the data relating to the evaluation of the students attending the courses is to be confined exclusively to students and faculty members of each individual Faculty by means of a password issued to those entitled to access the data, in order to guarantee access to all the stakeholders in each Faculty, but not to external actors.

Table 2.5 Approval by faculty members for the publication of their course assessments

As a result, for faculty members who granted permission for their results to be distributed, it will be possible to examine (for each course subject to assessment by the students) the mean values obtained for each variable, for the academic year 2006/2007. With regard to faculty members withholding permission for their results to be distributed, the data to be made available (with the remaining data blanked out) will consist only of those variables not directly relating to the faculty member.

The decision of the Assessment Unit gave rise to contrasting reactions: alongside certain faculty members and Faculties raising objections, there were others who gave their approval. At a practical level, the game of assessment gave rise to responses that are perfectly comprehensible in a strategic-behaviour perspective. It should be noted that the argument that nearly all the faculty members put forward to justify their refusal to distribute the data concerning the students’ judgement of their courses was the violation of the privacy (of the faculty member). In this connection, mention should be made of the rights and duties of university students as specified on the website of the Ministry of Higher Education and Research, Title III, Article 86, page 86, which states: “The publication of results deriving from the analysis of the assessment forms, for each course of study, shall be carried out for all the Degree Courses of the University by suitable means. The results of the assessment forms filled in by the students shall be evaluated by the Assessment Unit of the University, with regard to the overall functioning of the University, and by the Joint Committee on Teaching, with regard to the provisions concerning the Faculties.”

To conclude this overview of the case of the University of Sassari, the figures concerning the granting or withholding of approval by the faculty members for the publication of the data concerning their courses are shown in Table 2.5.

The case displays all the dimensions of the assessment of university teaching by students described in the first part of the chapter: the content of the assessment, the technical tools, and the power strategies of the actors involved in the process. The main lesson of the case is the following: a multidisciplinary approach that considers the assessment as an organisational game makes it possible to ensure an effective assessment of university teaching by students.