5.1 Introduction

‘Assessment’ seems to have become a term that means what you want it to mean. Sadler (2007) observed “that many of the terms we use in discussion on assessment and grading are used loosely. By this I mean we do not always clarify the several meanings a given term may take even in a given context, neither do we necessarily distinguish various terms from one another when they occur in different contexts. For example, the terms ‘criteria’ and ‘standard’ are often used interchangeably”. Yokomoto and Bostwick (1999) chose ‘criteria’ and ‘outcome’ to illustrate problems posed by the statements in ABET EC 2000 (ABET, 1997). In this text there is room for plenty of confusion between such terms as ‘ability’, ‘appraisal’, ‘assessment’, ‘capability’, ‘competency’, ‘competences’, ‘competencies’, ‘criteria’, ‘criterion-referenced’, ‘evaluation’, ‘objectives’ and ‘outcomes’, all of which will be used in the text that follows. Semantic confusion abounds, and there is disagreement even about plurals, as for example ‘competences’ or ‘competencies’.

McGuire (1993), a distinguished medical educator, argues that the use of the term performance is a naive disregard of plain English. We are not concerned with performance per se, “rather we are concerned about conclusions we can draw and the predictions we can make on the basis of that performance. And this is a high inference activity”.

Assessment is now commonly used to describe any method of testing or examining that leads to credentialing, and as such embraces ‘formative’ assessment, which, while not contributing directly to credentials, helps students improve their learning, and ‘self-assessment’, which may or may not contribute to those credentials (George and Cowan 1999). It may be carried out in real-time (Kowalski et al. 2009).

All such testing has as one of its functions the measurement of achievement, which is ipso facto a measure of a specific performance. That seems to be the way it is taken by the engineering community, for a crude count of papers presented at the Frontiers in Education Conferences between 2006 and 2012 in the area of assessment yielded some 80 (of 100) that had some grammatical form of the term ‘assessment’ in their title that might be relevant to this text, although only one of them included the term ‘performance’. Eleven included the term ‘competence’, which is clearly related to performance. Of these, ten were of European origin.

Miller (1990), a distinguished medical educator, said competence was the measurement of an examinee’s ability to use his or her knowledge, this to include such things as the acquisition of information, the analysis and interpretation of data, and the management of patient problems (Mast and Davis 1994). This discussion could therefore clearly relate to the testing of academic achievement; in this case, however, that is only relevant in so far as it predicts the performance of individuals in the practice of engineering, and many attempts have been made to do this (e.g. Carter 1992; Nghe et al. 2007; Schalk et al. 2011). Much of the recent concern with the assessment of performance has been driven by politicians seeking accountability in order to try to demonstrate that value is added to the students and the economy by the education they have received (Carter and Heywood 1992), and that is nothing new.

Given that the purpose of professional study is preparation to undertake work in a specific occupation, a major question is the extent to which a curriculum prepares or does not prepare students to function satisfactorily in that occupation. Assessments that make this judgment may be made both at the level of the program and at the level of student learning. In recent years in the United States in engineering education, there has been much focus on the program level for the purpose of accreditation. There is no equivalent for engineering students of the ‘clinic’ used for the assessment of persons in the health professions and teaching, except in cooperative courses (‘sandwich’ in the UK), where students interweave, say, six months of academic study with six months of industrial training throughout a period of four years. In that case, provided they are given jobs appropriate to the professional function, their professional performance can be assessed. This is what this writer has always taken performance-based assessment to mean, a view that has some similarity with that of Norman (1985), who said that competence describes what a person is capable of doing, whereas performance is what a person does in practice. In industrial terminology, it is the equivalent of performance appraisal, where, as in medicine and teaching, there is a long history of research and practice (e.g. Dale and Iles 1992). Not that the hard data of management give one any confidence in its assessment, for as Mintzberg (2009) has written, “how to manage it when you can’t rely on measuring it? […] is one of the inescapable conundrums of management” (p. 159 and chapter). And that is the conundrum of education (Gage 1981).

Engineering educators were largely not bothered by these issues. Engineering was conceived to be the application of science to the solution of practical problems, although it was not assessed as such. Educators sought answers to problems giving a single solution. It is increasingly recognized that assessments should derive from solutions to wicked problems. It is also understood that laboratory and project work contribute to the skills required by industry; the literature abounds with attempts to relate them better to practice (Heywood 2016), although Trevelyan’s (2014) studies suggest there is a long way to go. There is substantial interest in problem-based learning, which has been well researched (Woods et al. 1997). This chapter is focused on engineering practice and its assessment, where there is a need for studies of predictive and construct validity.

Until this decade, engineering educators paid very little attention to professional practice, although this is now being rectified (Williams et al. 2013). Throughout the second half of the twentieth century there was a strong presumption among academics that university examinations predict subsequent behaviour. Equally, at regular intervals industrialists and the organizations that represent industry have complained that graduates, not just engineering graduates but all graduates, are unprepared for work (performance) in industry. To put it another way, the examinations on offer did not have predictive validity for work. What is remarkable is the persistence of these claims over a very long period of time (fifty years), in spite of a lack of knowledge by all the partners of what it is that engineers actually ‘do’. Nevertheless, from time to time during the past sixty years, attempts have been made to find out, either directly or indirectly, what engineers do. But first, can we learn from the past ideas, philosophies if you will, that can enable us to judge where we are in the present? It is the contention of this study that we can.

5.2 The Organization of Professional Work, Temperament and Performance

Studies of the impact of organizational structure on innovation and performance showed that qualified engineers often had poor communication skills. Engineers were required to speak several different “languages” (i.e., of design, of marketing, of production, of people) (Burns and Stalker 1961). Organizational structures were shown to influence attitudes and values, and by implication performance, to the extent of modifying competence, although it was not perceived as such in a study by Barnes (1960). It was posited that professional work organised in relatively open systems was likely to be more productive (innovative) than work organised in relatively closed (hierarchical) systems. The findings continue to have implications for the preparation of engineers for management and leadership roles.

Immediately after the Second World War, university education in the UK was considered suitable for training engineers and technologists for R and D, but it was held that the education and training for industry commonly provided by technical colleges needed to be enhanced by additional engineering science and mathematics, and that the art of engineering developed in industry should be retained and improved (Percy 1945). Apart from additions to content, this would be achieved by sandwich (cooperative) courses developed to provide an alternative degree programme to those offered by the universities. For this purpose nine Colleges of Advanced Technology were established to offer courses for an independently examined Diploma in Technology that would be equivalent to a university degree (Heywood 2014). Assessment was mostly an afterthought, although there were one or two serious attempts to devise assessment strategies for the period of industrial training (Rakowski 1990). In practice there was ‘curriculum drift’, that is, the creation of syllabuses in the mirror image of those offered by universities. Criticisms of these programmes were made by some distinguished industrialists (Bosworth 1963, 1966), and these led to the discussion of at least one alternative model for the education of engineers for manufacturing engineering that was based on recent developments in the educational sciences (Heywood et al. 1966).

Throughout the 1960s the engineering profession was concerned with criticisms that scientists and technologists lacked creativity (Gregory 1963, 1972; Whitfield 1975). There were problems about how creativity should be taught and how it should be assessed. Later, in the US, Felder (1987) listed the kinds of questions that should elicit creative behaviour.

In 1962, an analysis of the end-of-year written examinations (five three-hour papers) taken by first-year mechanical engineering students suggested that they primarily tested one factor (Furneaux 1962). Engineering drawing and coursework tested different things. It was suggested that if examinations were to be improved there was a need for their objectives to be clarified. Furneaux also sought to answer the question: were those who were tense, excitable and highly strung more likely to perform better than those who were phlegmatic, relaxed and apparently well-adjusted? It might be predicted that extraverts would not do as well as introverts, for, apart from anything else, introverts tend to be bookish, and academic studies have as their goal the development of bookish traits. Introverts work hard to be reliable and accurate, but in the extreme they take so much time on the task that they might do badly in examinations. In contrast, extraverts might do an examination quickly, which may be at the expense of reliability.

Furneaux found that the groups most likely to fail university examinations were the stable-extraverts, followed (but at some distance numerically) by the neurotic-extraverts. In this study the neurotic-introverts did best. From this and other data, Furneaux argued that individuals who easily enter into states of high drive are likely to obtain high neuroticism scores; it is this group that is likely to obtain high examination scores. Similarly, persons who have an extraverted disposition and at the same time a low drive level will do badly in examinations. An introvert with high drive will be able to compensate for relatively poor intellectual qualities, whereas good intellectual qualities may not compensate for extraversion and low drive.

The more neurotic students did badly in the engineering drawing paper. Those who were stable did better. This, he argued, was because the task was so complex that optimum drive occurred at a low level. Furneaux found that the most common cause of failure was poor quality drawing which was of a kind that might be due to disturbing influences. Discussion with the examiner led him to believe that supra-optimal drive might have occurred, because there was some evidence of excessive sweating, lack of coordination and faulty judgement.

Irrespective of the fact that this was a small sample, the finding that temperament may influence performance was replicated in other studies in the UK (Malleson 1964; Ryle 1969) and among engineering students in the US (Elton and Rose 1974). Given that this is the case, it must also have a bearing on the way tasks are performed, which brings into question the predictive validity of a single defined competence. It goes some way to explaining Trevelyan’s (2010) finding that young graduates who had entered industry from their programmes preferred solitary work. Team work was seen as splitting the assignment into parts so that each member had something to do, but on his own. Anything that required collaboration was seen as an interruption. Trevelyan reported that he did not see any evidence of collaboration. He argued that the structure of the education they had received contributed to these attitudes, just as the continued reception of PowerPoint lectures diminishes listening skills. But could this desire to be alone be a function of personality? For example, do those who want to work alone have a tendency towards introversion? Yet effective engineering practice, as Tilli and Trevelyan (2008) have shown, requires high-order skill in ‘technical coordination’, which requires collaboration, which in turn requires a high level of interpersonal skill; other studies of practicing engineers confirm this view. Moreover, the finding that communication is a significant factor in the work of engineers is consistent over time. But the teaching of communication skills to engineering students was, in Trevelyan’s experience, not done in terms of social interaction.

Development of skill in social interaction requires a high level of self-awareness and therein is the value of the peer and self-assessment of a person’s performance in interpersonal activities. Early attempts to get students to self-assess in engineering focused on the ability to evaluate work they had done in completing a project.

5.3 A Multiple Strategy Approach to Assessment in Engineering Science and Its Implications for Today

In 1969, partly in response to Furneaux’s criticisms of examinations in engineering, and partly in response to the ideas contained in “The Taxonomy of Educational Objectives” (Bloom et al. 1956), a public (high school) examination in Engineering Science, equivalent to at least first-year university work in four-year programmes, was designed to achieve multiple objectives thought to represent key activities in the work of an engineer (Carter et al. 1986). The examiners had the prior knowledge that enabled them to design a new approach to assessment and the curriculum. Its underlying philosophy was that the ways of thinking of engineers were different to the ways of thinking of scientists, even though engineers required a training that was substantially science based (Edels 1968). It was probably the first course in the UK to state specifically what attitudes and interests a course in engineering science should promote. The examiners believed that while not all the attitudes stated could be directly measured, it was possible to detect them in the way students tackled problems based both on the syllabus content and on coursework. The designers, persuaded of the power of assessment to influence that philosophy and the changes they wished to bring about, attempted to design examinations for the written papers that took into account the constraints within which engineers function in real-life situations, and they radically changed the approach to coursework and its assessment. It was found that the initial requirements overloaded the curriculum and overburdened teachers with assessment, and in consequence had to be reduced.

Eventually students were required to undertake laboratory work related to content, and to keep a record of it in a journal. They had to select two experimental (discovery) investigations on different topics for assessment, and complete a 50-laboratory-hour project. Rubrics were designed, each of which assessed six key ability (competency) domains relevant to the investigations and, separately, to the project. These were preceded by six mastery assessments as appropriate to the investigations or the project. The first rubric was strictly criterion-referenced (dichotomous scales). It was found that both students and teachers had difficulties with some of the items because there were no shades of grey. The key ability domains were then scaled (semi-criterion-referenced) in a way that is now familiar.
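The difficulty the examiners encountered can be illustrated with a small sketch. The domain names, thresholds and marks below are invented for illustration and do not come from the actual scheme; the point is that a dichotomous scale forces an all-or-nothing judgment on each item, whereas a scaled (semi-criterion-referenced) rubric admits the ‘shades of grey’ that students and teachers found missing:

```python
# Hypothetical sketch of the two rubric styles discussed above.
# Domain names and scores are illustrative, not from the actual scheme.

DOMAINS = ["planning", "execution", "analysis",
           "evaluation", "communication", "theory"]

def score_dichotomous(judgments):
    """Strict criterion referencing: each domain is met (1) or not met (0)."""
    return {d: (1 if judgments[d] else 0) for d in DOMAINS}

def score_scaled(judgments, levels=4):
    """Semi-criterion referencing: each domain is rated 0..levels-1,
    allowing partial attainment ('shades of grey')."""
    return {d: max(0, min(levels - 1, judgments[d])) for d in DOMAINS}

# A candidate who partially meets most criteria:
partial = {"planning": 2, "execution": 3, "analysis": 1,
           "evaluation": 2, "communication": 3, "theory": 1}

# Under a dichotomous rubric (threshold at 'fully met', say >= 3),
# most of this partial attainment is invisible:
dichotomous = score_dichotomous({d: v >= 3 for d, v in partial.items()})
scaled = score_scaled(partial)

print(sum(dichotomous.values()))  # 2 of 6 criteria 'met'
print(sum(scaled.values()))       # 12 of 18 scale points earned
```

On the dichotomous scale the weaker candidate registers almost nothing, which is consistent with the observation below that moving to the scaled system elevated the distribution at the lower end and recognized some competence in weaker candidates.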

The examination and coursework assessment procedures exemplify a “balanced” system of assessment of the kind later described by Pellegrino et al. (2001) and Heywood et al. (2007).

Dichotomous scales are not suitable for deriving grades that could be incorporated into a norm-referenced system of scoring, as some critics of the scheme had envisaged. Neither are they necessarily valid. The immediate effect of moving to the new system in the year that followed was to elevate the distribution at the lower end of the scale and so recognize some competence on the part of the weaker candidates. Over a fifteen-year period the late D. T. Kelly found that the coursework component discriminated well between the candidates, and was reasonably consistent from year to year (unpublished documents and Carter et al. 1986).

An analysis of some 100 papers on assessment published in the FIE proceedings between 2006 and 2013 and in the ASEE proceedings between 2009 and 2013 revealed only one study of the validity of competency statements. It seems that there is a temptation to assume that stated outcomes are valid. However, Squires and Cloutier (2011) found, similarly to the engineering scientists, that when the perceptions of instructors and students of the competencies addressed in web-based campus courses in a systems engineering programme were compared, there were considerable discrepancies between them.

That examinations and assessments do not always test what it is thought they should test is also illustrated by the engineering science development. In order to test the skills involved in “planning”, the written examination incorporated a sub-test which described a problem for which the students were required to present a plan for its solution. It was expected that this would cause them to repeat the skills they had learnt while planning their projects; there would, therefore, be a high correlation between the two marks. Unfortunately the Board’s research unit did not provide a correlation between the marks for project planning and evaluation in the project exercise and those for the examination sub-test that was supposed to model these skills.

But the assessors were given the correlations between coursework and the other components of the examination, and the lowest correlation was found to be between these two activities. Each of three repetitions, in consecutive years, produced the same pattern of results, and the factorial analyses suggested that the sub-tests were measuring different things, as was hoped (Heywood and Kelly 1973). At first it was thought that this effect arose because engineering design was not a requirement of the syllabus. However, a decade later, when Sternberg published his triarchic theory of intelligence, another explanation became possible (Sternberg 1985). Sternberg distinguished between three components of intelligence: meta-components, which are processes used in planning, monitoring and decision-making in task performance; performance components, which are processes used in the execution of a task; and knowledge-acquisition components, which are used in learning new information. Each of these components is characterized by three properties (duration, difficulty and probability of execution) which are in principle independent.

It is evident that the project assessment schemes are concerned with the evaluation of meta-components; elsewhere Sternberg appropriately calls them “executive processes”. We can see that a key difference between the project planning exercises and the written sub-test is the time element. The two situations required the student to use different information-processing techniques. The written exercise is a different and new domain of learning for which training is required. In order for the skill to become an old domain, a high level of automatization is required so that the different processes in the meta-component are brought into play more quickly. That is to say, at a certain level they have to become non-executive. The project and the written paper, while demanding the same meta-components, might be regarded as being at different levels on the experiential learning continuum. Some executive processing will always be required at the written-paper level, and it is possible to argue that the task performance, and the stress which it creates, are a more accurate reflection of the everyday activities of executives than the substantive project.

Notwithstanding the validity of this interpretation, more generally these investigations showed that even when criterion and semi-criterion measures appear to have high face validity there is a need to ensure that there is congruence between student and assessor perceptions of the items. The measurement of performance is not as easy as it seems. Similarly, it cannot be assumed that the goals assessors have for multiple-strategy assessments are necessarily being met, even when they too appear to have face validity.
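The empirical check underlying this section, whether two assessment components rank candidates in the same way, can be sketched with a simple correlation computation. The marks below are invented for illustration; as in the study, a correlation near zero between the project marks and the written sub-test marks would suggest the two components tap different processes:

```python
# Illustrative only: invented marks for two assessment components.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two mark lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical marks for eight candidates:
project_planning = [10, 11, 12, 13, 14, 15, 16, 17]
written_subtest  = [15, 11, 12, 16, 14, 10, 11, 15]

r = pearson_r(project_planning, written_subtest)
# A value near zero indicates the two components are not ranking
# candidates the same way, i.e. they appear to measure different things.
print(round(r, 2))  # -0.1
```

A factorial analysis, as used by Heywood and Kelly, goes further than a single coefficient, but the low pairwise correlation is the simplest symptom of the effect described above.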

5.4 The Development of Competency-Based Qualifications in the UK and the US in the 1980s and 1990s

Throughout the 1980s and 1990s in the UK and the US there were developments in the provision of competency-based qualifications. In the UK National Vocational Qualifications were introduced. An NVQ “is a statement of competence clearly relevant to work and intended to facilitate entry into, or progress in, employment and further learning, issued to an individual by a recognized awarding body” (Jessup 1991, p. 15).

“The two aspects of the statement of competence which the definition (above) goes on to add are significant, as is their relationship to each other. The statement leads on “performance”, which is of course central to the concept of competence, and places ‘skills, knowledge and understanding’ as underpinning requirements of such performance. This does not deny the need for knowledge and understanding, but does make clear that they, however necessary, are not the same thing as competence. This position has considerable implications for assessment” (Jessup, p. 16). The assessors of engineering science paid great attention to the student’s theoretical understanding, and some teachers would have said too much. This remains a critical issue, especially when successful projects seem to be based on an inadequate understanding of the theoretical rationale!

Jessup writes: “Assessment may be regarded as the process of collecting evidence and making judgements on whether an individual has demonstrated that he or she can meet the performance criteria for each element of the competence specified” (Jessup, p. 18).

Jessup’s primary thesis was that the outcomes-based approach that he and others had developed was applicable to all forms of learning. The system that emerged had five levels of competence, and it may be argued that the Bologna levels have their origins in this kind of thinking. Among the techniques of assessment discussed at the time were records of achievement, which in the past couple of years have been introduced in UK universities (HEAR 2014).

Tyler’s and Bloom’s approaches were rejected because they chose outcomes that could easily be assessed. However, engineering educators, like medical educators, are unlikely to assume that the things that are defined are the only elements of competence. During this period a report from the Department of Employment included a taxonomy of outcomes for engineering education by Carter (1984, 1985) that embraced both the cognitive and affective domains (Otter 1992).

An alternative to the competency approach was advocated by the Royal Society of Arts (RSA) under the title “Education for Capability”. The basis of the project was action learning as a means of helping students learn how to apply their knowledge and skills. It argued that students should be able to negotiate their programmes of study, should learn through collaborative learning, and should develop skill in reflection by reflecting on their progress. Capability may be assessed by observing whether students are able to take effective action, explain what they are doing, live and work effectively with others, and show that they learn from experience, be it their own or that of others (Stephenson and Weil 1992; Stephenson and Yorke 1998). In contrast to the competency approach, capability is a holistic and broader concept than competency.

In the US, there was a movement to develop ‘standards’ in schools. In essence these were very long lists of outcomes. It was argued that, helpful though they might be, they were too long for teachers to contemplate covering. In 2000, the International Technology Education Association published its Standards for Technological Literacy. There has been interest in recent years in producing a corresponding list for engineering.

In 1989, the UK Employment Department introduced the Enterprise in Higher Education Initiative in universities with the intention that all departments in a university would arrange their curriculum so that it would develop the “personal transferable skills” that were considered to be required by industry. They did not think it was necessary to add bolt-on subjects, and a philosophy of assessment was suggested (Heywood 2005).

In 1992, a report to the US Secretary of Labor argued that the high school curriculum was not equipping students for the world of work (SCANS 1992). To achieve this goal, students were required to develop five workplace competencies: handling resources; interpersonal skills; handling information; thinking in terms of systems; and technology. The SCANS Committee believed that its goals could be achieved by adjusting teaching within the ordinary subjects of the curriculum, and gave examples of how this might be accomplished. A weakness of the model was that it paid little attention to the “practical” areas of the curriculum, such as the arts and crafts and music.

The American College Testing program (ACT) was involved in developing tests for SCANS, and the program attracted the interest of some engineering educators (Christiano and Ramirez 1993). Much earlier, ACT had developed a College Outcome Measures Program (COMP) (ACT 1981; Forrest and Steele 1982) which was designed as a measure of general education. It was designed to help institutions evaluate their curricula and/or design learning activities that would help students obtain the knowledge, skills and attitudes necessary for functioning in adult roles after graduation. These aims go beyond education for work and take into account more general aspects of living (Lotkowski et al. 2004).

Taken together there are remarkable similarities between the objectives of these different programmes and the concept of intelligence as derived from the views of experts and lay people by Sternberg (1985).

5.5 Studying Engineers at Work

Attempts to develop new curricula in engineering were and are criticized because they are based on models of what it is believed engineers do rather than on actual studies of what they do. An early analysis of the work done by engineers in a highly innovative firm in the UK aircraft industry had as its prime purpose the derivation of a general method for determining the objectives of training technologists and technicians (Youngman et al. 1978). A secondary purpose took into account factors such as satisfactory performance on the job and organizational structure, in order to show how work could be structured for training purposes. Fourteen engineering activities and twelve work types were identified. The work types did not match textbook models of the engineering process. Significantly, no traditional manager work type emerged. One interpretation of the data argued that, to some extent, everyone in an engineering function engaged at one level or another in management. It was found that age and job level were more significant variables than educational qualifications in explaining differences in job descriptions. This analysis tended to support the view that as engineers grow older they tend to place increasing reliance on experience and reject the notion that training can be beneficial. It was suggested that over-reliance on experience could impede innovation. A view of the firm as a learning organization was described.

The study did not result in a taxonomy, but The Taxonomy of Educational Objectives was shown to be of value in the analysis of tasks done by managers in a steel plant (W. Humble, cited by Heywood 1970). It showed the importance of the affective domain. A survey by the Engineering Industries Training Board underlined the importance of this domain, since it found that 60 % of technologists’ time was spent in interpersonal activities, thus confirming the importance of interpersonal competence. This finding was repeated in a comparative study of German and British production managers (Hutton and Lawrence 1981). The similarities between the two groups were found to be much greater than the differences. A problem for the German engineers was dealing with critical incidents. A problem for the British engineers was resources. The Germans tended to emphasize the technical aspects of the job, whereas the British emphasized the managerial. In both cases the paradox was of jobs that were fragmented during the day but nevertheless coherent. In contrast to the earlier study, where a relationship between status and morale had been found, no such relationship was found in the British firms studied.

The findings of these studies support the work of more recent authors such as Bucciarelli (1994) and Vincenti (1990), and in particular the studies reported in Williams et al. (2013), to the effect that engineering is far more than the application of science and is a messy activity when compared with the search for truth. It is a social activity, and because of that interpersonal competence is a skill to be highly valued.

5.6 Intellectual, Emotional and Professional Development

It is therefore of some consequence that the curriculum has tended to neglect the affective dimension of behaviour in favour of the cognitive. During the 1980s some engineering educators engaged in discussions about these dimensions. Efforts were made to design curricula that would respond to Perry’s model of intellectual development (Culver and Hackos 1983; Marra et al. 2000; Pavelich and Moore 1996). A curriculum designed to follow the stages of this model should lead students from dependence to independence, where they take responsibility for their own learning and are able to solve ambiguous problems such as are posed by real-life engineering. To achieve this competence, it is argued that students need to be reflective practitioners, but Cowan counsels that Schön, from whom the idea of reflective practice comes, does not take account of “reflection-for-action” (Cowan 2006). Cowan distinguishes between nine levels of reflection. Significantly, he notes that reflection requires the ability to be self-aware, and its key skill is the ability to self-assess; it is this that distinguishes it from analysis, which occupies so much of engineering education.

How engineering subjects are taught matters for the development of professional practice, for such practice depends on judgement, and judgement needs to be reflective. Work is both cognitively and emotionally construed, for which reason it is incumbent on employers and employees to understand how organizations and the individual interact at this level; they could do no better than look at the classic reports of Barnes (1960) and Burns and Stalker (1961). Some engineering educators have discussed critical thinking within the context of reflective practice (Mina et al. 2003). It is safe to conclude that reflective practice and critical thinking are best developed and assessed when they are built into the whole curriculum.

No discussion of the affective domain can ignore the writings on emotional intelligence. Whatever one may think of it as a unitary concept, it is clear that we have to handle its components every day (Culver 1998; Goleman 1994; Bar-On and Parker 2000). Its development can be assisted in both education and training, but it cannot be left to education alone, because education cannot simulate the everyday situations that have to be faced in industry, and its learning is part of the process of development.

Intellectual, emotional and professional development cannot be completed within college alone for a person continues to develop and will do so in response to any situation. For this reason industrialists have as much responsibility for the development of their engineers as do the colleges from which they come, and in these days of rapid turnover have an obligation to help them prepare for their next assignment.

5.7 Other Aspects of Outcomes Assessment

By 2000 the engineering curriculum had come to be, and continues to be, dominated by outcomes approaches to assessment. The Taxonomy of Educational Objectives, despite criticisms of its use in engineering, continues to influence some educators, and its 2001 revision is beginning to be of interest (Anderson et al. 2001). Its use in computer science has been questioned (Highley and Edlin 2009). There have been attempts to analyze questions set in examinations to evaluate the extent to which critical thinking and problem-solving skills were being tested (Jones et al. 2009), but the judgements were based on face validity. That engineers read beyond the subject is demonstrated by a paper that describes the design of a service course using Fink's (2003) taxonomy, which at first sight shows as much concern for the affective domain as it does for the cognitive (Fero 2011).

An attempt to reflect the variability of student performance, indicating what objectives students should attain at three different levels of performance, has been reported by Slamovich and Bowman (2009). Outcomes assessment surveys have come to be used as a means of programme evaluation. An unusual one reported on the use of the MBTI (a personality indicator) to assign students to teams (Barr et al. 2006). A substantial study of entrepreneurially minded engineers that embraced the affective domain compared practicing engineers with students. The practicing engineers demonstrated a different 'footprint': they had lower interpersonal skills, lower creativity, lower goal orientation and lower negotiating skills (Pistrui et al. 2012).

Prior to ABET EC 2000, a study at Iowa State University sought to find out how to assess ability (Brumm et al. 2006). Workplace competencies were defined as the application of knowledge, skills, attitudes, values and behaviour that interact with the task at hand, and the 'key actions' required to demonstrate a competency were described. Among other findings, the authors suggest that from a combination of supervisor and self-assessments an e-career self-management system could be developed. While the study was based on experiential learning, there is no mention of a taxonomy in this area. Neither the taxonomy presented nor the Iowa study makes any mention of the skill of 'reflection'.

A European (University of Vienna) comparison of the competencies valued by employers of computer science graduates with those valued by faculty showed that whereas teachers highly valued analysis and synthesis, basic general knowledge and research skills, employers valued capacity to learn, team competence, ability to work autonomously, problem solving, interpersonal skills and ability to work with new technology (Kabicher and Motschnig-Pitrik 2009). This raises questions of research design, for it might be thought that the acquisition of research skills would necessarily involve problem solving.

European literature shows that, irrespective of the term used, an outcomes approach will generate lists that are common across the globe. One Spanish study reduced 37 core competencies to 5 (Chadha and Nicholls 2006; see also Tovar and Soto 2010). The three Viennese studies recognized the importance of the affective domain in the development of competencies in computer science. In one study, a technique for getting students to share their reflections with each other is described. Students in another study reported how the skills learnt had benefited their private lives as well as their professional lives (Motschnig-Pitrik and Figl 2007). It was found that it was not possible simply to provide team projects and assume that all that could be learnt from them was learnt (Figl and Motschnig-Pitrik 2008). Distinctions were made between specific and generic team task competencies, and between knowledge, attitude and skills competencies. These studies lead to the view that students could benefit from courses in learning-how-to-learn: Motschnig-Pitrik and Figl (2007) suggest that a course in 'soft skills' led to an enhancement of these competencies, as perceived by the students, when functioning in teams.

One of the important findings of an evaluation of a competency-specific engineering portfolio studio in which students selected the competencies they wished to develop was that pedagogies that support individual choice involve a shift in the power dynamics of the classroom (Turns and Sattler 2012).

Related to the concept of the hidden curriculum is the idea of "accidental competencies" promoted by Walther and Radcliffe (2006): attributes that are "not achieved through targeted instruction". It is a concept that has similarities with Eisner's concept of "expressive outcomes".

5.8 Project Work and Teamwork

Project work and teamwork are considered to be effective ways of developing the 'soft' skills (now called professional skills in the US) that industry requires. In addition, carefully selected projects can help students work across the disciplines. It has been found that high levels of interdisciplinarity and integration may contribute to positive learning experiences (Froyd and Ohland 2005). However, it is suggested that many students struggle with collaboration skills. Such skills have a large affective component and are context dependent, being a function of an individual's personality. Communication skills are particularly tested when groups have different perceptions of the problem. Transdisciplinary projects are able to integrate the tools, techniques and methods of a variety of disciplines. Impediments to collaboration include disciplinary prejudice, unwillingness to listen and ask questions, and a lack of shared ideas (Lingard and Barkataki 2011). A key problem that is not fully understood is the level of knowledge required by each partner in the other disciplines involved. "Constructive controversy" has been recommended as a means of creating mutual understanding about a problem area (Johnson and Johnson 2007; Matusovich and Smith 2009). An experimental course based on constructive controversy led to the reminder that the pedagogic reasoning for the use of non-traditional methods of instruction needs to be explained to students (Daniels and Cajander 2010).

It has been argued that teamwork can contribute to the development of innovation skills and creativity. The research reported on the former suggested that heterogeneous teams were not more innovative than homogeneous teams (Fila et al. 2011). One study that used design notebooks found that, contrary to experience, the most efficient use of the creative process was in the production phase and not in the conceptual design phase. In both the areas of innovation and creativity more research is needed (Ekwaro-Osire et al. 2009). Importance has been attached to self- and peer assessment. One study reported that students made only a fair job of self-assessment and that this did not change with time. A voluntary system of self-assessment was used to support a variety of learning styles; those who used the assessment system performed better than those who did not (Kemppaineme and Hein 2008). Another study found that whilst students could distinguish between good, average and poor performance, they could not transfer their views to marking on a standards scale. A great deal of care has to be taken in areas where assessors are likely to disagree, as for example with oral presentations (Wigal 2007). There is increasing use of on-line peer review schedules, and one study reports that writing instruction could be improved by the introduction of a peer-review component (Delson 2012).

The final section asks whether teamwork can be taught. Since participation in teamwork requires the use of a number of skills that can be practiced, the answer is yes. Similarly, if students have some knowledge of learning they may also be helped to improve their participation in team activities. Team activities should be seen as a preparation for industry in the sense that they should enable a person to better understand and evaluate the situation in which he or she finds himself or herself.

5.9 Preparation for Work

The belief that competence is a trait that individuals possess to one degree or another prevails in engineering education. It leads to the view that students can be prepared for work immediately on graduation through the acquisition of specifically stated competencies that can be taught. This belief has been challenged on several occasions. For example, a phenomenological study of engineers at work reported by Sandberg (2000) offers an alternative view of competency. Competency is found to be context dependent and a function of the meaning that work has for the individual involved. Engineers were found to have different perceptions of work, and competencies related to the same task were found to be hierarchically ordered among them, each level being more comprehensive than the previous one. Attributes are developed as a function of work. It follows that they are not fixed; therefore firms will have to undertake training (or professional development) beginning with an understanding of the conception that the engineer has of her/his work. Professional competence should be regarded as reflection in action, or understanding of work or practice, rather than as a body of scientific knowledge.

Little has been known about how engineers utilize the knowledge learnt in their educational programmes at work. A study is reported by Kaplan and Vinck (2013) that affirms previous findings that engineers tend to use off-the-shelf solutions or start with an analogy of an existing solution for a different problem. The same authors noted that engineers switch between scientific and design modes of thinking. Yet another study reported the view that engineers who are contextually competent are better prepared for work in a diverse team.

These studies revealed some important competencies that are rarely discussed, for example the "ability to see another person's point of view". This requires that the teaching of communication skills be done in terms of social interaction. The key skill of "technical coordination" found by Trevelyan depends equally on the possession of communication and interpersonal skills.

Surprisingly, studies of the use of mathematics by engineers show that it too is related to the affective domain and influenced by sociocultural forces (Gould and Devitt 2013). Tacit knowledge is found to be important.

5.10 Conclusion

There are three groups of professionals with whom engineers can be compared: management, medicine and teaching. They differ from engineers in that, in order to receive professional recognition, they have to demonstrate that they can perform the tasks required of them to a satisfactory standard. In the case of doctors and teachers, they receive substantial clinical training during their degree programmes. There is no equivalent in engineering programmes unless they are structured as cooperative courses. Across the world the regulators of the engineering curriculum require an outcomes/competency-based approach to assessment (ABET; Bologna; Engineers Australia; Engineers Ireland; Engineering Council; Tuning).

Studies of engineers at work show the complexity of the activity of engineering. The idea that an engineer should be able to take on a professional role immediately on leaving college, without some prior guided experience of working in industry, is shown to be nonsense. This supports the view that engineering education may better prepare students for work in industry if it is structured on a cooperative basis; but the experience of industry has to be carefully designed. Alternative support for this view is to be found in Blandin's (2011) study of a cooperative course that upgraded technicians to technologist status. He found that within the company the students developed competencies that were specific to their job. This writer takes from that study the view that the interaction between periods of academic study and industrial work helps students to acquire professional competence in engineering in a way that is not available in courses of the traditional kind that have no industrial contact.

It seems clear that a competence is acquired through a developmental process and is subject to the conditions imposed on it by the organisation. This has implications for the way formative assessment is conducted and the way the curriculum is designed.

Since this chapter was written, ABET has made proposals for changing its requirements. These have not been greeted with enthusiasm by many engineering educators, with the effect that a debate is now in progress. At the same time the literature on assessment continues to accumulate (Heywood 2016).

Issues/Question for Reflection

  • The interaction between periods of academic study and industrial work helps students to acquire professional competence in engineering

  • Team activities should be seen as a preparation for industry in that they should enable the person to better understand and evaluate a given situation

  • Should engineering students be given courses in learning and assessment especially self-assessment?

  • What is the best way to assess competence in engineering?

  • Do industrialists have as much responsibility for the development of their engineers as do the colleges from which they come?