
1 Introduction

The teaching of ESP in Tunisia has long been characterized by a lack of professional standards and formal training on the part of the teacher, who is generally deemed responsible for the whole learning and teaching process. Local and regional research studies of ESP in the Arab world, where English is taught as either a foreign or a second language, have pervaded the scene of English Language Teaching (ELT) and continue to prosper in response to the ever-changing demands of national and international businesses. Local studies conducted within the framework of M.A. and Ph.D. research projects into ESP learning and teaching revealed that ESP students need to acquire specific target skills, most notably the receptive skill of reading, and that ESP courses should be geared towards their specific learning needs with respect to both learning and working environments.

Despite the government’s commitment to successive educational reforms, English competence has remained below the desired level, and no formal decisions have been made to revise and enhance the ESP situation. The future of ESP in Tunisia thus appears to be on the wrong path, despite its application across a wide range of advanced academic domains, namely business, economics, the sciences, medicine, and engineering. Notwithstanding the bulk of local research on needs analysis, the fact remains that none of these studies has been extended to inform the development of ESP test content and task types. This chapter addresses this research gap in the Tunisian context.

2 Theoretical Background

The term evaluation has always been associated with any language teaching course, though rarely in the general English context (Hutchinson & Waters, 1987). Yet, in ESP, a goal-oriented undertaking par excellence, there exist two layers of evaluation, relating to the learner and to the course respectively. Dudley-Evans and St. John (1998) advocated that evaluation plays a prominent role within the ESP process, as shown in Fig. 1.

Fig. 1 Stages in the ESP process (Dudley-Evans & St. John, 1998, p. 121)

Figure 1 suggests that the design of the ESP course depends mainly upon needs analysis, which in turn relates closely to evaluation and assessment. The latter two are conceived to mirror the effectiveness or otherwise of the course at large. Hutchinson and Waters (1987) maintain that assessing learner performance and evaluating the ESP course help identify the existing educational gaps and provide feedback on how to repair them, especially when it comes to course objectives and educational needs. Course evaluation is generally conducted by means of needs analysis, which involves all the ESP stakeholders, including the learner. Learner assessment in ESP concerns itself with the level of language proficiency needed to perform successfully in the target situation of language use, whether academic, professional, or vocational.

Specific purpose testing is generally viewed as criterion-referenced, unlike testing for general purposes, which is considered to be norm-referenced. While a norm-referenced test relates to the test taker’s “rank” in relation to others, a criterion-referenced test informs the test taker’s “level of performance” (Bachman, 1989, p. 248). This distinction loosely reflects the more theoretical orientation of general English testing and the more practical orientation of ESP testing. It does not imply, however, that general English test developers ignore the practical outcomes of test scores, nor does it mean that ESP tests lack the required theoretical basis. The fact remains that ESP test content and methods should be derived mainly from an analysis of a target language use (TLU) situation in terms of the tasks that test takers would typically perform.

In fact, the process of test development and validation in an ESP context is fundamentally purpose-based. Accordingly, an ESP teacher might devise a test to place learners according to their background knowledge relative to the ESP course (placement test), to ascertain that what is being taught is actually being learnt (achievement test), or to determine the learner’s level of proficiency with reference to the required target language tasks (proficiency test) (Hutchinson & Waters, 1987). The assessment criteria for the latter, however, are rarely based on a thorough analysis of the target language use situation. Instead, they appear to be primarily language theory-driven (Jacoby & McNamara, 1999), neglecting the specific situations in which the language is to be used.

Administering language tests to a specific group of learners might present test developers with controversial issues, including the specificity of test content as well as test reliability and authenticity. Bachman and Palmer (1996) summarized and translated these issues into a model comprising a set of test qualities accompanied by three principles believed to be the basis for the usefulness of a test. These principles apply mainly to the specificity of the purpose of the test, the test takers, and the testing situation. Douglas (2000) advocated that there exists a continuum of specificity dictated by the intended purpose of a given language test. The question arises as to how specific the test should be, and to what extent it is applicable to all language use situations.

Another issue concerns the interplay between assessing subject-matter content knowledge and/or language background knowledge. With reference to a test for trainee doctors, Bachman and Palmer (1996) opined that in order to maintain the balance between language proficiency and content knowledge, a ‘knowledge test’ is in order. Cognitively speaking, language and non-language ability were believed to be two sides of the same coin, to the extent that they should be viewed as “inextricably linked” (Douglas, 2000, p. 39).

The relation between the language course and assessment has long been raised as a major concern in LSP classrooms. In an attempt to account for the lack of evidence for testing in ESP, Alderson and Waters (1983) referred to the influence that tests exert on teaching as the “washback” effect. They suggested that a test, whether good or bad, is bound to have a definite effect on the teaching process. Hughes (1988, p. 145) further claimed that because ESP is fundamentally needs-based, “teaching for the test (…) became teaching towards the proper objectives of the course”. Washback not only affects the learning and teaching process, but also gives feedback to teachers, course developers, and even learners on how to proceed within the ESP classroom.

Task authenticity has also been discussed as one of the major factors affecting the quality, good or otherwise, of an ESP test. Bachman and Palmer (1996) defined the concept as the extent to which TLU tasks correspond with the test tasks. In other words, in an authentic language test, test takers find themselves engaged in a set of tasks that closely mirror the kind of tasks they would perform in the real-life language use situation. This highlights the critical relevance of needs analysis to evaluation in ESP. A useful test is, by definition, one that abides by the requirements of a specific TLU domain, which is in turn investigated by means of target situation needs analysis. Related to authenticity is test task interactiveness, which reflects the degree of correspondence between the test taker’s personal characteristics and both the test tasks and the TLU tasks. Those characteristics which relate to “the test taker’s language ability (language knowledge and strategic competence, or metacognitive strategies), topical knowledge, and affective schemata are in turn intended to interact with each other to serve the interactiveness of the task” (Bachman & Palmer, 1996, p. 25).

In short, evaluation in ESP is by no means done at random. Several dimensions contribute, whether directly or indirectly, to the effectiveness and utility of the test. In addition to reliability, authenticity, interactiveness, and impact, Bachman and Palmer (1996) identified the notion of construct validity as a defining feature of useful tests. The construct, i.e., the ability to perform specific tasks in the TLU domain, should be evidenced by valid interpretations of test scores and not by the scores themselves. The study will demonstrate how this quality is of paramount importance, especially when it comes to ESP proficiency tests.

The main rationale behind the present research project lies in the fact that no formal needs analysis study has been conducted on the ESP course of the target population. More precisely, it aims to identify and describe the second-year engineering students’ needs and attitudes toward the ESP course content in an attempt to set forth insightful recommendations for a more promising ESP course that best meets the students’ academic and professional ‘expectations’. This goal-oriented study seeks to raise the target students’ awareness of their current ESP learning situation, sensitize them to their language weaknesses, and help them form a clear conception of what they really need to learn in English with regard to the four language skills.

The fundamental research problem upon which the study hinges is the argument that if learners’ needs are not analysed and assessed, the whole teaching process may miss its goal. In the same vein, learners may later find themselves unable to be operational in their field of work. Given that there has been no empirical investigation of the students’ present and target needs, the present research intends to probe the following research questions:

  a. What are the students’ ESP learning needs and target needs?

  b. Do the students’ needs match the content of the ESP course?

  c. What are the implications of analyzing the needs of these learners?

3 Method

3.1 Context of the Study

The present study was carried out at the Higher School of Communication of Tunis (SUP’ COM), an engineering school founded in 1998 and placed under the joint tutelage of the Ministry of Higher Education and the Ministry of Communication Technologies. As far as the distribution of ESP courses along the school’s educational cycle is concerned, they take place over the first three semesters, with a total of 42 hours per semester. In the week preceding the fourth semester, all students take an English language training course comprising thirty hours of practice devoted to the evaluation of students’ communication skills in their future professions.

To this end, the main assessment tests adopted to measure English proficiency at the school were the Test of English as a Foreign Language (TOEFL) and the Test of English for International Communication (TOEIC). SUP’COM was selected among many institutional settings because it was the first Tunisian higher education institution to take the initiative of incorporating such standardized tests, in response to the important role allocated to the English language in the academic and professional success of its students. This perceived priority is also manifested in the collective decision to make the TOEFL previously, and the TOEIC currently, an essential prerequisite for graduates to receive their diploma.

3.2 Participants

Multiple sources were selected as the main participants in the present study: the target students, the ESP teachers, the subject-matter teachers, and the former students. Questionnaires and structured interviews were administered to a random sample with the aim of building as representative a picture as possible. The target students’ questionnaire was filled out by 153 respondents, representing 85 % of the total number of second-year Telecom engineering students (180 students). As for the ESP teachers, all three female English teachers responded to the questionnaire and structured interview items. A random sample of former graduates working for international companies in Tunisia and abroad was also invited to participate in the present study. This sample comprised 50 employees aged between 25 and 29 years, all of them holding the National Diploma in Telecom Engineering. As regards the subject-matter teachers, a questionnaire was sent by e-mail and only 28 out of 53 replied.

3.3 Instruments

The choice of data-gathering instruments was fundamentally influenced by Mackay (1978), who stipulated that there are two basic research tools for the collection of data, namely the questionnaire and the structured interview. These methods were described by Long (2005) as “deductive” measures whereby the researcher logically arrives at reliable conclusions about learners’ needs. They proved highly beneficial in eliciting both qualitative and quantitative information for the Target Situation Analysis and the Present Situation Analysis alike. Decisions about the design and the content of the questionnaire rested mainly upon Hutchinson and Waters’ (1987) theoretical framework for analysing both learning and target needs, which draws heavily on Munby’s (1978) model, together with McDonough and McDonough’s (1997) taxonomy of question types. The questions fell into six main types, namely factual, yes/no, multiple-choice, ranked, scaled, and open-ended. In keeping with the general purposes of the study, each question and sub-question served a specific purpose on the basis of the different “variables” under assessment (Oppenheim, 1992).
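By way of illustration only, the sketch below shows one minimal way in which questionnaire items of the six types named above might be represented for subsequent coding; the example wordings and options are hypothetical and do not reproduce the actual instrument used in this study.

```python
# Illustrative sketch only: a possible representation of questionnaire items
# by the six question types named above. The example items and options are
# hypothetical and do not reproduce the actual instrument used in the study.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    text: str                                          # question wording
    qtype: str                                         # one of the six question types
    options: List[str] = field(default_factory=list)   # response options, if any

QUESTION_TYPES = {"factual", "yes/no", "multiple-choice",
                  "ranked", "scaled", "open-ended"}

sample_items = [
    Item("How many years have you studied English?", "factual"),
    Item("Do you use English outside the classroom?", "yes/no", ["yes", "no"]),
    Item("Which skill do you find most difficult?", "multiple-choice",
         ["listening", "speaking", "reading", "writing"]),
    Item("Rank the four skills by importance for your future job.", "ranked",
         ["listening", "speaking", "reading", "writing"]),
    Item("How important is English for your studies?", "scaled",
         ["not important", "somewhat important", "very important"]),
    Item("What do you expect from the ESP course?", "open-ended"),
]

# Simple sanity check: every item uses one of the six declared question types.
assert all(item.qtype in QUESTION_TYPES for item in sample_items)
```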

4 Data Analysis

For the purposes of triangulation, a combination of qualitative and quantitative research methods was used to collect and analyse data about the sample population. The rationale behind collecting qualitative data was to search for patterns in the respondents’ perspectives, perceptions, opinions, and explanations regarding specific issues. This data was mainly gathered through open-ended questions, the answers to which came in a textual format that lent itself to thematic categorization. Such data was present in both the questionnaires and the structured interviews and was analysed using inductive reasoning, whereby results were observed and presented in graphs and tables. Quantitative data was also used to serve statistical purposes.

The data that can be measured and counted was mainly collected by means of closed-ended questions in questionnaire and structured interview formats, with the aim of producing reliable and generalizable results. The latter were deductively analysed using descriptive and inferential statistics. The analysis process started with manually coding the responses to each questionnaire and interview item with the help of the statistical package SPSS 20, which was also used to generate frequencies and percentages for each coded item across the multiple human sources involved in this study.
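For illustration only (the study itself relied on SPSS 20), the minimal Python sketch below mimics the frequency-and-percentage tabulation described above for a single coded closed-ended item; the item, its codes, and the response values are hypothetical.

```python
# Illustration only: the study used SPSS 20. This sketch mimics the same
# frequency/percentage step for one coded closed-ended item in Python.
# The item, codes, and responses below are hypothetical.
from collections import Counter

# Hypothetical coded responses to one closed-ended questionnaire item,
# e.g. self-rated level of English (1 = weak, 2 = medium, 3 = good).
responses = [3, 2, 3, 1, 2, 3, 3, 2, 1, 3, 2, 3]
labels = {1: "weak", 2: "medium", 3: "good"}

counts = Counter(responses)
total = len(responses)

print(f"{'Level':<8}{'Frequency':>10}{'Percent':>10}")
for code in sorted(counts):
    freq = counts[code]
    pct = 100 * freq / total
    print(f"{labels[code]:<8}{freq:>10}{pct:>9.1f} %")
```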

5 Results and Discussion

Overall, the analysis of the findings related to the perceived level of English supported the view that the second-year Telecommunications engineering students at SUP’COM need to develop their general proficiency in English. This conclusion was reached in view of the apparent discrepancy between the students’ actual level as perceived by their ESP teachers and by the students themselves, on the one hand, and the target level as perceived by the graduates, on the other. Not only was there a discrepancy between the students (59 %), who reported that they are “good” at English, and the ESP teachers, who all indicated that the vast majority of the learners have a “medium” level of English, but also between the ESP teachers and the graduates (72 %), who believed they enjoy a “good” level. What strengthened the graduates’ belief was their reported TOEIC/TOEFL scores. This would imply that at least a “good” level of proficiency in English is required by potential employers in the target working situation. It thus suggests that the students need to improve their current level of English, which would in turn maximize their chances of securing their prospective jobs.

In general terms, there was a consensus among all the participants in the present study on the need for a combination of both specific and general English. However, it is interesting to notice that 80 % of the graduates reported that they had learnt “general English”, and that one ESP teacher commented that her preference for teaching both kinds of English applied only to the first-year English program, whereas the second-year TOEIC preparation sessions could not be described as either general or specific. These two findings would suggest, as concluded before, that the ESP course is either not specific enough or too general considering the requirements of English use in the target situation.

When asked whether the ESP course met their needs as future engineers, half of the graduates disagreed. It is also worth noting that 34.4 % of the students expressed dissatisfaction with the extent to which the course meets their needs as future engineers. It follows that both students and graduates are aware that their classroom learning needs in English differ from their needs in the workplace, and that both seem crucial to the successful completion of their studies and their future professional career. This claim was further illustrated by the 78 % of graduates who, in response to the question about the importance of English for their current job, reported that the language was “very important”.

Bearing in mind that the ESP course in focus is classroom-based English language training, the ESP teachers took responsibility for implementing the TOEIC as one of the popular assessment tools for measuring the proficiency students require in the target situation. In this respect, it seemed odd to find that all the participants in the present study acknowledged the necessity and importance of the test as one of the learners’ job requirements, except for 64 % of the graduates, who indicated that the test did not constitute a necessary step towards success in their current job. The graduates’ opinion therefore needs to be checked against the corresponding point of view of potential employers, and the inability to do so presented one of the major limitations faced by the researcher during this study.

In conclusion, the findings revealed that the target engineering students were expected to be engaged in real-world job tasks where both accuracy and fluency are highly required. This conclusion was further emphasized when both students and graduates indicated that, with regard to their expectations of the ESP course, they mostly expected more “speaking” in the first place and more “specific vocabulary” in the second place. It follows that the telecommunications engineering profession instrumentally requires English as an employability skill, as was clearly illustrated by the subject-matter and ESP teachers’ emphasis on “practice” and “communication” when asked what students should do in order to be operational in their field of work.

6 Implications and Recommendations

Similar to the centrality of needs analysis is the importance of evaluating learners’ performance and the effectiveness or otherwise of the ESP course. Learner assessment and course evaluation are considered two sides of the same coin: both are meant to contribute to the process of satisfying students’ needs and meeting the course objectives. As for learner assessment, Hutchinson and Waters (1987) distinguished between three types of assessment characteristic of ESP, viz. placement, achievement, and proficiency tests. The ESP course offered to the second-year telecommunications students at SUP’COM opens a window of opportunity for them to receive intensive language training with the aim of obtaining the TOEIC certificate as a passport to success in their professional life. This test can be classified as both a proficiency and an achievement test, since it aims to measure the learner’s progress until s/he demonstrates his/her ability to meet the demands of the workplace by obtaining the required score.

Yet, what seemed to be neglected by the ESP teachers was the importance of the placement test as a measurement tool that helps specify the learners’ initial background knowledge. This neglect could be explained by the ESP teachers’ suggestion to work with homogeneous mini-groups instead of simply dividing the total number of students among the three available teachers. Course evaluation, by contrast, was by far the form of evaluation that attracted most of the ESP teachers’ attention. As the present study revealed, the ESP teachers were aware of their students’ learning and target needs. This awareness generated their collective decision to substitute the more purpose-oriented TOEIC for the Test of English as a Foreign Language (TOEFL). In general, specific purpose language testing has been a subject of considerable debate, particularly in relation to general purpose language testing. Notwithstanding this debate, the fact remains that ESP tests differ from those of General English (GE) in that they should be language-specific, content-related, and, most importantly, purpose-based.

In an attempt to promote the practice of ESP testing at SUP’COM in particular, and in any educational institution in general, a number of suggestions and recommendations are offered based on the main conclusions drawn from this large-scale research project. First, it is highly recommended that ESP testing be based first and foremost on well-founded needs analysis. In view of the significant role English plays in the target work situation, the learners’ real-world needs, especially their communication needs, should be given more prominence and priority. ESP testing should reflect the way English is likely to be used in terms of activities, tasks, modes, channels, identity of communicators, setting of communication, and the like.

Second, as far as proficiency tests are concerned, there should be crystal-clear purposes for which test developers administer the test and test takers sit for it. Raising awareness of the intended purpose of the test helps maximize its benefits, both in terms of the learners’ involvement in and responsibility for their own learning and in terms of the teachers’ appreciation of test scores. This would also affect the content, the required level of proficiency, and the value of the test.

Finally, and importantly, testing in ESP should be viewed as a two-fold procedure, i.e., assessing the learners’ ability and providing feedback accordingly. Test scores or outcomes should be analysed regularly and systematically in order to ensure test validity and reliability. This, in turn, needs to be done in view of the changing demands of the international job market as well as the individual value judgments of the key stakeholders in ESP practice.

7 Conclusion

The purpose of this chapter is twofold. It seeks to establish the link between ESP evaluation and needs analysis on the one hand, and between language testing and language teaching on the other. The main concern of ESP has always been the use of language, not the language per se. The teaching of ESP has proved to involve much more than adapting the course content to the specific purposes and communicative needs of the learners. Rather, it entails a variety of evaluation procedures to ascertain the learners’ ability to perform predetermined communicative tasks in the target situation. Language testing in ESP is also a matching process, meant to offer feedback on both the performance of the learners and the effectiveness of the course. The main argument relates to the fact that ESP assessment abides by specific criteria that should be derived from an analysis of the TLU context, most notably authenticity and reliability.