Introduction

The German higher education system is currently facing several challenges, especially with regard to the introduction of the bachelor/master degree system, the shift from input-oriented steering mechanisms and curriculum design to outcome orientation, the development of higher education institutions toward lifelong learning organizations, the improvement of (international) competitiveness, and the insufficient number of higher education graduates. At the same time, the focal issues of the past decades continue to exist: dropout, inequality, mismatch between resources and number of students, increasing heterogeneity of new entrants into higher education, and the relation between higher education and work.

Although several empirical studies addressing these topics have been carried out or are currently being conducted and although the amount of research on higher education students and graduates is quite impressive, some essential research desiderata remain: First, while competence modeling and assessment have become common in school education, they are rare in higher education. Second, longitudinal data that track a cohort of first-year students and follow them through the course of studies until their entry into employment do not exist. Consequently, it has not yet been possible to observe the decision processes and educational developments of higher education students from a longitudinal perspective.

Within the conceptual and measurement framework of the National Educational Panel Study (NEPS), the study “Higher Education and the Transition to Work” (stage 7) addresses research questions that are highly relevant to higher education research and policies. It aims to generate new knowledge about the process of competence acquisition in higher education and the early occupational career of graduates in terms of the interplay between different competence domains and the impact of learning environments located at various levels and endowed with different properties. The determinants of study success, transitions, and educational decisions during the course of studies, and the occupational outcomes of higher education graduates are also of crucial interest. In this context, special emphasis is given to the role that competencies, credentials, and social as well as individual characteristics play for higher education outcomes and educational selectivity.

The study design and the survey and test program make it possible to address research questions that have not yet been answered satisfactorily. In Sect. 17.2, we outline these research questions and describe special features of the survey and test program. A major part of the data collection is, of course, identical or comparable with that of other stages of the NEPS. However, because the higher education stage examines topics particularly relevant to the higher education system, we also develop and employ cohort-specific instruments.

The higher education stage tracks a cohort of new entrants into higher education through their education and beyond. Sect. 17.3 summarizes the sample characteristics (for details, see Chap. 4, this volume) and describes the scheduling, frequency, and modes of data collection.

Finally, in Sect. 17.4, we discuss some problems encountered during our work and give an outlook on future work to be done.

Main research issues

The key research areas of the higher education stage center on the overall questions of the NEPS. These concern competence acquisition and competence development in formal and nonformal/informal learning environments; educational decisions and transitions, along with their determinants and consequences; and monetary as well as nonmonetary returns to education.

As regards competencies, the study “Higher Education and the Transition to Work” will not just cover those domains that are assessed in all educational stages of the NEPS and, therefore, constitute a common core of competence assessment (e.g., reading and listening comprehension, mathematical literacy, aspects of metacognition, and social competencies; see Chap. 5, this volume, for more details). It will also collect data on subject-specific competencies (see Sect. 17.2.1). The advantage of assessing different competence domains separately lies in the possibility of analyzing the interplay and relationships between them and answering questions such as: How important is mathematical literacy for building up subject-specific competencies? To what extent do different competencies contribute to academic success? What constitutes professional action competence? And to what extent do the level of acquired competencies, credentials, social origins, social and cultural capital, and personality traits influence the transition to work and professional success?

Of course, the question regarding which factors facilitate competence acquisition is a key issue for the NEPS, too. In this regard, the learning environment is of particular interest (see Chap. 6, this volume, and Sect. 17.2.2). The central research question here, however, is not just concerned with the relevance of different learning environments for competence development. An equally important research issue refers to the impact of contextual conditions on educational decisions and career development. Because the formal learning environment is particularly amenable to policy interventions, the NEPS pays special attention to this context dimension. It is, however, also argued that many competencies are more likely to be acquired outside formal learning processes, for example, in the context of civic engagement and practical work experiences. Therefore, it is necessary to include these learning environments and systematically examine what role nonformal/informal and formal learning environments actually play.

Concerning the key issues of NEPS pillar 3 and pillar 4, that is, educational and career decisions and their determinants (especially a migration background in the case of pillar 4), the study “Higher Education and the Transition to Work” focuses on dropping out, entering a master’s program, starting a dissertation, and entering employment (see also Chaps. 7 and 8, this volume; another important decision—the transition to tertiary education—will be primarily addressed in NEPS stage 5). Dropout remains a major concern of higher education institutions and policy. Therefore, it will be an important topic for the higher education stage. Although it is true that research on dropping out is not scarce, the NEPS opens up new research perspectives, because it offers the unique opportunity to observe the student departure process in a longitudinal perspective over a long period of time and to test alternative theoretical assumptions about the process and determinants of dropping out (see Sect. 17.2.3).

According to Astin’s (1973) taxonomy of higher education outcomes (as cited in Pascarella and Terenzini 2005, pp. 6–7), basically four different clusters of outcomes can be distinguished: (a) cognitive-psychological (e.g., subject-matter knowledge); (b) cognitive-behavioral (e.g., educational and occupational attainment); (c) affective-psychological (e.g., attitudes); and (d) affective-behavioral (e.g., civic involvement, health behavior). As described in Chap. 5 and Chap. 9 in this volume, each of these outcome areas will be addressed in the NEPS. In stage 7, special emphasis is placed on labor market returns—both economic (income) and noneconomic (e.g., risk of unemployment, match between education and job). In this regard, it is particularly interesting to analyze not only the impact of different competencies on labor market outcomes but also the significance of credentials and social as well as personal characteristics. Of course, other nonmonetary benefits of higher education such as civic behavior, subjective well-being, and the state of health are not ignored either (see Chap. 9, this volume). These benefits, however, are interesting not only from the returns perspective but also as an important learning opportunity (e.g., civic involvement) and as an individual characteristic (health, satisfaction) that may influence competence development, educational attainment, and educational and career decisions.

Apart from these research areas, stage 7 of the NEPS pays special attention to particular groups of students that have previously been neglected in higher education research or are of special interest to educational policy: By oversampling students at private institutions of higher education (see Chap. 4, this volume, and Sect. 17.3.1) and comparing them with students at state-run institutions, the NEPS will yield the first-ever results on the social selectivity of private institutions, the quality of learning and instruction, and the outcomes and consequences of private higher education. By including the entire population of first-year students without a school-leaving certificate qualifying them for higher education (so-called “nontraditional students”; Schuetze and Wolter 2003), it will be possible to advance the state of research on this group, especially with respect to their prerequisites, needs, and educational and occupational careers. By oversampling teacher training students, the NEPS will be able to provide detailed large-scale data on what is considered to be a key profession for the quality of school education.

Some of the above-mentioned research areas, namely competencies, learning environments, and educational decisions—especially dropout—will be discussed in more detail in the sections that follow.

Competencies in higher education and their measurement

One of the main goals of the NEPS is to assess the development of competencies over the life course. As explained in Chap. 5 in this volume, the following competencies will be measured in all educational stages: domain-general cognitive functions (e.g., fluid intelligence), domain-specific cognitive competencies (i.e., competencies that are domain-specific in early educational stages but become cross-curricular in later life, e.g., mathematical literacy), metacompetencies (e.g., procedural and declarative metacognition), and social competencies. Apart from these competencies, the higher education stage also addresses subject-specific competencies, that is, competencies that refer to a particular field of study. On the one hand, we shall use self-report instruments that are applicable to the whole sample of higher education students. On the other hand, we shall employ a test of subject-specific competencies for students of business administration, which will be administered online at the end of the study program. In future cohorts of the NEPS, we shall include tests for additional subject areas. The instruments for measuring subject-specific competencies are being developed by the higher education stage.

Self-assessment of subject-specific competencies

The decision to also collect self-report data on disciplinary competencies of higher education students is based on several considerations: First, whereas tests of the domain-specific knowledge, skills, and competencies of higher education students hardly exist in Germany and still have to be developed, self-assessment instruments are more common. Existing questionnaires, therefore, can serve as an orientation and as a basis for constructing a suitable instrument. Second, self-report instruments are relatively economical in terms of administration time and effort and can be administered to large samples at low cost. Third, although self-assessments are criticized for being unreliable and invalid, several studies have found a systematic correlation between self-rated competencies and alternative measures of the same construct (see, for an overview and references, Braun et al. 2008). Fourth, when self-report data on competencies and data from achievement tests are collected simultaneously, it is possible to test the validity of the self-assessment instrument.

For the NEPS, a self-report questionnaire on discipline-specific competencies of higher education students has to meet several requirements. First, it should be based on a theoretical model of competence and competence development. Second, because it is intended to employ the instrument across various fields of study, it should also not be too specific and must be applicable to many subject areas. Third, the questionnaire should pertain to the study program as a whole and not to single courses.

Because none of the existing questionnaires satisfies all conditions, a new instrument tailored to the specific needs of the NEPS has to be developed. As the theoretical basis, we chose Bloom’s (1956) taxonomy of educational objectives in the revised version by Anderson and Krathwohl (2001). This taxonomy generally describes cognitive learning outcomes of higher education students in various subject areas. It distinguishes different levels of complexity and, within these levels, different cognitive processes. And because the taxonomy is often used to describe learning outcomes of modules and study programs in Germany, it ensures the curricular validity of the newly constructed instrument.

For item development, we used established instruments like the “Berlin Evaluation Instrument for Self-Reported Student Competencies—BEvaKomp” (cf. Braun 2007; Braun et al. 2008) and the “QS2,” a questionnaire developed for the evaluation of the master’s program “Human Factors” at the Technical University of Berlin (cf. Steinbach 2009). We selected appropriate items from both instruments, modified some of them, and added new ones in order to cover all dimensions of the taxonomy. To ensure content validity and applicability of the items across fields of study, we conducted interviews with lecturers and professors in various subject areas and at different types of higher education institutions. These revealed that the questionnaire is, by and large, appropriate for many disciplines (e.g., humanities, social sciences, mathematics, physics, and computer science) but does not adequately reflect desired learning outcomes in arts, design, and architecture. Thus, the questionnaire has to be modified for creative fields of study. The revised questionnaire will then be thoroughly examined in cognitive interviews in order to explore how students perceive and interpret questions and answers, to identify potential problems, and to improve the instrument. The final step will be to test the items with a larger sample of students in an online survey.

Test of subject-specific competencies in business administration

Subject-specific competencies in tertiary education are considered to be learning outcomes that result from studying a particular subject. Although their acquisition is influenced—to a varying degree—by domain-general cognitive abilities and domain-specific cognitive competencies like mathematical literacy and reading comprehension, such subject-specific competencies possess further properties: They are based on the particular disciplinary and methodological content constituting the core elements of a subject, and they consist of an integrated set of knowledge, abilities, skills, and attitudes that correspond with and fit into typical professional contexts and demands.

The first field of study to be included in the testing of subject-specific competencies is business administration. This choice is based on the quantitative importance of this subject area, the comparatively advanced implementation of the Bologna Process, and the expectation that a consensus on core competencies can be achieved. Because there is still no well-evaluated assessment instrument available that allows testing across all types of higher education institutions (universities and equivalent higher education institutions, universities of applied sciences, and private higher education institutions), a new test has to be developed.

The test is designed for application at the end of the bachelor study program and focuses on the assessment of cognitive competencies for at least three reasons: First, an adequate description of subject-specific competencies without an explicit consideration of the cognitive dimension, especially different types of knowledge, is probably impossible (Achtenhagen and Baethge 2007). Second, by distinguishing cognitive and noncognitive competence components systematically and assessing them separately, it becomes possible to examine their interplay. Third, although several studies and expert interviews have pointed to the importance of specialized personal, social, and systemic competencies, and these “key” competencies are increasingly viewed as a desirable or even necessary outcome of higher education (see, e.g., the German Qualification Framework for Higher Education Degrees; cf. Gehmlich 2009), the chosen administration mode of online testing makes it difficult to measure noncognitive dimensions.

When developing a competence test for higher education students, one has to decide whether the test should refer to learning outcomes as defined by the curriculum, whether the instrument should also take into account those competencies that are required in the labor market and defined by employers (which are not necessarily identical with curricular objectives), or whether both perspectives are to be covered. With the aim of ensuring curricular validity of the test, we chose a curriculum-oriented approach. We therefore conducted interviews with experts in the field of business administration and economics, and analyzed the descriptions of study programs and modules. The explorative interviews showed the importance of subject-specifically shaped generic competencies such as social and personal competencies (e.g., combined in “leadership competence”) as well as systemic competencies (especially problem-solving abilities). However, according to the experts, the possibilities of teaching these competencies—alongside or together with specialized knowledge—are very limited within a bachelor program.

The descriptions of study programs and modules were used to identify subject domains that should and could be covered by the test. To this end, the curricular information was categorized according to the classification of business administration into six functional areas proposed by Haunerdinger and Probst (2006). In the first analysis of 26 obligatory parts of study programs from 18 universities and 8 universities of applied sciences, it was possible to identify essential core curricular components common to most study programs. This core curriculum involves “accounting,” “marketing,” “finance and investment,” and “management and organization,” and it makes up nearly 30% of the whole study program of the examined courses. Because of this finding and as a result of discussions with experts in the fields of business administration and test development, we decided to primarily assess declarative and procedural knowledge mainly in the three domains “accounting,” “marketing,” and “finance and investment.”

In order to construct test items, we reviewed existing instruments, for example, the “Business Administration Knowledge Test” (BAKT; cf. Bothe et al. 2005) and the “Wirtschaftskundlicher Bildungstest” (WBT; Test of Economic Literacy; cf. Beck et al. 2001), and collected examples of written tests from different higher education institutions. The latter will be analyzed to find commonalities across institutions regarding task content, task format, and competencies required to solve the tasks (e.g., declarative knowledge or knowledge in so-called complementary sciences such as mathematics or statistics). Subsequently, item prototypes will be developed and then sent to experts, who will evaluate the task difficulties and modify the items according to specified criteria. The resulting items will be improved by cognitive interviewing, followed by developmental studies and a pilot study in 2013. In 2014, the final instrument will be administered as an online test in order to reach all students of business administration in the sample—regardless of whether they are studying abroad or have finished their bachelor study course.

The results of this test in business administration will enable us to analyze the interrelation with generic competencies. Later on, when the target persons have completed their degree course and entered the labor market, the relevance of subject-specific competencies for “employability” and labor market outcomes can be analyzed.

Learning environments: conceptualization and measurement

Alongside the formal learning environment of higher education, nonformal/informal learning environments are relevant factors for competence development (cf. Erpenbeck and Sauer 2001). In the higher education stage of the NEPS, practical work experiences—be they study-related or not, be they internships or student employment—are of special relevance, because most students in Germany are engaged in some kind of employment (cf. Isserstedt et al. 2010). The conceptualization of practical work experiences—and other nonformal/informal learning environments—is based on the same grounds as the formal learning environment (see below and Chap. 6, this volume). We decided, however, to focus on the “challenge” dimension and to use Hackman and Oldham’s (1975) job characteristics model and existing questionnaires based on this model as a starting point for item selection and adaptation. Our approach can be justified because the job characteristics model captures the quality of learning opportunities, which, in turn, is linked with competence development (cf. Richter and Wardanjan 2000; Wieland 2004).

The main emphasis of this section, however, lies on higher education institutions as formal learning environments that can be viewed as a focal point for students’ competence development (cf. Cabrera et al. 2001).

Research on learning and instruction has identified four key dimensions of the quality of education in schools: structure, support, challenge, and orientation (cf. Klieme et al. 2006; Radisch et al. 2008). In the NEPS, these dimensions are referred to as SSCO and serve as the theoretical basis for measuring the process quality of any learning environment (see Chap. 6, this volume).

As in other educational settings, the structural dimension in higher education refers to the degree of clarity, clear and transparent organization, stability, and safety of learning opportunities (e.g., rules, learning conditions, study requirements, and expectations). Support involves helping students to develop competencies, to gain a certain degree of autonomy, and to cope with study requirements or social integration. Challenge primarily means cognitive activation. This is achieved by, for example, demanding, open tasks, but it also requires an adequate structuring of the instructional process. Finally, the dimension of orientation refers not only to existing norms and values of the teaching staff and students, but also to the self-image of a higher education institution, a department, or a study program. In the context of higher education, this dimension includes, for example, practice orientation, research orientation, and the emphasis placed on internationalization. Apart from these aspects of the process quality of learning and instruction, structural characteristics or general conditions of the learning environment—for example, the student-teacher ratio in a study program or the provision of libraries, books, and ICT facilities—also have to be taken into account (cf. Radisch et al. 2008).

As regards the institutional context of learning in higher education, we adopt a multilayer perspective on the learning environment (cf. Dippelhofer-Stiem 1983; Fend 2008; Wosnitza 2007): (a) the immediate learning context (courses), (b) the study program, (c) the department, and (d) the higher education institution as a whole. Although the NEPS will collect data on courses taken by students, the information will not refer to single courses but will give a generalized description of courses offered by the study program in a given period of time. As a consequence, the lowest level and smallest unit of analysis is the study program and not the courses. Although it is known that when measuring educational quality, it is preferable to look at courses or even instructional sequences separately, we had to choose a more general perspective referring to all courses taken. However, we follow Radisch et al. (2007) and hold the view that this approach yields an overall measure of the quality of learning opportunities in a study program.

The formal learning environment of higher education is embedded in a broader local, regional, and societal environment that is characterized by, among other things, specific economic structures and employment prospects. In general, these contextual levels are assumed to be less relevant to educational decisions and competence development during higher education, but they belong to those factors that shape the transition from higher education to work and the subsequent occupational career. Although the SSCO model will be applied at all levels of the formal learning environment—albeit with a varying emphasis on different dimensions—the broader context will be captured only with regard to structural characteristics.

According to the “opportunity-use model” proposed by Fend (2008), the quality of learning opportunities is only one side of the coin. The other side is how students perceive the learning environment and how they use learning opportunities. This “user side” includes both qualitative and quantitative aspects. The former refer to individual characteristics of the students such as cognitive competencies, motivation, and interests. The latter relate to the time spent on studying, including time devoted to self-learning, which forms a significant part of the time use of higher education students in Germany (cf. Isserstedt et al. 2010). The “opportunity-use model” adds another layer to the multilayered structure of learning environments: the individual, who is located at the lowest level and whose characteristics will be measured in the NEPS (for the measurement of motivation, interests, and personality, see Chap. 10, this volume).

As regards data collection, the primary source of information on the learning environment will be the students themselves. Apart from the limited feasibility of other approaches, there are two main reasons for this subjective perspective: On the one hand, it is not only the “objective” learning environment that matters, but also students’ perceptions that influence competence development and educational decisions. In this respect, the learning environment as perceived by the individual can partly be considered an individual characteristic. On the other hand, it follows from several studies that it is possible to obtain a relatively unbiased and valid picture of the learning environment and the quality of teaching and learning by averaging individual answers across courses (cf. Klieme and Rakoczy 2003; Teichler et al. 1987). The deviation from the mean can then be interpreted as an individual characteristic of the students. In addition to the information gathered from the target persons, we shall analyze documents that are easily accessible (Internet, databases, statistics) and can be examined with comparably little effort.
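The aggregation logic described above—averaging individual ratings to characterize a learning environment and treating each student's deviation from that mean as an individual characteristic—can be sketched as follows. This is a minimal illustration with invented data and function names, not part of the NEPS instruments:

```python
# Hypothetical sketch: aggregate students' learning-environment ratings
# per study program (group-level quality measure) and compute each
# student's deviation from the program mean (individual characteristic).
from collections import defaultdict


def aggregate_ratings(ratings):
    """ratings: list of (study_program, student_id, rating) tuples.

    Returns (program_means, individual_deviations), where
    program_means maps each study program to its mean rating and
    individual_deviations maps (program, student) to rating - mean.
    """
    by_program = defaultdict(list)
    for program, _, value in ratings:
        by_program[program].append(value)
    means = {p: sum(v) / len(v) for p, v in by_program.items()}
    deviations = {(p, s): r - means[p] for p, s, r in ratings}
    return means, deviations


# Invented example data: three students rating their study programs.
ratings = [
    ("BA Business", "s1", 4.0),
    ("BA Business", "s2", 3.0),
    ("BA Sociology", "s3", 5.0),
]
means, devs = aggregate_ratings(ratings)
```

Here the program mean would serve as the contextual measure of process quality, while the deviation captures how an individual student's perception differs from that of fellow students in the same program.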

There is no questionnaire that covers the SSCO model completely and that is tailored to learning environments in higher education. Therefore, we reviewed existing survey instruments used in higher education, selected and adapted items, and also constructed new items. The resulting questionnaire was tested in a developmental study (see Sect. 17.3.3). Confirmatory factor analysis showed that all dimensions of the SSCO model can be found empirically and that, at the same time, the number of items can be reduced by 50%.

Educational decisions: dropout as an example

Many studies in sociological educational research explain differences in educational outcomes by rational decision processes (cf. Boudon 1974; Breen and Goldthorpe 1997; Erikson and Jonsson 1996). However, crucial parts of decision processes in higher education in Germany still remain a black box because the available data are not sufficiently detailed in these respects. In contrast to the assumption that actors decide rationally on their further educational career, some authors emphasize that individuals are instead influenced unconsciously by class-related norms, values, and beliefs (cf. Gambetta 1987).

The higher education stage of the NEPS intends to collect data that will enable researchers to apply and test both approaches. This implies a careful measurement of, on the one hand, aspirations and attitudes toward specific educational options and, on the other hand, core concepts such as perceived costs, returns, subjective probabilities of success, the time horizon, and the motive of status maintenance (for details, see Chap. 7, this volume). Preconditions that limit rationality will also be taken into account, for example, the degree to which a person is informed about different educational options. Because higher education students with a migration background may well be less well informed about the German educational system, this might be one of the reasons for differences in educational decisions (cf. Kristen et al. 2008).

Our study focuses on transitions that can be considered to be crucial for inequality in higher education because they have a strong impact on the further career. One of these transitions is the decision to leave the higher education system before graduation. Although more than one-fifth of a cohort of first-year students drop out of higher education in Germany (cf. Heublein et al. 2009), important questions on the mechanisms of dropout still remain to be studied with nationwide representative longitudinal data. Furthermore, a pressing question in educational research is how the far-reaching structural changes triggered by the Bologna Process will affect future dropout rates among higher education students.

Alongside a rational choice approach to dropout, we decided to also collect data on students’ social and academic integration. According to Tinto (1975), the level of integration and the degree of congruence between the individual and her or his social environment (“social integration”) influence the inclination of a student to drop out of higher education. In other words, the more students participate in peer group associations or extracurricular activities and interact with the faculty and administrative staff, the higher the likelihood that they will continue their studies. Academic integration pertains to the identification with the norms of the academic system. The hypothesis is that the lower the level of achievement and compliance with the norms of the academic system, the more a student is prone to drop out.

The decision to also measure social and academic integration was made for several reasons: First, these concepts include aspects that are not equally accounted for in rational choice models of educational decisions. Second, we would like to provide the opportunity to test—for the first time in Germany—Tinto’s widely adopted conceptualization of dropout versus rational choice approaches on the basis of a nationally representative sample. Third, the panel design of the NEPS study renders it possible to model dropout as a process in time and to apply Tinto’s approach to longitudinal data.

As there is no standard measure for social and academic integration available in Germany, we adapted the “Student Adaptation to College Questionnaire” (SACQ; Baker and Siryk 1999), an instrument based on Tinto’s conceptualization of social and academic integration that is often applied in the United States. As alternative measures, we chose different standard questionnaires covering subdimensions of social and academic integration, for example, “academic commitment” (Grässmann et al. 1998) or the subscale “social integration” of the SMILE questionnaire (cf. Schiefele et al. 2002). The selected and translated SACQ items were first tested in cognitive interviews and then revised before being included in a developmental study together with the aforementioned alternative instruments. A validity analysis suggested that it was best not to rely on the SACQ scales, but to use other measures that proved to be more reliable and valid. A final test of the once again revised questionnaire on social and academic integration will take place within the pilot study (see Sect. 17.3.3).

Study design

The higher education stage of the NEPS longitudinally follows a cohort of randomly selected new entrants to higher education during their student days and beyond. Data are collected two or three times a year using different modes of data collection. The following sections briefly comment on selected sample characteristics and discuss the modes of data collection and instrument development.

Specific features of the sample

As described in Chap. 4 in this volume, the target population of NEPS stage 7 consists of new entrants into higher education who enroll at a German institution of higher education (universities and equivalent institutions, colleges of art and music, and universities of applied sciences) for the first time and study for their first degree in the winter term 2010/2011. Within the randomly drawn sample (for details of the sampling strategy, see Chap. 4, this volume), students at state-approved private higher education institutions and teacher training students are oversampled. The disproportional sampling of these two groups is based on the following considerations: (1) In Germany, the private sector of higher education is still small, albeit expanding. So far, the issue of private higher education institutions has not been addressed systematically and is widely ignored in higher education research (cf. Sperlich 2008). As a consequence, much is assumed but little is known about the social selectivity of private institutions, the quality of their learning opportunities, and the outcomes and consequences of private higher education. By oversampling students at private institutions, the NEPS will be able to close this research gap. (2) The quality of schools and, consequently, the competencies of the teaching staff and the training of teachers continue to be a central topic not only in educational research but also in educational policy. It is true that research on teachers and teacher training has experienced a renaissance in the last decade (cf. Rothland and Terhart 2009) and that several studies are under way to examine, for example, the development of teachers’ professional competencies. The NEPS, nonetheless, offers a unique opportunity to add results from a large-scale panel study to the existing body of research. The oversampling of teacher training students makes it possible to distinguish between students preparing to teach at different school levels.
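Disproportional sampling of this kind is typically corrected at the analysis stage with design weights, the inverse of each stratum’s sampling fraction. The sketch below shows the principle; the stratum definitions and all population and sample counts are hypothetical illustrations, not actual NEPS figures.

```python
# Sketch: design weights correcting for the oversampling of students at
# private institutions and of teacher training students.

def design_weight(population_size: int, sample_size: int) -> float:
    """Inverse sampling fraction N_h / n_h for one stratum."""
    return population_size / sample_size

strata = {
    # stratum: (population N_h, sample n_h) -- invented numbers
    "public, non-teacher-training": (350_000, 10_000),
    "teacher training (oversampled)": (45_000, 3_000),
    "private institutions (oversampled)": (15_000, 2_000),
}

weights = {name: design_weight(N, n) for name, (N, n) in strata.items()}
for name, w in weights.items():
    print(f"{name}: weight = {w:.1f}")
```

Oversampled strata receive smaller weights, so descriptive estimates for the full student population are not distorted by the deliberately inflated subsamples.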

Apart from this random sample, the entire population of first-year students without a school-leaving certificate qualifying for higher education (so-called “nontraditional students”; cf. Schuetze and Wolter 2003) is included in the NEPS. Widening access and allowing nonconventional routes to higher education are topical issues in higher education policies, and increasing efforts are being made to open up access to higher education for those who do not possess a qualifying school certificate. Although this “third route to higher education” has been broadened, the number of students who decide to take it is still small. And although much attention is being paid to nontraditional students, little research has been done so far, and little is known about the experiences, prerequisites, and educational and occupational careers of this group. The full survey of nontraditional first-year students will contribute to closing this research gap.

Modes of data collection

As already mentioned, several modes of data collection will be employed: self-administered questionnaires, computer-assisted telephone interviewing, online surveys, group-administered tests in classroom settings, and online tests. The decision to rely more heavily on Web-based assessment and surveying adds another innovative element to the NEPS and has, inter alia, the advantage that the growing group of internationally mobile panel members can be surveyed more easily.

Concerning the frequency and timing of the panel waves, two, and at most three, short surveys or tests will take place every year. The reasons for choosing this survey design are manifold: On the one hand, the maximum acceptable length of an online survey is considerably shorter than that of a paper questionnaire or a personal interview; we therefore had to split up the questionnaire. On the other hand, those parts of the survey program that are identical in several stages of the NEPS should be administered in the same mode in order to avoid mode effects and ensure comparability. We therefore use telephone interviews to measure these “cross-phase” constructs and online surveys to measure stage-specific constructs. In addition, the online mode is not equally suited to all question formats and measurement instruments. Finally, by conducting more but shorter interviews and surveys, we expect to make participation in the panel study more attractive and thereby reduce panel attrition.

The NEPS is implementing several measures to obtain high participation in the first panel wave and to reduce panel dropout. Following a method experiment that tested alternative ways of establishing first contact with the target persons, we decided to pursue a combined approach to recruiting participants. On the one hand, we shall carry out a conventional mail survey: The invitation to participate in the NEPS, a short questionnaire, and reminder letters will be mailed by the administrative units of the higher education institutions. On the other hand, we shall personally invite the target persons and employ group-administered questionnaires in courses targeted at or mandatory for first-year students. Whereas the first method ensures that everybody has the chance to participate, the second method yields higher response rates. A pilot study conducted in 2009 confirmed the advantages of combining mail surveys and surveys in the classroom.

Instrument development

The survey instruments are being selected and developed in a joint effort by the pillars and stages of the NEPS. If valid and suitable instruments do not exist, new questionnaires have to be constructed and validated. The higher education stage, for example, needs to measure the constructs of academic integration and social integration that are central to Tinto’s model of student dropout (cf. Tinto 1975; see also Sect. 17.2.3). Because the available instruments are too long to be used in the NEPS and, in addition, are not tailored to the German higher education system, we had to select items and modify them. Another example is the measurement of the formal learning environment (see Sect. 17.2.2).

In these and other cases, we proceeded as follows: On theoretical and empirical grounds, we constructed a first version of the instrument. We then conducted cognitive interviews (cf. Willis 2005) to gain insight into how students understand and interpret questions and answer codes and how they reach their answers. In a third step, a revised version of the instrument was administered to a subsample of students who agreed to participate in online surveys carried out by the HIS-Institute for Research on Higher Education and to give information on current topics of higher education research and policy at regular intervals over a longer period of time (“HISBUS online panel”). Up to now, four of these “developmental studies” have been carried out, with sample sizes varying between 615 and 803.

Another element of quality assurance and enhancement introduced into the NEPS are the “pilot studies.” These studies are conducted with a small sample of the target population one year before the main data collection starts and are intended to evaluate the feasibility and design of the study.

First experiences, problems, and consequences

By and large, the goals set for the first steps of instrument construction and data collection were achieved. However, the pilot study and new developments revealed problems that had, or still have, to be solved.

Our study, for example, aims to account for the diversity of higher education institutions and students. Therefore, the measurement of the formal learning environment has to be applicable to different types of higher education institutions and study programs. In the case of correspondence courses, this proved to be particularly challenging. Because suitable instruments do not exist, a productive cooperation with distance higher education institutions has been established.

According to our pilot study, the chosen approach is, in general, feasible and successful. It was, however, not easy to convince higher education institutions to participate in the study. Because their assistance is necessary, extensive efforts were made to contact every single institution personally, provide tailor-made information, and find ways of overcoming obstacles to participation. In combination with the support of important stakeholders in the higher education system, these measures resulted in a satisfactory participation rate: Almost 75% of all higher education institutions finally agreed to take part in the study.

As regards response rates, the approach to recruiting students proved to be promising (see Sect. 17.3.2). For the main survey, we additionally started an information campaign in the media used by students. Participation in the competence test and online surveys, however, did not meet our expectations in the pilot study. As a consequence, we, on the one hand, rearranged the schedule of the tests to avoid an overlap with examination periods. On the other hand, we realized that the number and length of the online surveys must be reduced. In the pilot study, two online surveys were conducted in quick succession, which turned out to place too high a burden on the participants. Therefore, in the main study, the two online surveys will be combined and the total survey length will be reduced.

Overall, the pilot study showed that the measurements were valid and reliable, yielded data of high quality, and were received positively by the participants. Hence, we are confident that we shall be able to deliver high-quality data to the scientific community, covering a broad variety of research questions and pressing issues in higher education research.