1 Introduction

There is an increasing demand for managing engineering programs through competencies. By the mid-1990s, the Accreditation Board for Engineering and Technology (ABET) recognized that the international challenge of competitiveness was in part a problem of competencies in engineering education and adopted a new evaluation approach built around the concept of student outcomes (SOs) [1]. The importance of this approach is increasingly recognized, even in today’s digital transformation era. The purpose is to evaluate what students have learned instead of what they were taught [5].

This paper focuses on ABET’s continuous improvement criterion, which states: “The program must regularly use appropriate, documented processes for assessing and evaluating the extent to which SOs are being attained. The results of these evaluations must be systematically utilized as input for the continuous improvement of the program” [1]. The continuous improvement process should be designed to evaluate both PEOs (Program Educational Objectives) and SOs (Student Outcomes). PEOs are “broad statements that describe what graduates are expected to attain within a few years after graduation”, while SOs are defined as “what students are expected to know and be able to do by the time of graduation” [1]. ABET recently released a new version of the SOs for the 2019−20 accreditation cycle. According to ABET, the SOs are outcomes (1) through (7), listed below, plus any additional outcomes articulated by the program. Since this new SO version is very recent, this paper presents a pioneering approach.

  • SO 1: An ability to identify, formulate and solve complex engineering problems by applying principles of engineering, science, and mathematics.

  • SO 2: An ability to apply engineering design to produce solutions that meet specified needs with consideration of public health, safety, and welfare, as well as global, cultural, social, environmental, and economic factors.

  • SO 3: An ability to communicate effectively with a range of audiences.

  • SO 4: An ability to recognize ethical and professional responsibilities in engineering situations and make informed judgments, which must consider the impact of engineering solutions in global, economic, environmental, and societal contexts.

  • SO 5: An ability to function effectively on a team whose members together provide leadership, create a collaborative and inclusive environment, establish goals, plan tasks, and meet objectives.

  • SO 6: An ability to develop and conduct appropriate experimentation, analyze and interpret data, and use engineering judgment to draw conclusions.

  • SO 7: An ability to acquire and apply new knowledge as needed, using appropriate learning strategies.

The determination of where, how, and when to assess SOs must be defined by each engineering program individually [5], and few papers have explored the process of ABET accreditation assessment [3, 5]. Awoniyi [2] presents a template that can be used to organize efforts to satisfy ABET EC 2000 requirements, focusing mainly on Criteria 2 and 3. Felder and Brent [6] also focus on assessment criteria, but add a further contribution by clarifying the differences among important concepts such as objectives, outcomes, and indicators. McGourty et al. [8] present a more comprehensive approach, proposing a five-step process to assess a program and turn it into a model of continuous improvement. Other authors describe their own accreditation experiences: Lohmann [7] describes the practice at Georgia Tech, and Schachterle [14] reports the implementation case at Worcester Polytechnic Institute.

Despite these papers exploring ABET accreditation assessment, the challenges of establishing such a process remain unclear. A point of attention is the need to consider both the ABET requirements and the particular institutional context. Moreover, no publication proposes a model suited to Brazilian institutions in engineering education.

To address this gap, this paper proposes a continuous improvement framework for engineering education that includes a Performance Measurement System (PMS). We propose an eleven-step systematic process to develop an integrated assessment of engineering programs. The procedural framework takes external and internal requirements into account and is based on an in-depth bibliographic review, which is not the focus of this paper.

The Industrial Engineering (IE) Program at PUCPR, in line with its efforts to improve and maintain the quality of engineering education, initiated external evaluations towards accreditation by ABET. The proposed framework is tested in the IE Program at PUCPR, in Brazil. The study follows a qualitative approach based on action research, since the authors are involved in the development and testing of the proposed framework. Action research is the methodology used in projects in which practitioners seek to effect transformations in their own practices.

2 Proposed Continuous Improvement Framework

This paper uses Platts and Gregory’s [10] model as the basis for a framework to continuously improve an engineering program. These authors propose a tool to audit the manufacturing strategy formulation process, suggesting a series of steps, supported by worksheets (WS), for a manufacturing audit within strategy formulation. Such steps are used as a reference to develop the proposed framework, which seeks to meet ABET’s continuous improvement requirement. Table 1 shows Platts and Gregory’s propositions in the first two columns and their equivalents in the proposed framework in the remaining columns.

Table 1 Proposed framework steps

2.1 Framework Steps

Steps presented in Table 1 are also coherent with the DMAIC cycle and allow the development of an integrated assessment of engineering programs (see Fig. 1). The implementation at PUCPR is described in each step and lessons learned are shared. PUCPR’s IE Program, in Curitiba, Brazil, has around 600 students and started its activities in 1998. The program began applying this continuous improvement framework in the first semester of 2017.

Fig. 1 Continuous improvement processes of the Industrial Engineering program at PUCPR

Step 1—Identify market view of competences. PEOs must reflect the needs of the program’s various stakeholders [1], which is why gathering a broad range of external views is included in this framework step. This step covers the collection of specific views concerning the professional market and the requirements for an Industrial Engineer, and should be carried out every three years. It is of primary importance, since students can only be developed according to market needs if market expectations are well known.

At PUCPR, structured interviews and surveys were conducted in this phase with IE professionals, seeking to reflect the needs of the program’s various stakeholders. Respondents were asked to list the technical knowledge, abilities, and behavioral factors desired in an Industrial Engineer. In 2016, 17 interviews were carried out with professionals from industry, including alumni who graduated between 2006 and 2015. In 2018, a survey was conducted to cover a larger number of respondents: 869 alumni were invited, and 83 responses were obtained, a participation rate of about 10%.

Step 2—Define/Review competitive professional profiles (PEOs). This step concerns the establishment of PEOs, which are a way to declare external expectations and need to derive from the institution’s vision. Based on the results of the previous step, PEOs were written at PUCPR and validated with PUCPR’s IE faculty. The establishment of an Industry Advisory Board (IAB) is also considered in this phase to discuss program structuring, always aiming to stay aligned with external demands. The IAB is composed of faculty and market professionals from different companies. PUCPR’s IE Program holds an IAB meeting twice a year as part of the process of understanding the program’s various stakeholders’ needs.

The first PEO declaration proposal was discussed and validated by the IAB in October 2017. Once validated, the timeframe for alumni to achieve the PEOs is between 3 and 5 years after graduation. The first version of the PUCPR PEOs is as follows:

  • PEO 1: Enhanced organizational performance through assertive decision-making in projects and operations management.

  • PEO 2: Created value for stakeholders by promoting innovative solutions (product, process and technology) or by solving complex problems.

  • PEO 3: Performed as a transformer of the existing reality, in an ethical and sustainable way, striving for continuous education.

  • PEO 4: Led and motivated multidisciplinary team members by communicating assertively and appropriately for the context.

Step 3—Identify requirements for competences. Program SOs and PEOs must be coherent with a set of internal and external requirements. This is context-driven and depends on each university. Elements such as the strategic vision and internal and external political requirements should be considered. In the case of PUCPR’s IE Program, there are internal requirements from the university’s pedagogical department, and external requirements from MEC (the Brazilian Ministry of Education) and the Brazilian Board of Engineers (CONFEA/CREA).

Step 4—Define Program SOs. ABET establishes a reference model for SO definition, as it prescribes a well-known list of expected SOs. The set of PEOs drives the assessment process; therefore, it is important to ensure completeness between PEOs and SOs. Based on the program characteristics, the PEOs, and ABET’s (1)−(7) SO recommendations, the SOs must be defined in this step. In the proposed model, the SOs play the role of the ‘competitive criteria’ of [10]. The SOs of PUCPR’s IE Program have the same descriptions as suggested by ABET. The relationships between PEOs and SOs are as follows: PEO 1 supports SOs 1, 2, 4, 6, and 7; PEO 2 supports SOs 1, 2, 4, 6, and 7; PEO 3 supports all SOs; PEO 4 contributes to SOs 3, 5, and 7.
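As an illustration of this completeness check, the following Python sketch (hypothetical tooling, not part of the program’s official process) encodes the PEO–SO relationships stated above and verifies that every SO is supported by at least one PEO.

```python
# Minimal sketch: PEO-to-SO relationships as stated in the text,
# checked for completeness (every SO 1-7 supported by at least one PEO).
# This is illustrative tooling only, not the program's actual procedure.

PEO_TO_SOS = {
    "PEO 1": {1, 2, 4, 6, 7},
    "PEO 2": {1, 2, 4, 6, 7},
    "PEO 3": {1, 2, 3, 4, 5, 6, 7},  # PEO 3 supports all SOs
    "PEO 4": {3, 5, 7},
}

ALL_SOS = set(range(1, 8))

covered = set().union(*PEO_TO_SOS.values())
uncovered = ALL_SOS - covered
if uncovered:
    print(f"SOs not supported by any PEO: {sorted(uncovered)}")
else:
    print("Every SO is supported by at least one PEO.")
```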

Step 5—Develop a Performance Measurement System (PMS). According to the 2018−2019 ABET criteria, the extent to which student outcomes are being attained needs to be evaluated and documented. This can be accomplished through direct and indirect measurement processes.

Indirect assessment is an evaluation obtained without directly observing student work. This kind of assessment is important for specific cases, especially professional skills, which are difficult to evaluate with traditional direct assessment methods [13]. Direct assessment can be obtained from in-class exams, written lab reports, national standard tests, and performance evaluations in oral presentations. Examples of indirect assessment include student perception surveys, graduate school placement rates, employer or alumni surveys, and senior exit interviews [5].

Direct assessment is an evaluation performed directly on student work and can be compiled through well-defined indicators. According to ABET, an indicator is what faculty look for in student performance to have confidence that, by the end of the program, students can demonstrate the learning outcome.

At PUCPR, the evaluation of SOs attainment is accomplished through direct and indirect measurement processes, as detailed in Table 2.

Table 2 SOs measurement

The first is performed through Performance Criteria (PC) evaluations, and the second through a set of surveys. The senior student survey seeks to capture students’ perception of the level of SO development and their satisfaction with the program, and should be conducted with last-semester students by the time of graduation. PUCPR’s IE Program conducted its first PC evaluation in the first semester of 2017 and is now in its fourth measurement cycle. The evaluation through the PCs encompasses the design of the PMS, in which steps are proposed to develop a PMS coherent with the context of measuring performance in engineering education.

The definition of high-level PCs associated with each SO is included in this step. It is important to guarantee that the set of PCs covers the intention of each SO. Each SO should be associated with two or more PCs describing the characteristics, skills, knowledge, attitudes, and/or values that students must exhibit to demonstrate the achievement of the SO. To ensure completeness in the PC definition, Pettigrew et al.’s [12] framework was used as a foundation to define the indicators. Accordingly, each SO has PCs regarding context, content, and process. The context can be both external and internal: the former refers to the economic, political, and competitive environment in which an organization operates, while the internal context refers to structure, corporate culture, and politics. Content is the area of transformation under examination, such as technology, manpower, products, geographical positioning, or organizational culture; it concerns objectives and assumptions, targets, and evaluations. Finally, process refers to the actions, reactions, and interactions of the various interested parties as they seek to move the firm from its present to its future state [11].
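As a hypothetical illustration of this requirement, the following Python sketch tags each PC with its SO and Pettigrew dimension and flags SOs that have fewer than two PCs or a missing dimension. The entries are invented for the example and are not the program’s actual PCs.

```python
# Minimal sketch: check that each SO has at least two PCs and that the
# three Pettigrew dimensions (context, content, process) are covered.
# PC entries below are illustrative only.

from collections import defaultdict

pcs = [
    {"so": 1, "dimension": "context"},
    {"so": 1, "dimension": "content"},
    {"so": 1, "dimension": "process"},
    {"so": 2, "dimension": "content"},
]

by_so = defaultdict(list)
for pc in pcs:
    by_so[pc["so"]].append(pc["dimension"])

for so, dims in sorted(by_so.items()):
    issues = []
    if len(dims) < 2:
        issues.append("fewer than two PCs")
    missing = {"context", "content", "process"} - set(dims)
    if missing:
        issues.append(f"missing dimensions: {sorted(missing)}")
    print(f"SO {so}: {'; '.join(issues) if issues else 'ok'}")
```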

At PUCPR, PCs were defined after meetings involving all faculty. Examples of the PCs defined for PUCPR’s IE Program are presented in Table 3.

Table 3 PUCPR’s IE Program Performance Criteria (PC) for SO 1

PCs are mainly assessed in courses, and the evaluation is conducted by each responsible faculty member. It is recommended that PCs of the same SO be evaluated in different courses. To obtain an overview of the courses that can measure each PC, the development of a correlation matrix was suggested, indicating the level at which each course can develop each PC. Three levels of contribution can be used, for example.

PUCPR’s IE Program faculty were invited to participate in the process of mapping the correlation between SOs and PCs through a survey. The correlation levels used for each course in a PC were introduce, reinforce, and emphasize. Each faculty member attributed the level of correlation for the courses they felt comfortable analyzing. Only program-specific courses were considered in the mapping, as these are easier to manage by faculty under the leadership of the program. The result of this mapping made it possible to select the courses able to measure each PC. Furthermore, the mapping provided a holistic view of SO development, making it possible to know at which stage an SO is developed and thus contributing to defining the requirements for each semester.
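The following Python sketch illustrates, with invented course and PC names, how such a correlation matrix can be recorded and how courses at the highest contribution level can be short-listed to measure each PC. It is an assumption about possible tooling, not the program’s actual procedure or selection rule.

```python
# Minimal sketch: course-PC correlation matrix with the three levels
# "introduce", "reinforce", "emphasize". Courses at the "emphasize" level
# are short-listed as candidates to measure each PC (illustrative rule).

correlation = {
    ("Operations Research", "PC 1.1"): "emphasize",
    ("Calculus I", "PC 1.1"): "introduce",
    ("Production Planning", "PC 1.2"): "reinforce",
    ("Simulation", "PC 1.2"): "emphasize",
}

candidates = {}
for (course, pc), level in correlation.items():
    if level == "emphasize":
        candidates.setdefault(pc, []).append(course)

for pc, courses in sorted(candidates.items()):
    print(f"{pc}: measurable in {', '.join(courses)}")
```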

The Performance Measure Record Sheet was then developed to formalize the PC standards. The sheet is based on Neely et al. [9], who proposed a performance measure record sheet summarizing the literature on what constitutes a good performance measure. Each responsible faculty member of PUCPR’s IE Program must detail and document the measurement strategy for the respective PC through the ‘Specific PC Standards Sheet’. A template can be seen in Fig. 2.

Fig. 2 Specific PC standards sheet
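Purely as an illustration, such a sheet can be represented as a simple data structure. The field names below are assumed from Neely et al.’s record sheet and need not match the exact fields of PUCPR’s template shown in Fig. 2; all values are hypothetical.

```python
# Minimal sketch of a Specific PC Standards Sheet as a data structure.
# Field names are assumptions inspired by Neely et al.'s record sheet,
# not the exact fields of the PUCPR template; values are hypothetical.

from dataclasses import dataclass

@dataclass
class PCStandardsSheet:
    pc_id: str                 # e.g. "PC 1.1" (hypothetical identifier)
    related_so: int            # SO the criterion supports
    purpose: str               # why the PC is measured
    formula: str               # how attainment is computed from student work
    target: str                # expected performance level
    frequency: str             # e.g. "every semester"
    measured_in: str           # course(s) where the PC is assessed
    responsible_faculty: str   # who measures and documents the results
    data_source: str           # exams, reports, projects, etc.
    acts_on_results: str       # who uses the results for improvement actions

sheet = PCStandardsSheet(
    pc_id="PC 1.1", related_so=1,
    purpose="Assess formulation of complex engineering problems",
    formula="Share of students scoring at least 70% in the rubric item",
    target="At least 75% of students at or above the expected level",
    frequency="every semester", measured_in="Operations Research",
    responsible_faculty="Course instructor", data_source="Final project rubric",
    acts_on_results="Continuous improvement group",
)
print(sheet.pc_id, "->", sheet.target)
```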

Steps 6 and 7—Direct and Indirect Assessment Evaluation. A simplified sample of measurement results is presented in Fig. 3, which represents the direct assessment report. As mentioned before, there is also the senior student survey, an indirect assessment process that avoids bias in the results by bringing in another evaluation perspective: the student view. The senior student survey collects students’ opinions about the contribution of PUCPR’s IE Program to the development of each SO, and also captures students’ satisfaction and employability data.

Fig. 3 Sample of SO evaluation through PC
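To illustrate how PC-level results can be rolled up into an SO-level view in the spirit of the report in Fig. 3, the following Python sketch uses hypothetical scores and an illustrative 70% threshold; neither reflects the program’s actual data or attainment rules.

```python
# Minimal sketch: direct-assessment results per PC aggregated into an
# SO-level indicator. Scores and the 70% threshold are illustrative only.

pc_results = {          # fraction of students meeting each PC target
    "PC 1.1": 0.82,
    "PC 1.2": 0.64,
    "PC 1.3": 0.78,
}
THRESHOLD = 0.70

so1_attainment = sum(pc_results.values()) / len(pc_results)
print(f"SO 1 attainment (mean over PCs): {so1_attainment:.0%}")
for pc, score in pc_results.items():
    flag = "below target" if score < THRESHOLD else "ok"
    print(f"  {pc}: {score:.0%} ({flag})")
```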

Steps 8 and 9—List Opportunities and Threats/List Existing Practices—Causes. This is an analytical step that summarizes the results of direct and indirect measurement as opportunities or threats, which is important for avoiding threats and exploring opportunities within action plans. It is vital to recognize results lower than expected and to investigate their causes. A well-developed root cause analysis is of primary importance for a consistent action plan and should be performed in this step. A continuous improvement group can be established in this phase. Based on direct and indirect measurement results, PUCPR’s IE Program defined priorities for action. An annual meeting with faculty is organized to discuss results and define priorities for action. A root cause analysis is conducted on the weak points selected by faculty.
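As one simple, hypothetical way to support this prioritization, weak points can be ranked by the gap between the target and the measured result before the root cause analysis; the sketch below uses illustrative figures only and is not the program’s documented method.

```python
# Minimal sketch: rank weak points by their gap to the target as a simple
# prioritization aid before root cause analysis. Figures are illustrative.

results = {"PC 1.2": 0.64, "PC 3.1": 0.58, "PC 5.2": 0.69}
target = 0.70

gaps = {pc: target - score for pc, score in results.items() if score < target}
for pc, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{pc}: {gap:.0%} below target")
```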

Steps 10 and 11—Develop Improvement and Corrective Actions/Follow Actions. Actions should be established considering the analysis in Step 9. This is the key step to stimulate continuous improvement. The action plan introduces alternatives to address poor results. Additionally, this phase includes the daily management of planned actions and results to guarantee continuous improvement; it is important to check the execution and outcomes of the undertaken actions.

An action plan is established to improve the results of PUCPR’s IE Program. Additionally, whenever a weak point is identified, an improvement plan is also required. PUCPR’s IE Program has developed an improvement procedure that encompasses the steps needed to sustain the process in the long term. The established actions must be implemented, and it is the responsibility of the program’s leadership to ensure that they are carried out.

The eleven steps presented, in this sequence, form a continuous improvement process. The stages need to be performed regularly. To keep the process feasible, different frequencies are suggested for each step, since some processes are more demanding; an effective assessment tool should require low faculty effort to develop, administer, and maintain [3]. Steps 1−4 can be performed every 3 years, whereas Steps 5−11 need to be performed every semester, both to collect data from a considerable number of students and to implement improvement actions more dynamically.
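Purely as an illustration, this suggested cadence can be encoded as a simple schedule that a program could adapt; the structure below is an assumption, not part of the framework’s documentation.

```python
# Minimal sketch: suggested review frequencies for each group of steps,
# expressed as a simple, adaptable schedule (assumed encoding).

REVIEW_CYCLE = {
    "Steps 1-4 (market view, PEOs, requirements, SOs)": "every 3 years",
    "Steps 5-11 (PMS, measurement, analysis, actions)": "every semester",
}
for scope, frequency in REVIEW_CYCLE.items():
    print(f"{scope}: {frequency}")
```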

3 Conclusion

This paper attains its objective of proposing a process to continuously improve performance in the context of engineering education. The developed framework needs to be implemented with faculty support; it is therefore important to make the process easy to use within the faculty’s routine. PUCPR’s IE Program has a continuous improvement procedure that documents every criterion in greater detail.

There are some opportunities to improve the presented framework. It is recommended to expand the market view, collecting a broader overview of market requirements for an IE program. The number of interviews can be increased and other data collection methods applied to map the alumni profile more accurately. The application of a survey is suggested to obtain opinions from more stakeholders; the CDIO questionnaire can be used as the basis for this survey [4]. It is recommended to conduct this survey with alumni, market professionals, and faculty.

As future work, the need to evaluate the consistency of the proposed model is pointed out. It is believed that, to be effective, the process of measuring and improving SOs must be coherent with external requirements, such as those of regulatory institutions like MEC (the Brazilian Ministry of Education) and the Brazilian Board of Engineers (CONFEA/CREA), and must reflect internal needs expressed by the strategic vision of the educational institution.