Introduction

Assessment in education has long intrigued practitioners, academics, and researchers (Kabilan and Khan 2012). How to raise educational achievement has been debated in education policy, and it is widely agreed that one important factor determining student achievement is teacher quality (Azam and Kingdon 2015). Increasing attention has therefore been focused on an outcome-based approach which holds that teachers' effectiveness is reflected in their students' achievements. For a long time, the major tool in teacher evaluation was student assessment of teaching through teacher evaluation questionnaires, and it remains the sole tool in many countries around the globe (Marsh et al. 2009). Feistauer and Richter's (2016) study of the reliability of student evaluation of teachers indicated that the major variance in student evaluations depends on students' perception of good teaching practice (Renaud and Murray 2005) and on student–student and student–teacher interaction; the type of course also moderates students' evaluation of teacher competences (Gillmore 1977). They therefore suggest that student evaluations of teachers be interpreted with caution.

Alternative forms of assessment have gained recognition for documenting improvement, and they have become more valued and more feasible with the introduction of information and communication technology (ICT) into education. These include summative assessment, which aims at determining whether predetermined objectives have been achieved so that certification can be granted, and formative assessment, which aims at improving learning through evaluation accompanied by feedback and scaffolding in interaction and by fine-tuning teaching and learning acts (Gikandi et al. 2011; Hargreaves 2007; Llamas-Nistal et al. 2013; Tarighat and Khodabakhsh 2016). The distinction between formative and summative assessment rests on a process-oriented versus a product-oriented approach towards assessment (Hargreaves 2007). To operationalize the key concepts of formative assessment, related concepts have gained importance: portfolio assessment, which research supports as nurturing higher-order thinking, reflection, responsiveness to individual needs and strengths, and variety, all of which are absent in summative assessment (Yurdabakan and Erdogan 2009); and peer assessment, which requires student teachers to evaluate their colleagues' performance against criteria of excellence and to fine-tune their comments to their colleagues' zone of proximal development (ZPD) (Tseng and Tsai 2007).

ICT can provide a medium in which the concepts related to formative assessment can be reinforced. Various studies have investigated how ICT supports evaluation and assessment, and what they share is the introduction of computer-aided assessment tools that use objective tests such as cloze or true/false items and give teachers and students a digital report of correct answers (Siozos et al. 2009). What is missing from them is an interactive platform through which examinees' true knowledge and skills can be assessed in an authentic way (Huff and Sireci 2001). These studies are mostly about how summative assessment can be accomplished using ICT (Gikandi et al. 2011), whereas how online formative assessment can bring insights into computers serving assessment, and in turn learning, remains understudied. The rationale of this study is as follows:

  1. The existing studies on teacher evaluation are product-oriented in nature and provide little information on teaching practice (Bastian et al. 2016; Henry et al. 2010; Skedsmo and Huber 2017). Product-oriented approaches to teacher evaluation include teacher evaluation questionnaires, classroom observation, individual teacher interviews, teacher self-evaluation, and teacher testing, all of which suffer either from reliability concerns or from providing little insight into how teaching practice can be improved (DeLuca et al. 2016; Duckor et al. 2014; Feistauer and Richter 2016; Marsh et al. 2009; Santiago and Benavides 2009; Smith et al. 2004). The inadequacy of product-based approaches to teacher evaluation has led practitioners to consider more process-based teacher evaluation (Darling-Hammond et al. 2013) in which teachers notice the moment-to-moment construction of their teaching (Navidinia et al. 2015).

  2. Not equally well documented in research is how computer-mediated alternative assessment is practiced in teacher education (Kabilan and Khan 2012). The existing literature identifies several shortcomings in educational systems that rely on less integrated practices: in e-portfolio writing of any type, in the absence of interaction with a more knowledgeable peer, knowledge constructed through self-scaffolding may not be internalized (Wu and Pedersen 2011). Computer-based scaffolding should be conducted through interactions that facilitate metacognition and reflection upon thinking processes; various agents should provide scaffolding, as no single tool meets all purposes.

  3. No research has investigated the potential of online teacher summative and formative assessment in a comparative mode. The novelty of the present study lies not only in its comparative design but also in its use of new online instruments that are potentially contributive to the field in that they provide mediums for collaboration among the participants.

To achieve the objective of this study, which is to investigate the effect of online summative and formative assessment through e-portfolios and an electronic collaborative discussion forum (ECDF) on teacher competence improvement, the previous research on online formative and summative assessment was reviewed to inform the organization and design of the instruments, the instruments were designed and evaluated to assure their psychometric properties, and the results are critically reviewed and discussed to offer implications and to identify potentially interesting areas of research missed in this study.

Literature review

Teacher education, teacher evaluation, and teacher competences

There are several studies on the necessity and importance of teacher evaluation for making principled and informed decisions for quality improvement (Abbas 1994; Al-Thumali 2011; Navidinia et al. 2015). The teaching profession involves the construction and reconstruction of oneself through studying the factors that determine its quality (Potolea 2008). Teacher development requires the construction of teacher competences, professional skills that help teachers perform successfully (Blašková et al. 2014), through critical analysis of the teaching act and policy planning, which leads to designs that are compatible and coherent with respect to the teaching and learning process and educational objectives. Developing teaching competence equips teachers with the cognitive and affective resources to solve problems in an efficient, coherent, and dynamic way (Duţă et al. 2014).

Four teacher competences are identified by Zimpher and Howey (1987) and they are as follows:

  1. Clinical competence: teachers' skills in responding immediately to expected and unexpected problems in teaching practice and solving them efficiently.

  2. Personal competence: teachers' interactive capacities in establishing appropriate relations and interactions with the self, students, and colleagues.

  3. Critical competence: critical evaluation of social inequities and reconstruction of social practices.

  4. Technical competence: preplanning teaching and learning activities and deciding how they are going to be measured.

How teacher competences can be improved and how competence improvement can be tracked have been at the center of attention. Action research has been utilized to identify incidents that indicate competence development and improvement (Lasauskienė et al. 2015). There have also been various attempts to evaluate teachers in order to track indicators of performance that reflect competence improvement and transformation. Traditional approaches included teacher evaluation questionnaires, teacher interviews, and teacher portfolios, which evaluated immediate performance rather than the underlying processes (Imhof and Picard 2009) and provide no clear idea of what good teaching practice is (Mansvelder-Longayroux et al. 2007).

Recently, the Balanced Score Card (BSC), originally used for "translating the organization's strategy and vision to objectives and measures and targets from financial, customer, internal business processes perspectives" (Hughes and Pate 2012, p. 59) to maximize product sales and income, has been introduced as an instrument for quality assessment in educational settings. It can evaluate aspects that are beyond students' capacity to rate. Table 1 shows what information a teacher balanced score card can provide from different perspectives, including the institutional, departmental/administrative, and learning and growth perspectives proposed by Hughes and Pate (2012).

Table 1 The classic balanced scorecard (BSC) versus the teaching balanced scorecard (TBSC)

There have been several attempts to identify the teacher performances that relate to teacher competencies. Lasauskienė et al. (2015) used the educational project method of development (action research). In this method, teachers verify in practice, evaluate, and ground new educational ideas and performance. The analysis of students' reflections elicited themes with related categories and subcategories on which teacher performances are based. For example, one of the points students reflected on was "working in this project allowed me to make various decisions," which led the teacher to assign the category "independent involvement and encouragement of self-involvement." In the present study, the researcher used the guidelines of the Competency Framework for Teachers proposed by the Department of Education and Training in Australia (2004) in preparing the indicators of teacher competences. The procedure for operationalizing teacher competences in practice is discussed in the "Method" section. The balanced score card in this study was prepared to evaluate teacher competences and how they develop through portfolio writing and teachers' collaborative discussions.

ICT and teacher education

With the advancement of ICT, computers are no longer considered merely tools to store, analyze, or exchange information; rather, they are considered learning tools (Gil-Flores et al. 2017). Accordingly, how ICT can help pre-service and in-service teachers boost their teaching has been investigated thoroughly. The studies range from the impact of computer-mediated teaching on teacher knowledge (Zottmann et al. 2013), to the factors determining teachers' tendency to integrate ICT into their classes (Kreijns et al. 2013), to the validation of instrumental and methodological aspects of ICT use in classes (Gold and Holodynski 2017), to the psychological and emotional dimensions of ICT integration (García-Martín and García-Sánchez 2016), and to leadership and infrastructure challenges (Chua and Chua 2017; Gil-Flores et al. 2017).

ICT and teacher assessment

Various studies document how ICT has helped in assessing teachers, including self- and peer assessment, video-based assessment, and portfolio writing. Questionnaire data and self- and peer assessment on wiki writing projects indicated that pre-service teachers consider formative assessment approaches, including self-assessment through wiki writing and peer assessment, helpful (Ng 2016). A study of the effect of peer assessment on the problem-solving skills of teachers supported by online learning activities indicated that peer feedback affected teachers' performance in problem solving, and that feedback function and direction predicted feedback use (Çevik et al. 2015).

Video-based assessment of one's teaching, together with content analysis of the video recordings and pre-post student survey responses, indicated that extensive behavioral modeling, targeted behavioral modeling, and independent problem solving are the three elements related to the efficacy of video-based assessment in teacher development (Koh and Chai 2016). Zottmann et al. (2013) indicated that the position of annotations made by learners during their analysis of digital video cases in teacher education in computer-supported collaborative learning displayed a positive correlation between the application and the acquisition of professional knowledge. Qualitative and quantitative in-depth analysis of teachers' perspectives on the use of digital video annotation, where teaching candidates could add explicit and implicit comments on one's teaching, indicated that it is important to address the cultural and psychological aspects that control one's emotions when one's teaching behavior is assessed (Picci et al. 2012).

Kabilan and Khan's (2012) study of e-portfolio writing indicated that it helped teachers identify their strengths and weaknesses and that it improved six competences: "developing understanding of an effective teachers' role, developing teaching approaches/activities, improving linguistic abilities, comprehending content knowledge, gaining ICT skills and realization of the need to change mindset." Remarkable change has also been reported in teachers' formative assessment practices as a result of early childhood education teachers' e-portfolio writing (Hooker 2015). Content analysis, reflections, and interviews of EFL teachers' e-portfolios indicated how teachers' documentation, organization, creation, and sharing of information and materials in designing their e-portfolios contributed to their professional development (Bala et al. 2012).

With the introduction of social constructivism to learning, teacher learners' development is seen as a connected, process-oriented endeavor rather than a product-oriented, information-processing one (Dede 2006). Hence, teachers are no longer seen as conveyor belts moving information from supervisors to students; teaching knowledge is constructed through interaction between discourse communities. Accordingly, the negative climate associated with scale-based teacher evaluation as acceptable, satisfactory, or the like is mitigated by the supportive and collaborative climate of conversations (Danielson 2001; Danielson and McGreal 2000). Various online collaboration tools have been proposed to foster social constructivism. Telecollaboration, an online intercultural exchange, is reported to effectively influence teacher perception of IT integration in classes and its implementation, and in turn student satisfaction, with a call for teacher presence (Turula 2017). A study on online teacher collaboration has also indicated that it may help teachers acquire procedural knowledge and skills (competences) (Vinagre 2016). Membership categorization analysis of the computer-mediated technology used in the study by Cho (2016) indicated that pre-service teachers establish community-building features that help them in mutual engagement, joint enterprise, and shared repertoire. In addition, content analysis and chi-square analysis of 35 h of coded design talk during the collaborative design of ICT lessons indicated that teachers consider pedagogical content knowledge, derived from an emphasis on idea development, perceptions of institutional considerations, and interpersonal factors, as key parameters in designing ICT lessons, which implies professional development through collaboration (Koh and Chai 2016). Online communication and online help-seeking when mentoring failed in pre-service teacher education classes were reported to affect self-efficacy, epistemological beliefs, and perceived benefits, and to affect subsequent self-regulated learning (Liu 2017).

The study

The novelty of this study lies in its attempt to address the need for a comparative study of different types of alternative assessment in online modules and the need for more integrated and connected online techniques to foster constructivism in student teacher education, and in its use of new techniques that are potentially contributive to the field. The present study investigates the effect of online summative assessment and two forms of online formative assessment, e-portfolio writing and an electronic collaborative discussion forum, on teacher competence improvement. The following research questions are posed. Since the design of the study is a time series design, each question is stated in such a way that it accounts for the effect of the previous assessment intervention.

  1. Does summative teacher assessment have any significant effect on the improvement of the teacher competences of Iranian EFL teachers?

  2. Does teacher online portfolio writing have any significant effect on the improvement of the teacher competences of Iranian EFL teachers after removing (controlling for) the effect of summative assessment?

  3. Does the electronic collaborative discussion forum (ECDF) have any significant effect on the improvement of the teacher competences of Iranian EFL teachers after removing (controlling for) the effect of teacher online portfolio writing?

Method

Teacher participants

Thirty Iranian MA students of English as a foreign language, male (n = 9) and female (n = 21), studying at state and private universities in Iran and working in the English language center of Islamic Azad University, Karaj branch (KIAU), with 5–8 years of teaching experience, were invited to take part in the study. They ranged from 24 to 30 years of age (mean = 27). Since the program was a free teacher education course in the summer school of the researcher's institution, all teachers took part voluntarily. Three criteria were used for inclusion of teachers in this study. First, although there were more than 30 teachers in the program, only those with comparable teaching experience who were teaching at the time of the study were included. Second, teachers who were identified as learner-centered teachers with technology on the basis of their answers to a teacher-type questionnaire taken from Admiraal et al. (2017) were included. The questionnaire identifies five types of teachers: learner-centered teachers with technology, teachers critical of technology use in school, teachers uncomfortable with technology, teachers uneasy with learner-centered teaching, and teachers without a clear-cut stance. Third, teachers who declared high computer confidence and high frequency of computer use on their registration form for the program were included. This research was a self-funded project. To observe research ethics, teachers were informed about the research, were assured that their responses were confidential and would only be used for research purposes, and signed a consent form for the use of their responses in this project.

Assistant researchers

Five Ph.D. candidates, male (n = 1) and female (n = 4), doing their Ph.D. program in teaching English as a foreign language at the researcher's institution were invited to assist the researcher and were paid for their assistance. They ranged from 27 to 36 years of age (m = 31.5). They assisted at several stages of the study, including the participant selection phase, the teacher briefing sessions on e-portfolio writing, and the membership acceptance stage of the e-collaborative discussion forum (ECDF). They also conducted classroom observations, rated e-portfolio writing through the e-writing forum (EWF), and conducted log analysis of the ECDF to fill out the teacher balanced score card (TBSC) for summative assessment. TBSCs were completed for the teachers four times: at the pretest stage through classroom observation, and then (a) after video-based teacher induction and online summative assessment through classroom observation, (b) after video-based teacher induction and online formative portfolio writing through the EWF, and (c) after video-based teacher induction and online formative collaborative discussion through the ECDF.

Instruments

Since there are three assessment interventions in this study, the e-writing forum was implemented and used in three ways. In online summative assessment, after teachers received professional videos as teacher induction, they were asked to create accounts in the forum, and they were provided with the assistant researchers' analytic comments on their teaching in the classes the assistant researchers observed. In online formative e-portfolio writing, the forum was used so that teachers could archive their reflections on the teaching they did after each session of professional video-based teacher induction. They were instructed in portfolio writing. At this stage, online formative portfolio assessment was done individually by the teachers in the forum, and their archives of e-portfolios were accessible to the assistant researchers as admins of the website. At the collaborative discussion online formative assessment (ECDF) stage, the forum was used collaboratively: the admins grouped the teachers into six groups of five, and they could use the connective potential of the website, which is discussed later in this section.

E-writing forum (EWF)

A website named E-writing forum (e-writingforum.ir) was launched to achieve the objectives of the study. The website's user friendliness and its potential for creating a medium for research purposes were examined in other studies by the author (Mohamadi 2018a, b; Mohamadi and Malekshahi 2018; Mohammadi 2017). Some of the features of this website are as follows: (1) sharing with anyone in such a way that no finished file is uploaded; (2) accepting or rejecting changes, which means the possibility of tracking changes and controlling what makes it into the writing tasks and what does not; (3) in-line comments provided through collaboration on specific pieces of text; and (4) discussion tools by which participants could share ideas, review changes, and gather feedback in one place. The website also allowed uploading and downloading any sort of file. Teachers were supposed to plan, organize, monitor, analyze, synthesize, assess, and evaluate their writing through this medium. Some of the EWF's potential could not be used because teachers were not grouped in the forum during the summative assessment phase, although they could interact with the assistant researchers. The reason was the type of assessment intervention under study: only in the collaborative online formative assessment phase did teachers have the EWF's collaboration potential to collaborate with one another.

E-portfolio writing

Teachers were asked to write individual e-portfolios. Two portfolio writing exercise sessions were held with the teachers in groups of 10. The purpose was to give the teachers a concrete example of what a portfolio is and how it should be written and to give them the opportunity to start their own portfolio writing. The teacher portfolio consisted of eight components, each requiring a specific assignment: "(1) roles, responsibilities, and goals, (2) representative course materials, (3) assessment and extent of student learning, (4) descriptions and evaluations of teaching, (5) course and curriculum development, (6) activities to improve your and others' instruction, (7) contributions to institution or profession, (8) honors or recognitions", which were designed, organized, and validated in similar studies (Mohamadi 2018a).

The teacher writing portfolio consists of a reflective evaluation of teachers' growth, references to evidence of growth through the best exemplars from their teaching archives, their vision of the problems they face in teaching and how they plan to solve them, and their evaluation of the feedback they received from mentors and how they responded to the comments. Teacher portfolios were assessed according to a predetermined scoring scale, which is discussed later in this section.

E-collaborative discussion forum (ECDF)

In order to help the teachers integrate the knowledge and insights they received through professional video-based practice of teaching writing, they were grouped into six groups of five. This time, they could use the connecting potential of the forum to share the teaching insights they gained through video watching, share and collaboratively construct knowledge, reflect collaboratively, and identify teaching problems and suggest solutions. The assistant researchers were asked to observe the group discussions on the ECDF without taking part in them; their accounts were hidden, and the teachers were unaware of the assistant researchers' observation of the ECDF. The research assistants were to complete the TBSC on the basis of log analysis of the ECDF to find indicators of competence change relative to the previous e-portfolio writing assessment intervention. The scoring of the log analysis of the ECDF is discussed in the following section. A sample episode of teacher discussion is provided in Appendix 1.

ECDF and e-portfolio scoring schemata

Bakker et al.'s (2011) schemata were used to assess the electronic portfolios of the teachers and the log analysis of the ECDF. The schemata required that:

“assistant teachers look for negative and positive evidences of teacher competence, look for (counter) evidences of what contributes to professional thinking and acting, differentiate less and more important evidences and assign score, specify if entire performance can be attributed to specific level of competence, and write a brief summary in which comments on scores were given and important arguments and evidences are cited, and consult a fellow assessor to compare the assigned scores, discuss the scores and the rationale by providing evidences and arguments, and determine whether to hold on to the original score or make adjustments (Mohamadi 2018a, p. 33).”

The scoring method was designed in such a way that the personal beliefs of the assessors were kept to a minimum. In addition, the inter-rater reliability of the five research assistants' scoring of the teachers' electronic portfolio writing was considered as an index of reliability. There was significant agreement between the raters who rated the teachers' portfolio writing performance on the TBSC (r (28) = .911, p = .001, representing a large effect size) and between the raters who rated the teachers' performance on the electronic collaborative discussion forum (r (28) = .891, p = .001, representing a large effect size).
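As an illustration of the reliability index described above, the following is a minimal Python sketch of estimating inter-rater agreement as the mean pairwise Pearson correlation among the five raters. The simulated ratings and variable names are assumptions for illustration, not the study's actual TBSC data or the authors' SPSS procedure.

```python
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr

# ratings[i, j]: score rater j assigned to teacher i (30 teachers, 5 raters) -- simulated placeholders
rng = np.random.default_rng(0)
true_scores = rng.normal(18, 1.5, size=30)                      # hypothetical "true" competence levels
ratings = true_scores[:, None] + rng.normal(0, 0.5, size=(30, 5))

# average the Pearson r over all rater pairs as a simple agreement index
pairwise_r = [pearsonr(ratings[:, a], ratings[:, b])[0]
              for a, b in combinations(range(ratings.shape[1]), 2)]

print(f"mean pairwise inter-rater r = {np.mean(pairwise_r):.3f}")
```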

Teacher balanced score card

An inventory for teacher competence evaluation with 65 items rated on a five-point Likert scale (unacceptable, slightly unacceptable, neutral, slightly acceptable, and acceptable), designed and validated in a study by Mohamadi and Malekshahi (2018), was used in this study (Appendix 2). Table 2, taken from the aforementioned study, shows the structure of the TBSC.

Table 2 Structure of teachers balanced score card

To fill in the TBSC at the pretest and posttest stages, the assistant researchers used the first classroom observation scores, the content analysis of teachers' e-portfolios, and the log analysis of the ECDF to assess teachers' competence status at the onset of the study as the pretest and at the last session as the posttest. The purpose was to track changes in the teacher competences measured on the TBSC. There was significant agreement between the raters who rated the teachers' performance on the pretest of the teacher balanced score card (r (28) = .815, p = .001, representing a large effect size). Inter-rater agreement on the TBSC after each assessment intervention of summative assessment, e-writing portfolio, and ECDF is reported in the related sections above.

Video-based teacher induction materials

The videos were selected on the basis of how well they covered the teaching of different genres of writing and their audio and video quality. A series of professional videos on how to teach writing was selected from YouTube.com. Following Zottmann et al. (2013), three criteria were used in choosing the videos. The first criterion was connectivity, which concerns how well the videos relate to teaching and learning theories; the more they are based on such theories, the more connected they are. Therefore, videos that follow a process writing approach were selected to best serve the purpose of the study. The second criterion was complexity, which concerns how suitable the videos are for the level of teaching experience of the participants; the researcher selected professional videos of teaching writing whose flow of information was easy to follow. The third criterion was ambiguity, which relates to the number of possible distractors; videos whose presentation was of high visual and auditory quality and clarity were selected.

Drop-in observations

The assistant researchers made four drop-in observations of the teachers' classes. The observations were open-ended, non-participant observations. The assistant researchers were asked to rate the teachers' competences on the TBSC measurement scale on the basis of the performance indicators they observed in class. Classroom observations were conducted for online summative assessment only. Since the class pace was high, the observers were required to audio-record the class for later coding of competence indicators. Since video recording was not possible due to administrative constraints of the institution, the assistant researchers were asked to attend to non-verbal incidences of competence change.

Procedure and data analysis

After participant selection, several briefing sessions were held on how to work with the e-writing forum. Teachers were instructed on how to create an account on the website, and information about the different potentials of the website was provided in terms of both content and procedure. Teacher induction classes were held in the same way, in terms of type of videos, classroom management, and teacher-directed feedback, across all assessment interventions.

The first phase of the study started with the assistant researchers' classroom observation of the teaching the teachers did in their classes after professional video-based teacher induction, in order to fill in the TBSC as the pretest. The assistant researchers provided the results, along with their analytic comments on the teachers' teaching, through the EWF. This phase lasted seven sessions. At the end of the seventh session, the assistant researchers filled in the TBSC as the first posttest to track competence change as a result of online summative assessment.

The second phase of the study started with teachers' portfolio writing, which marked the beginning of formative assessment. Since the study had a time series design, the first posttest of the TBSC served as the pretest for this second round of treatment. Teachers were briefed on portfolio writing, and a sample portfolio and how it should be written were presented in class. Teachers were then asked to do online portfolio writing after continued classroom video-based teacher induction. Teachers' online portfolio writing was assessed as described above, teachers were provided with the assistant researchers' comments on their portfolio writing through the writing website, and they could interact with the assistant researchers. Online portfolio writing lasted seven sessions. The assistant researchers' assessment of the online portfolio writing was considered the online formative assessment. The second posttest of the TBSC was filled in on the basis of the indicators of competence change the assistant researchers could recognize in the e-writing portfolios.

Given the time series design of the study, the second posttest of the TBSC marked the beginning of the third phase, online formative assessment through log analysis of the electronic collaborative discussion forum (ECDF). At this stage, teachers were grouped into six groups of five. They were asked to discuss critically the induction they received after continued classroom video-based teacher induction. The third TBSC posttest was filled in by the assistant researchers through log analysis of the discussion forum, coding the indicators of competence change through an open coding procedure. This study is a quantitative pretest-posttest time series design. The first research question, which investigated whether online summative assessment had any effect on teacher competences, was answered using a paired-samples t test; the second and third research questions, which investigated the effects of e-portfolio writing and the ECDF after controlling for the previous interventions, were answered using repeated measures ANCOVA.

Results

The data were analyzed using a paired-samples t test and repeated measures ANCOVA, both of which assume normality of the data. As displayed in Table 3, the absolute values of the ratios of skewness and kurtosis over their standard errors were lower than 1.96; hence, normality of the data was assumed.

Table 3 Descriptive statistics; testing normality assumption
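The following is a minimal sketch of the normality screen described above, assuming the conventional approximations SE(skewness) ≈ √(6/n) and SE(kurtosis) ≈ √(24/n) and placeholder scores rather than the study's data; SPSS's exact standard-error formulas differ slightly.

```python
import numpy as np
from scipy.stats import skew, kurtosis

scores = np.random.default_rng(1).normal(18, 1.5, size=30)   # placeholder TBSC scores
n = len(scores)

# ratio of each statistic to its (approximate) standard error
z_skew = skew(scores) / np.sqrt(6 / n)
z_kurt = kurtosis(scores) / np.sqrt(24 / n)                  # excess kurtosis, as reported by SPSS

print(f"skewness ratio = {z_skew:.2f}, kurtosis ratio = {z_kurt:.2f}")
print("normality assumed" if max(abs(z_skew), abs(z_kurt)) < 1.96 else "normality questionable")
```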

Online summative assessment of teacher competences

A paired-samples t test was run to compare the teachers' means on the pretest and the first posttest, administered after video-based induction and online summative assessment, in order to probe the first null hypothesis. Based on the results displayed in Table 4, the participants had a higher mean on the posttest of the TBSC (M = 18, SD = 1.48) than on the pretest (M = 12.03, SD = .964).

Table 4 Descriptive statistics; pretest and posttest of teachers’ balanced score cards

The results of the paired-samples t test (t (29) = 42.72, p = .001, r = .992, representing a large effect size) indicated a significant improvement in the mean score from pretest to posttest (Table 5). The r effect size should not be confused with the Pearson r value; it was computed as r = √(t² / (t² + df)), i.e., √(42.726² / (42.726² + 29)) = .992. Cohen's d for the paired-samples t test was 4.46. Cohen's d can be greater than one when the mean difference (18 − 12.03 = 5.97) is larger than either of the standard deviations (SD for the pretest = .964, SD for the posttest = 1.486).
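The computations reported above can be reproduced in outline with the following sketch; the scores are simulated placeholders, and the Cohen's d shown is the common average-SD variant, which may differ slightly from the value the authors report.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
pretest = rng.normal(12.03, 0.96, size=30)    # placeholder pretest TBSC scores
posttest = rng.normal(18.00, 1.49, size=30)   # placeholder first-posttest scores

res = ttest_rel(posttest, pretest)
df = len(pretest) - 1

# r effect size: r = sqrt(t^2 / (t^2 + df))
r_effect = np.sqrt(res.statistic**2 / (res.statistic**2 + df))

# Cohen's d (average-SD variant): mean difference over the mean of the two SDs
d = (posttest.mean() - pretest.mean()) / ((posttest.std(ddof=1) + pretest.std(ddof=1)) / 2)

print(f"t({df}) = {res.statistic:.2f}, p = {res.pvalue:.3f}, r = {r_effect:.3f}, d = {d:.2f}")
```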

The effect of online formative e-portfolio writing on teacher competences

After the first posttest, the teachers completed electronic portfolios on the video-based induction, which was followed by the second posttest. A repeated measures ANCOVA was run to compare the participants' means on the second and first posttests, controlling for the possible effect of their entry teacher competence ability as measured through the pretest. Before discussing the results, it should be noted that the assumptions of homogeneity of variances and homogeneity of regression slopes were not checked because the present study included a single group. However, the assumption of a linear relationship between the dependent variable (the second posttest) and the covariate (the pretest) was retained. As displayed in Table 7, the results of the ANOVA test of linearity (F (1, 26) = 50.08, p = .001) indicated that the null hypothesis of no linear relationship between the two variables was rejected; hence, the linearity assumption was met.

Table 5 Paired-samples t test; pretest and posttest of teachers’ balanced score cards

Table 6 displays the effect size indices for the test of linearity discussed in Table 7 above. The results (r = .811, Eta = .812, and Eta squared = .659) all indicated large effect sizes. Thus, it can be concluded that the test of linearity enjoyed both statistical significance and large effect sizes.

Table 6 Measures of association between posttest of portfolio assessment and pretest
Table 7 Test of linear relationship between posttest of portfolio assessment and pretest

SPSS produces four effect size values for the test of the linearity assumption: the r value discussed above, its squared value, eta, and eta squared. These four values are interrelated:

  • .657 = .811 × .811

  • .659 = .812 × .812

  • .811 ≈ square root of .659

R as an effect size has three benchmark values: .10 = weak, .30 = moderate, and .50 = large.

Eta and eta squared as effect sizes have three benchmark values: .01 = weak, .06 = moderate, and .14 = large.

Based on the results displayed in Table 8, it can be concluded that the participants, after receiving the electronic portfolio intervention, had a higher mean on the second posttest (M = 29.10, SE = .211) than on the first posttest (M = 18, SE = .129) after controlling for the effect of the pretest.

Table 8 Descriptive statistics; first and second posttest with pretest

The results of the repeated measures ANCOVA (F (1, 28) = 10.77, p = .003, partial η2 = .278, representing a large effect size) (Table 9) indicated that the participants, after receiving the electronic portfolio intervention, had a significantly higher mean on the posttest of the e-portfolio phase than on the first posttest of the TBSC after controlling for the effect of the pretest.

Table 9 Repeated measures ANCOVA; first and second posttests with pretest
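For readers who want to replicate this kind of analysis outside SPSS, the sketch below approximates the repeated measures ANCOVA reported above with a mixed model: the two posttests are stacked in long format, a random intercept is fitted per teacher, and the pretest enters as a covariate. The data, variable names, and the mixed-model formulation are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 30
pretest = rng.normal(12, 1.0, size=n)
post1 = pretest + rng.normal(6, 1.0, size=n)    # placeholder scores after summative assessment
post2 = post1 + rng.normal(11, 1.5, size=n)     # placeholder scores after e-portfolio writing

long = pd.DataFrame({
    "teacher": np.tile(np.arange(n), 2),
    "phase": np.repeat(["post1", "post2"], n),
    "pretest": np.tile(pretest, 2),
    "score": np.concatenate([post1, post2]),
})

# random intercept per teacher; the 'phase' coefficient estimates the intervention effect
model = smf.mixedlm("score ~ phase + pretest", data=long, groups=long["teacher"])
print(model.fit().summary())
```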

SPSS does not produce effect size values for one-way to n-way ANOVA; however, it produces partial eta squared for ANCOVA, MANOVA, and repeated measures analyses.

The difference between eta squared, discussed above, and partial eta squared lies in their computation (a short numeric sketch follows the list below):

  • Eta squared = SS between/SS total.

  • Partial Eta squared = SS between/(SS between + SS error).

  • Field (2013, pp 533–535) discussed these effect sizes.
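The two formulas above amount to simple ratios of sums of squares; the snippet below shows the arithmetic with illustrative values rather than the study's SPSS output.

```python
def eta_squared(ss_between: float, ss_total: float) -> float:
    # proportion of total variance attributable to the effect
    return ss_between / ss_total

def partial_eta_squared(ss_between: float, ss_error: float) -> float:
    # proportion of (effect + error) variance attributable to the effect
    return ss_between / (ss_between + ss_error)

ss_between, ss_error, ss_other = 40.0, 104.0, 60.0   # placeholder sums of squares
ss_total = ss_between + ss_error + ss_other

print(f"eta squared         = {eta_squared(ss_between, ss_total):.3f}")
print(f"partial eta squared = {partial_eta_squared(ss_between, ss_error):.3f}")
```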

The effect of online formative collaborative discussion (ECDF) on teacher competences

After the second posttest, the teachers engaged in the electronic collaborative discussion forum (ECDF) on the video-based induction, which was followed by the third posttest. A repeated measures ANCOVA was run to compare the participants' means on the third and second posttests, controlling for the possible effect of their entry teacher competence ability as measured through the pretest and the first posttest. That is to say, the teachers' performance on the posttest of the ECDF might have been affected by the pretest and also by the first posttest.

Before discussing the results, it should be mentioned that the assumption of a linear relationship between the dependent variable (the third posttest) and the covariates (the pretest and the first posttest) was met. As displayed in Table 10, the results of the ANOVA test of linearity (F (1, 26) = 62.67, p = .001) indicated that the null hypothesis of no linear relationship between the posttest of the ECDF and the pretest was rejected; hence, the linearity assumption was met.

Table 10 Test of linear relationship between posttest of ECDF and pretest

Table 11 displays the effect size indices for the test of linearity discussed in Table 10 above. The results (r = .828, Eta = .846, and Eta squared = .716) all indicated large effect sizes. Thus, it can be concluded that the test of linearity enjoyed both statistical significance and large effect sizes.

Table 11 Measures of association between posttest of ECDF and pretest

The results of the ANOVA test of linearity (F (1, 23) = 112.84, p = .001) (Table 12) indicated that the null hypothesis of no linear relationship between the posttest of the ECDF and the first posttest was rejected; hence, the linearity assumption was met.

Table 12 Test of linear relationship between Posttest of ECDF and first posttest

Table 13 displays the effect size indices for the test of linearity discussed in Table 12 above. The results (r = .884, Eta = .917, and Eta squared = .841) all indicated large effect sizes. Thus, it can be concluded that the test of linearity enjoyed both statistical significance and large effect sizes.

Table 13 Measures of association between posttest of ECDF and first posttest

Based on the results displayed in Table 14, it can be concluded that the participants, after receiving the ECDF intervention, had a higher mean on the third posttest (M = 45, SE = .23) than on the second posttest (M = 29.10, SE = .193) after controlling for the effects of the pretest and the first posttest.

Table 14 Descriptive statistics; posttests of ECDF and posttest of E-portfolio with covariates

The results of the repeated measures ANCOVA (F (1, 27) = 4.90, p = .035, partial η2 = .154, representing a large effect size) (Table 15) indicated that the participants, after receiving the ECDF intervention, had a significantly higher mean on the third posttest than on the second posttest after controlling for the effects of the pretest and the first posttest.

Table 15 Repeated measures ANCOVA; posttests of ECDF and posttest of E-portfolio with covariates

Discussion

This study investigated the effect of online summative and formative assessment on teacher competences. With all else being equal in terms of in-class teacher induction through professional teaching videos, both in content and in procedure, significant improvements in teacher competences were found from online summative assessment to online formative assessment through the two techniques of e-portfolio writing and the electronic collaborative discussion forum (ECDF), with the ECDF having the highest impact.

The results of this study corroborate a number of other studies as far as teacher portfolio writing is concerned, for example, the effect of video portfolios on pre-service teachers' development of coaching competence (Bakker et al. 2011). The difference lies in the instruments used: in this study, electronic written portfolios were investigated to find their effect on teachers' institutional, technical, clinical, and personal competences. In addition, e-portfolios have been found to help teachers move from paper-based assessment of their classes to formative assessment practices that instigate deeper thinking and reflection and, in turn, learning (Hooker 2015). The results of this study also accord with those of Kabilan and Khan (2012), who confirmed the effect of e-portfolio writing on six teacher competences: understanding of the effective teacher's role, developing teaching activities, improving linguistic abilities, comprehending content knowledge, gaining ICT skills, and realizing the need for a change in mindset. The results are also supported by the literature review of Lam (2017), whose review of portfolio assessment confirmed the efficacy of classroom application of learning-supportive portfolio assessment and the pertinent professional learning and development. Chang et al.'s (2013) investigation of web-based portfolio assessment with the purpose of fostering self-regulation indicated that technology-enhanced and interactive digital learning environments help students regulate their own learning and boost learning results. However, the participants in their study were students and the instruments used were a web-based portfolio assessment application and a self-regulation questionnaire, whereas in this study teachers were influenced through a web-based intervention and performance-based measures were used.

The results of the present research are also supported in terms of the collaboration dimension practiced in the ECDF. For example, Lee and Brett's (2015) study of dialogic teacher–teacher discussion indicated that dialogic discussion led to teacher transformative learning that supports technology use, and their analysis of the essential features of dialogic discussion supported the development of new online dialogic discussion tools. In addition, the results are supported by Burhan-Horasanlı and Ortaçtepe (2016), whose examination of nine in-service EFL teachers' reflective discussions indicated that online discussion is a potential platform for a teacher community of practice that encourages reflection on, in, and for action. As was the case in this study, reflection on action in the ECDF helped teachers improve their teaching act and their teaching competences. That the reflection occurring in teacher collaboration in the ECDF helped both teachers and students exercise higher cognitive abilities is also supported by research on the effect of computer-supported collaborative learning environments on critical thinking ability (Lin et al. 2016). The reflection inherent in collaborative work helped the teachers receive support through scaffolding in interaction; such reflection through interaction and scaffolding helps teachers move from intramental to intermental status, meaning that, with the support and scaffolding of others, teachers could do what they could not do alone (Peercy and Troyan 2017). The positive role of reflection and self-regulation in collaboration is also supported, as reflection in collaboration results in positive socioemotional interactions and group regulatory behaviors (Kwon et al. 2014). To sum up, teacher change from mere delivery of teaching to learners towards more "learner focused teaching" (Richards 2010), which maximizes learning opportunities, is supported in this study.

Despite its interesting findings, this study is limited on various grounds. The first is the absence of a prior investigation of teacher readiness for, and acceptance of, technology. As Teo (2015) indicated through a 23-item self-report instrument on a seven-point scale, pre-service and in-service teachers recognized facilitating conditions and technology complexity as the main factors affecting the efficacy of technology use. Among the many challenges in the use of technology by pre-service teachers, such as negative attitudes of participants, time constraints, and ethical issues, interrupted internet connection is the primary one (Kabilan and Khan 2012). Haphazard infrastructure, such as variable internet speed and quality, may put some users at an advantage over others (Rabiee et al. 2013). Therefore, policy makers need to make principled decisions when it comes to technology use, which requires funding for the necessary infrastructure; otherwise, a digital divide will emerge between those who can afford high-quality internet access and those who cannot. Besides, teacher acceptance and readiness depend heavily on the process of teacher technology competence development (Hung 2016), which requires teacher induction programs of some type.

Despite the contribution this study makes, it might be affected by sources of error the researcher failed to control for. For example, not all teachers spent the same time on the assessment tasks. The variability in time on task might have affected the amount of teacher discussion and, in turn, reflection on action; this might indirectly have affected the amount of support and scaffolding teachers received from their colleagues. Individual accountability is another important issue that most collaboration studies, including the present one, do not fully address. Online collaborative platforms should therefore be designed in such a way that they record not only collaborative work but also individual learning and accountability (Kent et al. 2016). In addition, since research on collaboration can indicate the quality of individual learning, further research can show how computer-aided interaction and collaboration can increase individual learning (Yücel and Usluel 2016).

In addition, how teachers and the teaching act are influenced by computers depends on how computers are integrated into education. This requires policy makers to attend to emotional leadership, especially in educational contexts where the teacher's role is central to education (Chua and Chua 2017), and to infrastructure preparation for implementation (Gil-Flores et al. 2017). Besides, this research was quantitative in nature; mere counts of evidence of learning and teaching incidences may mask other important qualitative accounts, such as teacher and student perspectives on the learning and teaching experiences they have in educational programs.

Conclusion

This study investigated the effect of less integrated online summative assessment and more integrated online formative assessment through e-portfolio writing and an electronic collaborative discussion forum (ECDF) on teacher competences. The results indicated that although all assessment interventions improved teacher competences, the ECDF was a more effective assessment intervention than the other two. This study has several practical implications for teaching practitioners and instructional outcomes.

The discussion forum can be seen as a formative assessment venue in which teachers learn how to make professional decisions about how to intervene to better respond to learner problems. What makes this sort of discussion feasible and accessible are online platforms that facilitate teacher engagement, which requires engaging technological tools and appropriate assessment techniques (Sheard and Chambers 2014). One problem with online assessment platforms is their inefficacy in assigning distinct responsibilities within group work and in supporting groups that work collaboratively and in parallel (Lucas et al. 2017); as a result, they function more like automated feedback providers.

Teacher education is a developmental journey that equips teachers with the skills and experiences to better serve their students. The literature has indicated that reflection at the heart of practice can help teachers make principled decisions about where to start, where to go, and how to reach the destination (Gan and Lee 2016). However, teachers need to acknowledge that development is not linear and unidirectional; rather, it is multidimensional and non-linear, requiring advancement from all perspectives, including the technical, personal, administrative, and learning and growth ones, which occurs as a result of ongoing assessment, evaluation, and regulation of the self.