Abstract
Student ratings of instruction are one of the most widely used methods of measuring teaching effectiveness in post-secondary education (Abrami, d’Apollonia, and Cohen 1990). Wilson and Wilson (1977), using a student evaluation of instruction form commonly known as the Student Instructional Rating System (SIRS Research Report #2, 1971), confirmed the existence of five predominant factors: (1) Course Organization, (2) Course Demands, (3) Student-Instructor Interaction, (4) Instructor Involvement, and (5) Student Interest. Upon closer inspection, these five factors appear to represent both inputs and outcomes of the teaching process; however, past studies have not examined them from a cause-and-effect perspective. This study applies structural equation modeling, a statistical technique frequently used by marketing academicians, to a widely used student rating of instruction form to seek evidence of nomological validity. Instructor Involvement and Student Interest are treated as outcomes of the teaching process, with a reciprocal causal relationship between them, while the remaining three factors capture inputs to the teaching process. The causal linkages correspond to the hypotheses tested in this study. The causal model is tested using data from routine teaching evaluations of one instructor at a mid-sized midwestern university. The scale items (of the student evaluation form based on the SIRS) were tested for internal consistency and for discriminant and convergent validity. The factors were then tested for causal relationships, and the fit indices collectively indicate a satisfactory model fit. The resulting nomological relationships between the input and the purported outcome variables present an interesting scenario: of the three input variables, only one, Course Organization, has a consistent significant effect on both Instructor Involvement and Student Interest.
A plausible explanation may lie in students' inability to handle uncertain situations, the framing of the questions, and the transient nature of student-teacher interaction at our universities. This study needs to be replicated with a larger sample of instructors in order to gain a better understanding of a topic as vital, and sometimes controversial, as student ratings of instruction.
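To make the hypothesized path structure concrete, the following sketch simulates the three input factors and the two outcome factors and recovers the structural coefficients by ordinary least squares. This is not the authors' estimation procedure (they used structural equation modeling, and the reciprocal link between the two outcomes is omitted here for simplicity); all variable names, sample sizes, and coefficient values are hypothetical, chosen only to mirror the reported pattern in which Course Organization dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of student ratings

# Hypothetical standardized input-factor scores.
course_org = rng.normal(size=n)    # Course Organization
course_dem = rng.normal(size=n)    # Course Demands
interaction = rng.normal(size=n)   # Student-Instructor Interaction

# Hypothetical structural equations: only Course Organization carries a
# substantial path to both outcomes, mirroring the paper's finding.
instr_involvement = (0.6 * course_org + 0.05 * course_dem
                     + rng.normal(scale=0.5, size=n))
student_interest = (0.5 * course_org + 0.05 * interaction
                    + rng.normal(scale=0.5, size=n))

# Recover the path coefficients for each outcome by least squares.
X = np.column_stack([course_org, course_dem, interaction])
b_involve, *_ = np.linalg.lstsq(X, instr_involvement, rcond=None)
b_interest, *_ = np.linalg.lstsq(X, student_interest, rcond=None)

print(np.round(b_involve, 2))  # Course Organization path dominates
print(np.round(b_interest, 2))
```

In the simulated data, the estimated Course Organization paths are large for both outcomes while the other input paths are near zero, which is the qualitative pattern the abstract describes.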
References
Abrami, P. C., S. d’Apollonia, and P. A. Cohen. 1990. "Validity of Student Ratings of Instruction: What We Know and What We Do Not." Journal of Educational Psychology 82 (2): 219–231.
Wilson, T. C., and P. A. Wilson. 1977. "Difference in Student Evaluations from Business vs. Other Colleges." Proceedings of the Southern Marketing Association: 277–279.
Copyright information
© 2015 The Academy of Marketing Science
Cite this paper
Paswan, A.K., Young, J.A. (2015). Student Ratings of Instruction: a Causal Analysis of Process Variables. In: Wilson, E.J., Hair, J.F. (eds) Proceedings of the 1996 Academy of Marketing Science (AMS) Annual Conference. Developments in Marketing Science: Proceedings of the Academy of Marketing Science. Springer, Cham. https://doi.org/10.1007/978-3-319-13144-3_83
DOI: https://doi.org/10.1007/978-3-319-13144-3_83
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-13143-6
Online ISBN: 978-3-319-13144-3
eBook Packages: Business and Economics; Business and Management (R0)