Abstract
This paper reports on the design and evaluation of a rubric for assessing discussions in online graduate-level education courses. The research had two aims. The first was to develop a discussion rubric that guides graduate students participating in heavily discussion-based online courses. The second was to evaluate the rubric's effect on students' comfort with course expectations and on the quality of the discussions. Qualitative and quantitative course evaluations from four sections of an online graduate course are used to report on the rubric's effectiveness after implementation. Students' average grades also showed a statistically significant increase with the implementation of the rubric.
Additional information
Note: The authors would like to thank Dr. Iris M. Striedieck, who contributed to the development of the original materials from which the rubric used in this study was developed.
Cite this article
Wyss, V.L., Freedman, D. & Siebert, C.J. The Development of a Discussion Rubric for Online Courses: Standardizing Expectations of Graduate Students in Online Scholarly Discussions. TECHTRENDS TECH TRENDS 58, 99–107 (2014). https://doi.org/10.1007/s11528-014-0741-x