Abstract
A computer-based testing program has numerous psychometric needs, and it can be easy to get so involved with these psychometric details that other essential points of view are lost. It is important to remember that every time an exam is given there is a person on the other end of the computer screen and to consider the reality of the testing experience for the examinee. Various steps can be taken to reduce examinee stress and make the experience more pleasant (or at least less unpleasant). Hopefully, these steps will reduce confounding variance and help produce a test that is as fair as possible. Those testing programs that are elective (i.e., that examinees are not required to take) have a particular need to ensure that the computerized testing experience is not onerous. There is little benefit in developing a wonderful test that no one chooses to take. This chapter will briefly address some computerized testing issues from the examinees’ point of view.
Copyright information
© 2002 Springer Science+Business Media New York
Cite this chapter
Parshall, C.G., Spray, J.A., Kalohn, J.C., Davey, T. (2002). Examinee Issues. In: Practical Considerations in Computer-Based Testing. Statistics for Social and Behavioral Sciences. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-0083-0_3
DOI: https://doi.org/10.1007/978-1-4613-0083-0_3
Publisher Name: Springer, New York, NY
Print ISBN: 978-0-387-98731-6
Online ISBN: 978-1-4613-0083-0