
Part of the book series: Statistics for Social and Behavioral Sciences (SSBS)


Abstract

A computer-based testing program has numerous psychometric needs, and it is easy to become so absorbed in these psychometric details that other essential points of view are lost. It is important to remember that every time an exam is given, there is a person on the other end of the computer screen, and to consider the reality of the testing experience for that examinee. Various steps can be taken to reduce examinee stress and make the experience more pleasant (or at least less unpleasant). Ideally, these steps will reduce confounding variance and help produce a test that is as fair as possible. Testing programs that are elective (i.e., that examinees are not required to take) have a particular need to ensure that the computerized testing experience is not onerous: there is little benefit in developing a wonderful test that no one chooses to take. This chapter briefly addresses some computerized testing issues from the examinees’ point of view.
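To make "confounding variance" concrete, a minimal classical-test-theory sketch may help; the notation below is ours, not the chapter's, and it assumes the three score components are uncorrelated. An examinee's observed score X is treated as a true score T, plus a systematic construct-irrelevant component S (for example, score distortion from interface-induced stress or unfamiliarity with computers), plus random error E:

```latex
% Minimal illustration (our notation, not the chapter's): observed score X
% decomposed into true score T, a systematic construct-irrelevant component S
% (e.g., interface-induced stress), and random error E.
\[
  X = T + S + E
\]
% If the three components are uncorrelated, the observed-score variance
% splits additively:
\[
  \sigma_X^2 = \sigma_T^2 + \sigma_S^2 + \sigma_E^2
\]
% Easing the examinee's experience aims to shrink \sigma_S^2, the share of
% observed-score variance unrelated to the ability being measured; if
% \sigma_S^2 differs across examinee groups, it also becomes a fairness concern.
```

On this reading, making the test "as fair as possible" means keeping the construct-irrelevant component both small and comparable across examinees.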




Copyright information

© 2002 Springer Science+Business Media New York

About this chapter

Cite this chapter

Parshall, C.G., Spray, J.A., Kalohn, J.C., Davey, T. (2002). Examinee Issues. In: Practical Considerations in Computer-Based Testing. Statistics for Social and Behavioral Sciences. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-0083-0_3

  • DOI: https://doi.org/10.1007/978-1-4613-0083-0_3

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-0-387-98731-6

  • Online ISBN: 978-1-4613-0083-0

