Part of the book series: Statistics for Social and Behavioral Sciences (SSBS)

Abstract

The computerized fixed test (CFT) is the test-delivery method that provides the most direct analogue to paper-and-pencil testing. It administers a fixed-length, fixed-form computerized exam without any type of adaptive item selection. Some earlier literature refers to this delivery method as computer-based testing, or CBT, but that term has gradually come to mean any computer-administered exam. For this reason, and to emphasize the fixed nature of the exam, we will use CFT to identify this delivery method.
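To make "fixed-length, fixed-form, no adaptive item selection" concrete, the short Python sketch below walks through such a delivery loop: every examinee receives the same items in the same order, and nothing about the sequence depends on earlier responses. The Item structure, the present callback, and the number-correct scoring are illustrative assumptions, not code or scoring rules from this chapter.

    # Minimal sketch of a computerized fixed test (CFT) delivery loop.
    # The data structures and number-correct scoring are illustrative
    # assumptions, not the chapter's implementation.
    from dataclasses import dataclass
    from typing import Callable, List


    @dataclass
    class Item:
        item_id: str
        prompt: str
        key: str  # keyed (correct) response


    def administer_cft(form: List[Item],
                       present: Callable[[Item], str]) -> int:
        """Administer a fixed-length, fixed-form test.

        Every examinee sees the same items in the same order; no item is
        chosen adaptively from earlier responses. Returns the
        number-correct score.
        """
        score = 0
        for item in form:               # fixed order, fixed length
            response = present(item)    # display item, collect response
            if response == item.key:
                score += 1
        return score


    if __name__ == "__main__":
        form = [Item("q1", "2 + 2 = ?", "4"),
                Item("q2", "Capital of France?", "Paris")]
        # Simulated examinee who answers "4" to everything.
        print(administer_cft(form, lambda item: "4"))  # -> 1

An adaptive test would replace the simple loop with a step that selects each next item from a pool based on the responses so far; the CFT loop deliberately has no such step.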

Copyright information

© 2002 Springer Science+Business Media New York

About this chapter

Cite this chapter

Parshall, C.G., Spray, J.A., Kalohn, J.C., Davey, T. (2002). Computerized Fixed Tests. In: Practical Considerations in Computer-Based Testing. Statistics for Social and Behavioral Sciences. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-0083-0_6

  • DOI: https://doi.org/10.1007/978-1-4613-0083-0_6

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-0-387-98731-6

  • Online ISBN: 978-1-4613-0083-0
