Abstract
The processes of test administration and development are both critical elements in any testing program. Chronologically, the development of any test occurs before its administration, and thus the two are more commonly paired as “test development and administration.” However, in discussing computerized testing programs, it is often useful to address the administration issues first and then turn to the development considerations. This is the approach followed in this chapter.
© 2002 Springer Science+Business Media New York
Cite this chapter
Parshall, C.G., Spray, J.A., Kalohn, J.C., Davey, T. (2002). Issues in Test Administration and Development. In: Practical Considerations in Computer-Based Testing. Statistics for Social and Behavioral Sciences. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-0083-0_2
Print ISBN: 978-0-387-98731-6
Online ISBN: 978-1-4613-0083-0