Test Info

Testing Information

PowerPoint Presentation by John Willis and Ron Dumont on Using Test Scores to Document Progress (Added on July 28, 2016) *


A Year’s Growth in a Year’s Time?

Track Meet Analogy (Ron’s version)


Testing Information (Very General) *

Why are full-scale or total scores on tests of intelligence sometimes lower (or higher) than any of the subscale scores? Or, why is the whole not the sum of its parts?

Ron and John offered the following explanation in the paper below.

For a more specific discussion of why WJ IV GIA scores sometimes appear higher or lower than their composites, click on the link below to Kevin McGrew’s website.
Why GIA scores sometimes appear higher or lower

And for a simple explanation of why IQ scores are not good predictors of achievement, see John Willis’s
How Can a Person’s Reading Score Be Higher than Their IQ (7/16/2016) (This brief article was written in partial refutation of a prosecutor’s unusual argument that if someone could read, he or she could not be intellectually disabled.)

Errors on Cognitive Assessments Administered by Graduate Students and Practicing School Psychologists *

Scoring Errors (Introduction by John Willis). I recently made minor contributions to an article being submitted for publication: “Wechsler Administration and Scoring Errors Made by Graduate Students and School Psychologists” by Erika Rodger and Ron Dumont. Dr. Rodger had the opportunity, working as a teaching assistant in graduate assessment courses over several years, to review a whole raft of WISCs and WAISs inflicted on unsuspecting victims by master’s and doctoral candidates, and she managed to collect a bunch of Wechsler scales administered in real life by practicing psychologists. Her detailed, carefully analyzed, and thoughtfully and clearly discussed findings are not cause for optimism.

It is for us, the evaluators, to be dedicated to the unfinished work of administering and scoring tests accurately. It is for us to be here dedicated to the great task remaining before us: of reading directions and items exactly as written in the manual, of recording all responses verbatim, of using the manual to score items correctly, of recording elapsed times and adhering to time limits, of awarding bonus points correctly, of performing simple arithmetic accurately, of looking up and recording scores accurately (using straightedges as needed), of verifying that we entered raw scores correctly in computerized scoring programs, and of copying scores correctly into our reports. It is for us, the evaluators, to take increased devotion to the cause of accurate testing and reporting so that our examinees shall not have been tested in vain.

My take on some of Dr. Rodger’s data was that experienced examiners sometimes seem to think that their personal judgment is more valid than the normative procedures. We must do better. (Additional bibliography follows.)
Erika Rodger, a student of Ron Dumont, completed a dissertation documenting the kinds of errors made by both experienced school psychologists and graduate students. In her introduction to the dissertation, Erika writes,

Cognitive assessments are prevalent in U.S. history and policy, and are still very widely used for a variety of purposes. Individuals are trained on the administration and interpretation of these assessments, and upon completion of a program it should be assumed that they are able to complete an assessment without making administrative, scoring, or recording errors. However, an examination of assessment protocols completed by students as well as practicing school psychologists reveals that errors are the norm, not the exception. The purpose of this study was to examine errors committed by both master’s- and doctoral-level students on three series of cognitive assessments as well as errors made by practicing school psychologists.

To read her dissertation, click on:
Alfonso, V., Johnson, A., Patinella, L., & Rader, D. (1998). Common WISC-III examiner errors: Evidence from graduate students in training. Psychology in the Schools, 35, 119-125.
Belk, M., LoBello, S., Ray, G., & Zachar, P. (2002). WISC-III administration, clerical, and scoring errors made by student examiners. Journal of Psychoeducational Assessment, 20, 290-300.
Brazelton, E., Jackson, R., Buckhalt, J., Shapiro, S., & Byrd, D. (2003). Scoring errors on the WISC-III: A study across levels of education, degree fields, and current professional positions. Professional Educator, 25(2), 1-8.
Conner, R., & Woodall, R. E. (1983). The effects of experience and structured feedback on WISC-R error rates made by student examiners. Psychology in the Schools, 20, 376-379.
Egan, P., McCabe, P., Semenchuk, D., & Butler, J. (2003). Using portfolios to teach test scoring skills: A preliminary investigation. Teaching of Psychology, 30(3), 233-235.
Erdodi, L., Richard, D., & Hopwood, C. (2009). The importance of relying on the manual: Scoring error variance in the WISC-IV vocabulary subtest. Journal of Psychoeducational Assessment, 27, 374-385.
Lee, D., Reynolds, C. R., & Willson, V. L. (2003). Standardized test administration: Why bother? Journal of Forensic Neuropsychology, 3, 55-81. doi:10.1300/J151v03n03_04
Loe, S., Kadlubek, R., & Marks, W. (2007). Administration and scoring errors on the WISC-IV among graduate student examiners. Journal of Psychoeducational Assessment, 25, 237-247.
Patterson, M., Slate, J., Jones, C., & Steger, H. (1995). The effects of practice administrations in learning to administer and score the WAIS-R: A partial replication. Educational and Psychological Measurement, 55, 32-37.
Sherrets, S., Gard, G., & Langner, H. (1979). Frequency of clerical errors on WISC protocols. Psychology in the Schools, 16(4), 495-496.
Slate, J. R., & Chick, D. (1989). WISC-R examiner errors: Cause for concern. Psychology in the Schools, 26, 74-84.
Slate, J. R., & Jones, C. H. (1990). Student error in administering the WISC-R: Identifying problem areas. Measurement and Evaluation in Counseling and Development, 23, 137-140.
Slate, J. R., & Jones, C. H. (1993). Evidence that practitioners err in administering and scoring the WAIS-R. Measurement and Evaluation in Counseling and Development, 20(4), 156-162.
Slate, J. R., Jones, C. H., Coulter, C., & Covert, T. L. (1992). Practitioners’ administration and scoring of the WISC-R: Evidence that we do err. Journal of School Psychology, 30, 77-82.
Slate, J. R., Jones, C. H., Murray, R. A., & Coulter, C. (1993). Evidence that practitioners err in administering and scoring the WAIS-R. Measurement and Evaluation in Counseling and Development, 25, 156-161.
Warren, S. A., & Brown, W. G. (1972). Examiner scoring errors on individual intelligence tests. Psychology in the Schools, 9, 118-122.

Miscellaneous Topics includes previously published articles on a variety of related subjects, including ADHD, child sexual abuse, and other school-related subjects such as corporal punishment, grade-level retention, and school psychologist ratios *

Miscellaneous Topics

Wechsler Individual Achievement Test II. Thirteen (more or less) comparisons, charts, and mini articles. Click on the link below for the complete table of contents. *

http://www.myschoolpsychology.com/WIAT II.pdf

Reports and Critiques *

Reports and Critiques

Test Reviews

Woodcock Johnson III *


Dumont Willis Extra Easy Evaluation Battery *


Ten Top Problems with Normed Tests for Very Young Children *

Ten Top Problems

Test Score Descriptions *


Relating Assessment Results to Accommodations.  Republished guidance from the NICHCY website (closing down on 9/30/2014). *


Scoring Errors Necessitate Double Checking (John Willis) *

Scoring Errors Necessitate Double Checking Protocols

Partial Bibliography of Test Reviews by Ron Dumont, John Willis, Kate Viezel, and Jamie Zibulsky *

Encyclopedia of Special Education
Encyclopedia of Special Education 2