Archived

The articles and documents below range from ten to more than 40 years old.  They are included here partly for historical purposes, but also because, while the statistics and test-specific information are outdated, some of the discussion and background information remains relevant even today.

Archived Documents


Public Law 94-142 — Education of the Handicapped Act (Updated 12/15/2018)

Public Law 94-142 started it all back in 1975, although it wasn’t until 1977 that the U.S. Department of Education issued its first final regulations implementing the Act.  The Act can be accessed in its entirety by clicking on the preceding link.  OSERS published its very first regulations for the EHA (now the IDEA) on August 23, 1977. A copy of the Federal Register containing those regulations may be found by clicking on 1977 Final Regulations Implementing Part B of the Handicapped Act of 1975.

This is a searchable copy, and readers may search for or scroll down to page 42474 to begin their perusal of this historic document.  Although subsequent amendments have tinkered with the language, adding some substantive rights and clarifying others, the core principles have been there from the beginning.


The Congressional definition of a specific learning disability was essentially the same then as it is today, despite some minor changes in wording.

Congress had charged the Education Department with the responsibility for developing eligibility criteria for the SLD category.  The final operational definition of SLD as a severe discrepancy between ability and achievement did not come until several months later, in the FEDERAL REGISTER, VOL. 42, NO. 250— THURSDAY, DECEMBER 29, 1977.  The Department’s final eligibility criteria, however, were so broad that virtually every state had to come up with its own.  Adding to the confusion over the next 40-plus years was OSEP’s assertion that those state criteria were just guidelines not carrying the force of law, putting the ball back in the LEAs’ court.

At first, however, the Department of Education (ED) had taken its responsibility more literally.  It was in the draft regulations offered in 1976 that ED expressed the notion that the identification of a specific learning disability should be based on a “50 percent discrepancy” and proposed that school systems nationwide use a Bond-Tinker formula based on grade equivalents.  Both of these suggestions were deleted from the 1977 final Part B regulations (link above) but still were incorporated into various state regulations.  New York, for example, required schools to find a “50 percent discrepancy” as a criterion for LD eligibility, whereas many states adopted one version or another of the originally proposed Bond-Tinker formula.  ED’s original ideas in the draft regs for SLD criteria (never carrying the force of law but establishing a historical precedent) can be found by using the “find” function to search for page 53404 or by scrolling down to page 53404 of the Federal Regulations for November 29, 1976.
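For readers who have never seen it, the Bond-Tinker reading-expectancy formula referenced above is usually stated in grade-equivalent units. The sketch below is purely illustrative: the function name is ours, the example numbers are invented, and the way states operationalized a “50 percent discrepancy” against this expectancy varied considerably.

```python
# Illustrative sketch only (our function name, invented numbers): one common
# statement of the Bond-Tinker reading-expectancy formula in grade-equivalent
# (GE) units. How states measured a "50 percent discrepancy" against this
# expectancy varied, so no single cutoff rule is shown as definitive.

def bond_tinker_expectancy(years_of_instruction: float, iq: float) -> float:
    """Expected reading GE = years of reading instruction * (IQ / 100) + 1.0"""
    return years_of_instruction * (iq / 100.0) + 1.0

# Example: a student with 4 years of reading instruction and an IQ of 90
expected_ge = bond_tinker_expectancy(4, 90)   # 4.6 GE expected
actual_ge = 2.3                               # obtained reading GE
print(f"expected {expected_ge:.1f} GE, obtained {actual_ge:.1f} GE")
```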

 

Ron Anderson’s Home Page — Evaluating LEP children

Ron Anderson was a school psychologist living in Las Vegas, Nevada.  One of his specialties was bilingual assessment.  On his Home Page, he provided readers with provocative suggestions on how to evaluate bilingual students with a variety of social histories and how to interpret the results of those evaluations.  The research underpinning some of his recommendations was never replicated to our knowledge and should not be considered definitive.  Also, the last update provided online by Mr. Anderson was in 2004, and his website went dark shortly after that.  At least one post dates back to 1989.  Still, all of the articles below were thought-provoking, and links to them are re-posted here for our readers’ enjoyment.

OUTLINE OF THE ASSESSMENT PROCESS provides an overview of the recommended process to follow when assessing a student’s eligibility for Special Education, from the pre-referral stage to the IEP stage.
IDEA ORAL LANGUAGE PROFICIENCY CHART OF SCORES is a chart which converts assigned “levels” to derived standard scores, in order to compare other tests to the language proficiency test score.
BILINGUAL IQ CONVERSION CHART is provided in order to understand how the “Bilingual” student’s scores on standardized tests of verbal and verbal-related skills compare to those of monolingual peers.  (Editor’s Note: This chart is available by going to the Measuring Intelligence page and scrolling down. GM)
QUANTIFYING EXCLUSIONARY FACTORS shows the effects of these factors on scores obtained during assessment.  The draft copy is available as of 9/19/2000.
ORAL LANGUAGE PRE-REQUISITE SKILLS NECESSARY FOR SUCCESS AT GRADE LEVELS is new as of 4/19/01; a companion for use with possible special education referrals is coming.

 

The DWI (1) and DWI (2)

Editor’s Note: For the complete articles, click on the links in the headings below.

USE OF THE TELLEGEN AND BRIGGS FORMULA TO DETERMINE THE
DUMONT-WILLIS INDEXES (DWI-1 & DWI-2) FOR THE WISC-IV

 

In this short paper, we provide two alternative composite scores, which are derived, respectively, from the three subtests that enter the VCI and the three subtests that enter the PRI and from the four subtests that enter the WMI and the PSI. We refer to these composites as the Dumont-Willis Indexes (DWIs) in order to distinguish them from the traditional ten-subtest Full Scale IQ, which includes both the six VCI and PRI subtests and the four subtests (i.e., Digit Span, Letter-Number Sequencing, Coding, and Symbol Search) that are not as highly correlated with verbal and non-verbal intelligence as are the six other Verbal and Perceptual subtests, and which load on independent factors in the four-factor solution of the WISC-IV. The Dumont-Willis Indexes separate the six subtests that are stronger measures of verbal and non-verbal intelligence from the other four subtests.

The DWI-1 score is a six-subtest composite that excludes subtests which load on the WMI and PSI.
The DWI-2 score is a four-subtest composite that includes subtests which load on the WMI and PSI.
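For readers curious about the arithmetic, the Tellegen and Briggs procedure converts a sum of scaled scores (subtest mean 10, SD 3) to a composite with mean 100 and SD 15, using the average intercorrelation among the contributing subtests. The sketch below is a rough illustration, not the published DWI tables: the intercorrelation value is a placeholder, whereas the actual tables were built from WISC-IV standardization data.

```python
import math

# Hedged sketch of the Tellegen & Briggs (1967) composite procedure used to
# build composites such as the DWI-1 and DWI-2 from WISC-IV scaled scores
# (subtest mean = 10, SD = 3; composite mean = 100, SD = 15). The
# intercorrelation below is a placeholder, NOT the WISC-IV standardization
# value on which the actual DWI tables were based.

def tellegen_briggs_composite(scaled_scores, mean_intercorrelation):
    k = len(scaled_scores)
    total = sum(scaled_scores)
    mean_of_sum = 10.0 * k
    # SD of the sum of k subtests with common SD 3 and average intercorrelation r:
    # 3 * sqrt(k + k*(k - 1)*r)
    sd_of_sum = 3.0 * math.sqrt(k + k * (k - 1) * mean_intercorrelation)
    return 100.0 + 15.0 * (total - mean_of_sum) / sd_of_sum

# Example: six VCI + PRI scaled scores for a hypothetical DWI-1
print(round(tellegen_briggs_composite([12, 11, 13, 10, 12, 11],
                                      mean_intercorrelation=0.45)))  # ~110
```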

Using the DWI or GIA

Since Ron and I are allergic to the blind application of numerical discrepancy formulae to determine whether a student has a specific learning disability [9, 10], it was not our intent when Ron generated the DWI-1 and DWI-2 tables to generate new numbers to plug into such formulae. Rather, we hoped, following the lead of Kaufman, Prifitera, Saklofske, Tulsky, Wilkins, Weiss, and others [4, 5, 6, 7, 8] to assist evaluators in communicating their analyses of examinees’ WISC-IV scores. The concern with the inclusion of low-g “processing” scores in the FSIQ long predates the revisions made in the WISC-IV [1]. Even with the earliest Wechsler scales, some of us simply prorated scores for groups of tests [11] to avoid the “Mark Penalty” [9, p. 174; 12]. Weighting the contributions of (sub)tests by their g loading or correlation with the total score (the WJ III model, cited by one recent poster) does not eliminate the contribution of low-g, so-called “processing” (sub)tests. That procedure merely diminishes their contribution in proportion to their lack of relationship to the total score or g. Colin Elliott’s approach [13], rather than weighting subtests on the basis of g loadings, was to include in the total score for the DAS only subtests with relatively high g loadings and only three CHC factors: Gc, Gv, and Gf. However, even that approach does not eliminate the potential for the Mark Penalty [9, p.174; 12]. A student might, for example, have visual perception weaknesses that make an occupational therapist weep and depress scores on Gv tests dramatically and scores on some Gf tests moderately. For such a student, the total score would disguise both strengths and weaknesses.
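The point about weighting can be seen in a toy calculation. In the sketch below the weights and scores are invented for illustration (they are not WJ III g loadings), and the rescaling is simplified rather than properly normed; it simply shows that a low-g subtest given a small weight still moves the total, just less than the high-g subtests do.

```python
# Toy illustration with invented weights and scores (not WJ III g loadings):
# weighting subtests by g loading shrinks, but does not remove, the influence
# of a low-g "processing" subtest on the total score. The rescaling below is a
# simplified weighted mean of z-scores, not a properly normed composite.

def weighted_composite(scaled_scores, weights):
    z = [(s - 10.0) / 3.0 for s in scaled_scores]               # scaled scores -> z
    weighted_mean_z = sum(w * zi for w, zi in zip(weights, z)) / sum(weights)
    return 100.0 + 15.0 * weighted_mean_z                       # IQ-like metric

weights = [0.7, 0.7, 0.7, 0.3]          # three "high-g" subtests, one "low-g" subtest
print(weighted_composite([12, 12, 12, 12], weights))   # ~110
print(weighted_composite([12, 12, 12, 4], weights))    # ~105: the low-g subtest still pulls the total down
```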

Dumont/Salerno Severe Discrepancy Estimator

The 2006 Part B Final Regulations were explicit in requiring every school district within a state to apply that state’s standards for identifying a child as SLD within their schools.  Some states, like NC, used a straight discrepancy formula that didn’t rely upon or correct for regression, and others, like Washington, had handy-dandy charts that provided cutoffs based on an approximate discrepancy.  Additionally, as some states and many school systems have switched from an ability/achievement discrepancy model to an age/achievement discrepancy model (a.k.a. RTI), discrepancy formulas have become moot.  We’re preserving this simple Excel Estimator in large part because of the statistical discussion of regression provided by the late Hubert Lovett.  That discussion may be accessed by clicking on the Dumont Salerno Severe Discrepancy Estimator.
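For readers who want only the gist of the regression discussion, the usual regression-corrected approach predicts achievement from ability using the ability/achievement correlation and then judges the shortfall of obtained achievement below that prediction against the standard deviation of such differences. The sketch below uses placeholder values for the correlation and the cutoff; it is not the Estimator’s own formula or values.

```python
import math

# Hedged sketch of a regression-corrected severe-discrepancy check of the
# general kind Lovett's statistical discussion addresses; not the Estimator's
# own formula or values. Both scores are on a mean-100, SD-15 metric. The
# correlation (r) and the cutoff (in SD units) are placeholders.

def severe_discrepancy(ability, achievement, r=0.60, cutoff_sd=1.5):
    predicted = 100.0 + r * (ability - 100.0)           # regression toward the mean
    sd_of_difference = 15.0 * math.sqrt(1.0 - r * r)    # SD of achievement around the prediction
    shortfall = predicted - achievement                 # how far achievement falls below prediction
    return shortfall >= cutoff_sd * sd_of_difference, predicted, shortfall

flagged, predicted, shortfall = severe_discrepancy(ability=120, achievement=92)
print(predicted, shortfall, flagged)   # 112.0 20.0 True
```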

WISC IV Technical Report 4

The WISC IV is now outdated, the current version of this test being the WISC V.  In general use, the guidance and statistics in this report replaced the Dumont-Willis Indexes.  While the stats are now outdated, the discussion of when and how the GAI might be used is still relevant. The authors wrote:

“This report provides information about the derivation and
uses of the General Ability Index (GAI). The GAI is a composite
score that is based on 3 Verbal Comprehension and 3
Perceptual Reasoning subtests, and does not include the
Working Memory or Processing Speed subtests included in the
Full Scale IQ (FSIQ)”

The complete 20-page copyrighted report may be downloaded by clicking on WISC IV Technical Report 4.

 

Wechsler Intelligence Scale for Children IV
Frequently Asked Questions (copyright 2010 Pearson Education)

“Compared to the WISC–III, the WISC–IV FSIQ deemphasizes crystallized knowledge (Information is supplemental) and increases the contribution of fluid reasoning (Matrix Reasoning and Picture Concepts), working memory (Letter–Number Sequencing), and Processing Speed (both Coding and Symbol Search). The WISC–IV FSIQ is comprised of all 10 subtests that comprise the four index scores, including additional measures of working memory and processing speed. The WISC–III FSIQ included only one measure of processing speed and one measure of working memory in the FSIQ.”

Pearson’s FAQs for the WISC IV may be accessed by clicking on WISC IV FAQs.

Wechsler Intelligence Scale for Children IV  Supplemental Technical Manual

Editor’s Note: The WISC IV has been replaced by the WISC V.  Ethically, psychologists are required in most instances to adopt the most recently normed version of an intelligence test within a year or two of its publication.  APA Ethics (2002) stated:

Standard 9.08 of the Ethical Principles of Psychologists and Code of Conduct (APA, 2002) states the following:
(a) Psychologists do not base their assessment or intervention decisions or recommendations on data or test results that are outdated for the current purpose.
(b) Psychologists do not base such decisions or recommendations on tests and measures that are obsolete and not useful for the current purpose.

It is generally held that a test becomes “outdated” within one or two years of the revised test’s publication.

Readers may download this 90-page copyrighted supplemental manual by clicking on wisc IV technical manual.

“This supplementary document provides the results of the special group studies with other
measures that were collected as part of the WISC–V standardization but not reported in the
WISC–V Technical and Interpretative Manual (Tech Manual). Results from these studies provide
practitioners additional information about the construct and ecological validity of the WISC–V
subtest, process, and composite scores. The Tech Manual provides information on the relation
between intellectual and cognitive abilities as measured by the WISC–V with other tests in
typically developing children. This information illustrates the cognitive skills associated with
developing academic skills, psychosocial development, and behavioral regulation. However, the
studies in the Tech Manual do not indicate how impairments in cognitive ability may impact
functioning in other psychosocial domains. The studies reported in this supplement provide some
information about the impact of cognitive deficits on aspects of academic performance, adaptive
functioning, and behavioral issues in children with known neurodevelopmental disorders.”

Woodcock Johnson III

This test was completely updated and revised in 2014.  Readers are referred to our page on the Woodcock Johnson IV for current information.  There was, however, an abundance of research information available on the III, and the discussions therein may still have some relevance to current users of the WJ IV.  Let us start with some references closer to home.

Dumont and Willis on the Woodcock Johnson III

Ron Dumont and John Willis provided the readers of their website with 65 pages of information regarding the WJ III on the WJ III Page from their now defunct website.  As they wrote, these are unofficial observations, none of which were reviewed or approved by the test authors.  The subject matter is varied, but the links in the informal table of contents are to subject headings on the page itself.  Most of their material that was still relevant was moved to this website in 2014 when their website was closed down.  Some material like this was not transferred because new tests had made much of the information outdated but, as previously noted, much of the discussion continues to be relevant.  Hence its inclusion here.  (Click on the previous link to view the content.)

 

Assessment Bulletins for the WJ III

The following were official publications offering additional information regarding the WJ III.  Note: ASBs on the WJ III NU refer to a 2007 update of the WJ III norms based on a reanalysis of the original normative data using the Census Bureau’s final 2000 census statistics (released in 2005).  The test itself was not changed, nor were the new norms based on new testing.

Assessment Bulletin No. 1 Comparative Features of the WJ III® Tests of Cognitive Abilities and the Wechsler Intelligence Scales
Dawn Flanagan

“This document includes five tables that compare the WJ III Tests of Cognitive Abilities
(WJ III COG; Woodcock, McGrew, & Werder, 2001) to the Wechsler intelligence scales,
including the Wechsler Adult Intelligence Scale–Third Edition (WAIS-III; Wechsler,
1997a), the Wechsler Intelligence Scale for Children–Third Edition (WISC-III;
Wechsler, 1991), and the Wechsler Preschool and Primary Scale of Intelligence–Revised
(WPPSI-R; Wechsler 1989). These tables make comparisons along a number of different
dimensions, including content features (Table 1), administration features (Table 2),
interpretation features (Table 3) and technical features (Table 4). Table 5 presents the
broad and narrow cognitive abilities that underlie the WJ III COG and Wechsler batteries
according to the well-validated Cattell-Horn-Carroll theory of cognitive abilities (CHC
theory). This information may assist practitioners in test interpretation and provide insight
into variation in test performance.”

Assessment Bulletin No. 2 Technical Abstract
Erick LaForte, Kevin McGrew, Frederick Schrank

“The WJ III is based on current theory and research on the structure of human
cognitive abilities. The theoretical foundation of the WJ III is derived from the Cattell-Horn-Carroll
theory of cognitive abilities (CHC theory). Two major empirically derived
sources of research on the structure of human cognitive abilities informed the
development of the WJ III batteries.”

ASB No. 3  Use of the WJ III® Discrepancy Procedures for Learning Disabilities Identification and Diagnosis
Nancy Mather and Frederick Schrank

“Discrepancy scores obtained from the WJ III are actual discrepancies, not estimated
discrepancies, because the WJ III allows for direct comparisons of actual scores between
measures. These comparisons are not possible when scores are obtained from different
batteries (i.e., not co-normed). Because all norms for the WJ III COG and the WJ III ACH
are based on data from the same sample, examiners can report discrepancies between and
among an individual’s WJ III scores without using estimated discrepancies. The WJ III
discrepancy procedures are psychometrically preferable to estimated discrepancies for at
least two important reasons. First, the WJ III discrepancies do not contain the errors
associated with estimated discrepancies (estimated discrepancy procedures do not control
for unknown differences that exist when using two tests based on different norming
samples). Second, the discrepancy procedures used by the WJ III Compuscore® and Profiles
Program (Schrank & Woodcock, 2001) incorporate specific regression coefficients
between all predictor and criterion variables at each age level to provide the best estimates
of the population characteristics”

ASB No. 4 Calculating Ability/Achievement Discrepancies  Between the WISC III and the WJ  III Tests of Achievement
Frederick Schrank and Kirk Becker

“Ideally, ability/achievement discrepancy scores should be obtained using actual
discrepancy norms from co-normed cognitive and achievement tests, such as the
Woodcock-Johnson III Tests of Cognitive Ability (WJ III COG) (Woodcock, McGrew, &
Mather, 2001b) and the Woodcock-Johnson III Tests of Achievement (WJ III ACH)
(Woodcock, McGrew, & Mather, 2001a). However, some tests used to calculate
ability/achievement discrepancies are not co-normed. When using a pair of tests that are
not co-normed, such as the WISC-III and the WJ III ACH, ability/achievement
discrepancy scores must be corrected for regression to the mean”

ASB No. 5. Comparative Features of Comprehensive Achievement Batteries
Vincent Alfonso and Dawn Flanagan

“This document* compares the major comprehensive achievement batteries along a
number of different dimensions. It includes seven tables that compare the following
elements of the tests: select content features (Table 1), administration procedures
(Table 2), levels and types of interpretation (Table 3), technical characteristics
(Table 4), academic abilities measured according to the Cattell-Horn-Carroll (CHC)
theory (Table 5), coverage of learning disability assessment areas found in the
Individuals with Disabilities Education Act (IDEA) (Table 6), and the variation in
individual task characteristics (Table 7). The batteries compared in this document are
listed below. This document also briefly summarizes Tables 1 through 7.”

ASB No. 6 Calculating Discrepancies Between the WJ III GIA-Std Score and Selected WJ III Tests of Cognitive Abilities Clusters
Frederick Schrank, Kevin McGrew, and Richard Woodcock

“To obtain the Intra-Cognitive–Standard discrepancy scores,
Tests 1 through 7 must be administered. To obtain the Intra-Cognitive–Extended
discrepancy scores, Tests 1 through 7 and 11 through 17 must be administered. The
Intra-Cognitive–Extended discrepancy procedure is the most useful for this purpose,
because it compares seven CHC broad abilities, each measured by two qualitatively
different tests. The pattern of strengths and weaknesses identified by these procedures
can be used to help identify the primary cognitive processing correlates of a learning
disability”

 

ASB No. 7 Specification of the Cognitive Processes Involved in Performance on the WJ III
Frederick Schrank

“The Woodcock-Johnson III Normative Update Technical Manual (WJ III NU Technical
Manual) (McGrew, Schrank, & Woodcock, in press) includes a discussion of the CHC
abilities, particularly the narrow abilities, with reference to a number of experimental
studies and related models of human information processing derived from cognitive and
neuroscience research. This bulletin serves as a prepublication form of that discussion.
Because many of the WJ III tests are similar (and in some cases identical) to tasks used
in both classic and contemporary cognitive and neuroscience research studies, the
similarities between the WJ III tests and information-processing research tasks provide
a basis for making inferences about the cognitive processes required in the WJ III tests.”

ASB No. 8 Educational Interventions Related to the Woodcock-Johnson III Tests of Achievement
Barbara Wendling, Frederick Schrank, Ara Schmitt

“Information gleaned from performance on the WJ III ACH can be useful for
developing instructional interventions, particularly when limited proficiency is
identified in a narrow ability and/or associated with a specific cognitive process. To
provide a link between WJ III ACH test performance and academic interventions, this
bulletin includes an outline of the narrow abilities defined by CHC theory and brief
descriptions of the cognitive processes required for performance in each of the tests;
suggested educational interventions that are conceptually related to the narrow abilities
and cognitive processes are included (see Table 1). The bulletin is organized according
to key areas of reading, writing, math, and oral language instruction and includes a
discussion of evidence-based interventions in each area.”

ASB No. 9  Normative Update Score Differences – What the User Can Expect 
Kevin McGrew, David Daily, Frederick Schrank

“The Woodcock-Johnson III Normative Update (WJ III® NU) (Woodcock, McGrew, Schrank,
& Mather, 2001, 2007) is a recalculation of the normative data for the Woodcock-Johnson
III (WJ III) (Woodcock, McGrew, & Mather, 2001), based on the final 2000 U.S. census
statistics (U.S. Census Bureau, 2005). The final 2000 census data are reflected in the
norms provided by the WJ III Normative Update Compuscore® and Profiles Program
(Compuscore) (Schrank & Woodcock, 2007) and in the documentation provided in
the WJ III Normative Update Technical Manual (McGrew, Schrank, & Woodcock, 2007).
The WJ III NU norms replace the original WJ III norms, which were based on the U.S.
Census Bureau’s 2000 census projections issued in 1996 (Day, 1996).”

ASB No. 10.  Educational Interventions and Accommodations Related to the Woodcock-Johnson III Tests of Cognitive Abilities and the
Woodcock-Johnson III Diagnostic Supplement to the Tests of Cognitive Abilities
Frederick Schrank, Barbara Wendling

“This bulletin relates the Woodcock-Johnson III Tests of Cognitive Abilities (WJ III® COG) and the Woodcock-Johnson III Diagnostic Supplement to the Tests of Cognitive Abilities (DS) to educational interventions and accommodations. The Cattell-Horn-Carroll (CHC) broad and narrow abilities and descriptions of the cognitive processes required for performance on each test provide the theoretical and conceptual bases for suggested links between the WJ III COG and DS and a number of evidence-based instructional interventions. Research discussed in this bulletin suggests that the CHC abilities (and, by inference, their constituent cognitive processes) are related to specific academic abilities. Consequently, educational interventions or accommodations that address related cognitive limitations may be foundational to improved performance in academic areas where learning difficulties are manifested.”

ASB No. 11 Development, Interpretation, and Application of the W Score
and the Relative Proficiency Index
Lynn Jaffe

“Four levels of interpretive information are provided by the Woodcock-Johnson III (WJ III)
batteries (Mather & Woodcock, 2001; Woodcock, 1999), including qualitative
information, level of development, degree of proficiency, and relative standing in a group.
The four levels of test information are cumulative; that is, each level provides different
information about a person’s test performance, and each successive level builds on
information from the previous level. Information from one level is not interchangeable
with information from another. For example, standard scores cannot be used in place of
age or grade equivalents, or vice versa. Consequently, to interpret and describe a person’s
performance completely, information from all four levels must be considered.”

ASB No. 12 Use of the Woodcock-Johnson III NU Tests of Cognitive Abilities and Tests of Achievement with Canadian Populations

“This bulletin examines the use of the Woodcock-Johnson III Normative Update (WJ III NU) Tests of Cognitive Abilities and Tests of Achievement with a random sample of 310 school-age Canadian students. Results were compared with a matched sample of U.S. subjects selected from the WJ III NU standardization sample using WJ III NU norms. While some minor score differences are reported across the two samples, the study findings generally support the use of the U.S.-based WJ III NU norms with Canadian school-age populations.”

Assessment of Limited English Proficient (LEP) Children

The documents below are more than fifteen years old.  However, the legal and ethical principles in evaluating children who speak English as a second language (ESL) remain valid and applicable today.

The Use of Tests as Part of High Stakes Testing (OCR, 2000)

“As used in this resource guide, “high-stakes decisions” refer to decisions with important consequences for individual students. Education entities, including state agencies, local education agencies, and individual education institutions, make a variety of decisions affecting individual students during the course of their academic careers, beginning in elementary school and extending through the post-secondary school years. Examples of high-stakes decisions affecting students include: student placement in gifted and talented programs or in programs serving students with limited-English proficiency; determinations of disability and eligibility to receive special education services; student promotion from one grade level to another; graduation from high school and diploma awards; and admissions decisions and scholarship awards.”

Referring and Evaluating English Language Learners

 

The primary author of this paper was Cecilia Lee, as part of a project initiated by the Exceptional Children’s Division of the North Carolina Department of Education.  It was originally hosted on a web page by the North Carolina School Psychology Association but has since been taken down.  It is also still listed on our Sped Resources page, in part because the guidance is still on point but also because of the optional forms attached.  Although all children are entitled by law to a comprehensive evaluation, such an evaluation is especially important when assessing an ESL/LEP student.  The attached forms provide a structure ensuring that the facts relevant to making an eligibility decision are obtained and considered by the eligibility group.

“Determining the appropriateness of referring an English Language Learner (ELL) to the special
education referral committee is a difficult decision in light of the student’s limited proficiency
in English, amount of formal education, and potential cultural differences. Care must be taken to
determine whether learning and behavior problems demonstrated by the student indicate a
disability or, instead, are a manifestation of language, cultural, experiential, and/or
sociolinguistic differences. Historically, language-minority students have been over represented in
special education classes and a number of lawsuits were the result of misdiagnosis and placement
of ELL students in special education. Several states in the United States (including North Carolina
and South Carolina) are currently under a Federal “watch list” to monitor the issue of
disproportionality of minorities and ELL students placed in special education.”