Evidence-based practices


Evidence-Based Practice

March 12, 2021.  Taxonomy of Intervention Intensity: Academic Rating Rubric

“This tool can support teams in selecting and evaluating validated interventions for small groups or individual students. Teams may consider using data available on the National Center on Intensive Intervention Academic Tools Chart and the publishers’ websites as well as results from previous implementation efforts. Each dimension will be rated on a scale of 0 – Fails to Address Standard to 3 – Addresses Standard Well. Visit the NCII website for additional information about the dimensions of the Taxonomy of Intervention Intensity and use the Intervention Plan (For Small Groups or Individual Students) to document ratings and adaptations over time.”

For additional information on this rubric, click on Taxonomy of Intervention Intensity:  Academic Rating Rubric

The purpose of this page is not to provide a comprehensive listing of, or specific recommendations for, evidence-based programs that might be adopted for public school use.  It is, instead, intended to discuss the meaning of “evidence-based” and to provide school administrators or committees with resources that might be useful in meeting the needs of general education students or in helping a child with a disability meet the goals of his or her IEP.  “Evidence-based practices” is a term replacing the NCLB and 2006 IDEA Part B requirements for scientifically research-based interventions. It refers not only to programs schools might adopt to meet academic goals but also to programs, activities, and interventions used to address behavioral/emotional needs.

Cutting to the chase, special education professionals and school administrators are unlikely to have the time or resources to apply the criteria below in thoroughly investigating every practice, program, or intervention offered up by a persuasive salesperson.  In September 2016 the US Department of Education (ED) issued Non-Regulatory Guidance to assist the states.  Review of this twelve-page document is recommended, as it offers specific guidance on how states can ensure the programs, practices, and activities they recommend for adoption in their school systems meet the new ESSA/ESEA standards.  The one resource recommended by ED and repeated in the guidance offered by the states is the What Works Clearinghouse.  While much of what the WWC has provided in the way of research predates the ESSA, it is still the best resource schools have for ensuring compliance.  Suggestions for vetting a practice or intervention are provided in the Non-Regulatory Guidance section below.

It should go without saying that the “evidence-based” standard applies only to programs, practices, and activities used in the instruction or remediation of children and NOT to the tests used by school psychologists and other special education professionals as part of a comprehensive evaluation to determine if a student has a disability.  Sometimes, however, the standards become confused in the minds of parents and even some advocates.  Nevertheless, the applicable federal standards for tests used to determine eligibility under both IDEA and Section 504 were and are the same: the tests must be used for the purposes for which they are reliable and valid.  (34 CFR 300.304(c)(1)(iii); for the Section 504 regulations, see 34 CFR 104.35(b)(1).)


Historical Background

Before there was the ESSA amending the ESEA, and before there was a requirement that schools use evidence-based practices in teaching children, there was something called scientifically based research, which was also referenced in 34 CFR 300.35 of the 2006 Final Part B Regulations.

This somewhat ambiguous term was explicitly replaced with the phrase “evidence-based” by passage of the ESSA in December 2015, which amended the ESEA first passed in 1965.  The federal definition of scientifically based research had been adopted from No Child Left Behind in 2006 for inclusion in the Final Part B Regulations.  It was removed through Technical Amendments in June 2017.

One might think that the states would have stepped up to the plate by now and offered recommended practices or recommended resources on their websites that meet the new ESEA criteria.  Some have, but many have not.  Some states are still referring their schools to pages on scientifically research-based interventions.    


What is the Difference Between Research-Based and Evidence-Based?

A potential problem is that the terms “scientifically research-based” and “evidence-based” are not synonymous. A program could claim to be research-based merely because it was built on theories supported by research.  For example, there is plenty of research showing that instruction in phonics is helpful to most children in learning to read, so any program purporting to teach phonics could rightfully claim that it was “research-based.” A practice, program, or activity claiming to be evidence-based, however, must stand on its own legs.  The fact that a particular program, practice, or activity previously met the “research-based” criterion is no guarantee that it will or can meet the evidence-based standard.

The brief explanation of research-based vs. evidence-based above is an oversimplification, of course, and there are a number of online discussions that may, if desired, provide further clarification. Odysseyware has provided the following discussion: What is the Difference between Evidence and Research?

The University of Iowa Reading Research Center offers this explanation:  Evidence-Based vs. Research-Based Interventions – an Update


The resource above also provides links to national resources that provide updated information on various programs.  The one most often cited by the states, of course, is the What Works Clearinghouse. Another resource cited was the National Center on Intensive Intervention.

Starting with a definition of evidence-based interventions, this is the legal definition from the ESSA:

Evidence-based

(A) Except as provided in subparagraph (B), the term “evidence-based”, when used with respect to a State, local educational agency, or school activity, means an activity, strategy, or intervention that—

(i) demonstrates a statistically significant effect on improving student outcomes or other relevant outcomes based on—

(I) strong evidence from at least 1 well-designed and well-implemented experimental study;

(II) moderate evidence from at least 1 well-designed and well-implemented quasi-experimental study; or

(III) promising evidence from at least 1 well-designed and well-implemented correlational study with statistical controls for selection bias; or

(ii)

(I) demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy, or intervention is likely to improve student outcomes or other relevant outcomes; and

(II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention.

(B)Definition for specific activities funded under this chapter

When used with respect to interventions or improvement activities or strategies funded under section 6303 of this title, the term “evidence-based” means a State, local educational agency, or school activity, strategy, or intervention that meets the requirements of subclause (I), (II), or (III) of subparagraph (A)(i).
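For readers who find the statutory nesting hard to follow, the tiered structure of subparagraph (A) can be summarized in a few lines of code. This is purely an illustrative sketch; the function name and inputs are hypothetical, and the design of a study is necessary but not sufficient — the statute also requires a statistically significant effect from at least one well-designed, well-implemented study.

```python
# Illustrative sketch only: maps a study design to the highest ESSA evidence
# tier that design can support, per the definition quoted above.
def essa_evidence_tier(study_design: str, controls_selection_bias: bool = False) -> str:
    design = study_design.lower()
    if design == "experimental":               # (A)(i)(I)
        return "strong evidence"
    if design == "quasi-experimental":         # (A)(i)(II)
        return "moderate evidence"
    if design == "correlational" and controls_selection_bias:  # (A)(i)(III)
        return "promising evidence"
    # Anything less falls under (A)(ii): the intervention may still qualify if it
    # demonstrates a rationale AND includes ongoing efforts to examine its effects.
    return "demonstrates a rationale"
```

Note how a correlational study only reaches the “promising” tier if it statistically controls for selection bias; without that, an intervention is back to demonstrating a rationale.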


Non-Regulatory Guidance — Principles from ED Guidance (see above)

  • While ESEA requires “at least one study” on an intervention to provide strong evidence, moderate evidence, or promising evidence, SEAs, LEAs, and other stakeholders should consider the entire body of relevant evidence.
  • Interventions supported by higher levels of evidence, specifically strong evidence or moderate evidence, are more likely to improve student outcomes because they have been proven to be effective. When strong evidence or moderate evidence is not available, promising evidence may suggest that an intervention is worth exploring. Interventions with little to no evidence should at least demonstrate a rationale for how they will achieve their intended goals and be examined to understand how they are working.
  • The relevance of the evidence – specifically the setting (e.g., elementary school) and/or population (e.g., students with disabilities, English Learners) of the evidence – may predict how well an evidence-based intervention will work in a local context (for more information, also see Part II and endnotes). SEAs and LEAs should look for interventions supported by strong evidence or moderate evidence in a similar setting and/or population to the ones being served. The What Works Clearinghouse™ (WWC) uses rigorous standards to review evidence of effectiveness on a wide range of interventions and also summarizes the settings and populations in the studies.
  • Local capacity also helps predict the success of an intervention, so the available funding, staff resources, staff skills, and support for interventions should be considered when selecting an evidence-based intervention. SEAs can work with individual and/or groups of LEAs to improve their capacity to implement evidence-based interventions.

There is actually quite a bit of “wiggle room” in the ED guidance.  But having some wiggle room doesn’t mean that this is a house without walls.

Usually it will not be incumbent upon an IEP team to independently determine whether an intervention is evidence-based when there are already so many resources online that have done it for them.  It is this writer’s assumption that in the years following passage of No Child Left Behind, most if not all school systems adopted programs that are scientifically research-based.  Most should also meet the new federal evidence-based standard at some level.  Hopefully, those same districts have made available a compendium of those resources for use by MTSS teams in providing appropriate Tier III interventions.  IEP teams, of course, are not restricted to the non-exhaustive state or local lists of vetted practices in their districts in order to meet the needs of a child with a disability.

“In all cases, placement decisions must be individually determined on the basis of each child’s abilities and needs and each child’s IEP, and not solely on factors such as category of disability, severity of disability, availability of special education and related services, configuration of the service delivery system, availability of space, or administrative convenience.”  (Final Part B Regulations, 2006)

The need to apply these new standards will arise in three scenarios: (1) the district is replacing an old series of books with a new series to address the same needs; (2) the MTSS or IEP team determines that a child with a disability has needs that current district resources will not satisfactorily address; or (3) the parent of a child with a disability provides the district with an IEE wherein the evaluator recommends adoption of a program, which the district does not possess, to meet the needs of the child.  In each of those instances, this reviewer recommends the following.

  1.  When looking for new programs to address student needs, educators should first look at the publisher’s own website to see if the evidence they cite is promising with respect to effect size when measuring progress.
  2.  District reviewers should also assume that publishers have cherry-picked the research that best supports their products, so the next step should be to look at what the WWC has actually reported.  Even when the publisher has provided links to the WWC, it should NOT be assumed that they are reporting ALL of the information available, including information that may be less than favorable.
  3.  Having determined that an intervention or program is supported by sufficient evidence, the next and final step should be to investigate whether other programs reviewed by the WWC (or already available to service providers) have produced similar results at a lower cost.  Denying a child a service because of cost is not justifiable if the service is needed for him/her to receive FAPE.  However, schools are completely justified in selecting a less expensive program that would meet a child’s needs as effectively as a more expensive one.  Even if the parents favor the more expensive option, as long as their input is considered, disagreement does NOT constitute a procedural violation.
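Step 1 above asks reviewers to judge whether cited evidence is promising “with respect to effect size.” As a quick refresher, a common effect-size statistic is Cohen’s d: the difference between the treatment and comparison group means expressed in pooled standard deviation units. The following is a minimal sketch using only the Python standard library; the score lists in the usage line are invented for illustration.

```python
import statistics

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    # Pool the two sample variances, weighting each by its degrees of freedom.
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Invented scores for illustration: d = (4 - 3) / 2 = 0.5
print(cohens_d([2.0, 4.0, 6.0], [1.0, 3.0, 5.0]))  # 0.5
```

By Cohen’s conventional benchmarks, d of about 0.2 is a small effect, 0.5 medium, and 0.8 large, which gives reviewers a rough yardstick against a publisher’s claims.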

There may be times of course when a new practice or intervention not yet reviewed by a national or state resource is considered by the team.  In that circumstance, the new practice or intervention can be vetted by the IEP team.

Image result for sample

Sample Vetting Form

This form can be used if an LEA/school is using a practice/intervention that is not listed on a national or state list of evidence-based practices.  The spacing can be expanded as necessary.

      1.  Name of the evidence-based practice: 

      2.  Level of evidence this practice meets (check one): 

_____ Strong Level of Evidence 

_____ Moderate Level of Evidence 

_____ Promising Practice 

_____ Demonstrates a Rationale 

*Note – A practice justified only by a rationale should not be used when there are other interventions intended to meet the same need(s) with a stronger evidence base.  Teams are also cautioned that a practice supported only by a rationale is the least likely to produce the desired effects.  See #5 below. 

      3.  What research study/studies have been done on this evidence-based practice? (list) 

       4.  How does the evidence-based practice you plan to use meet the criteria for that particular level of evidence (see guidance document for specific criteria for each level of evidence)? 

        5. If using a practice that falls into the category of demonstrating a rationale, attach a logic model and the methods by which research will be conducted as the practice is implemented.
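For districts that want to keep vetting results in a form that can be tracked over time, the form’s fields could also be captured as a structured record. The sketch below is hypothetical; the class and field names are illustrative and not drawn from any state form. It also encodes the caution in item 5: a rationale-only practice must come with a logic model.

```python
from dataclasses import dataclass, field

# The four evidence levels from item 2 of the form above.
EVIDENCE_LEVELS = (
    "Strong Level of Evidence",
    "Moderate Level of Evidence",
    "Promising Practice",
    "Demonstrates a Rationale",
)

@dataclass
class VettingRecord:
    practice_name: str                                 # item 1 on the form
    evidence_level: str                                # item 2
    studies: list[str] = field(default_factory=list)   # item 3
    criteria_met: str = ""                             # item 4: how the level's criteria are met
    logic_model_attached: bool = False                 # item 5

    def __post_init__(self) -> None:
        if self.evidence_level not in EVIDENCE_LEVELS:
            raise ValueError(f"Unknown evidence level: {self.evidence_level!r}")
        # Per item 5, a rationale-only practice needs a logic model and a plan
        # for examining its effects during implementation.
        if self.evidence_level == "Demonstrates a Rationale" and not self.logic_model_attached:
            raise ValueError("Attach a logic model for rationale-only practices.")
```

Validating at construction time means an incomplete rationale-only entry is rejected immediately rather than discovered later in a file review.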


Evidence-based interventions and litigation

This reviewer is unaware of any case in which the distinction between research-based and evidence-based has been a central factor, and it is unlikely to become one in the foreseeable future. However, if a school system has not used research-based or evidence-based practices to address a child’s needs in his or her IEP, that failure can become an issue.

“Under the IDEA, a school’s responsibility is to develop an IEP, in concert with the parent(s), that is reasonably calculated to provide that child with, as the Supreme Court ruled in Endrew, more than trivial benefit given that child’s unique circumstances.”  The key phrase is “reasonably calculated.”  K.D. v. Downingtown School District, Third Circuit, September 16, 2018.

In the initial hearing, “evidence-based” interventions were mentioned by the hearing officer five times, each time in partial justification of his conclusion that all of the girl’s IEPs were reasonably calculated to provide benefit.  The school in this instance had not just provided the same research-based or evidence-based interventions over and over again, either.  The IEPs were ruled to be significantly different, as related services were added and new (and different) evidence-based practices were adopted.  (For our summary and links to both the court decision and the hearing officer’s 2015 decision, go to Spedlaw/Litigation Updates/September 2018, September 21.)

In tuition reimbursement cases, parents must argue that a proffered IEP was not reasonably calculated to provide benefit.  If a school has kept ongoing records to support its counterclaims of progress, arguments over methodology (e.g., over Applied Behavior Analysis for a child with autism) may be moot.  If not, parents will usually prevail.  In the early 1990s, schools were losing more than half of their spedlaw cases regarding children on the spectrum over methodology issues.  Relying on courts to give them blind deference without providing hard data showing those interventions were productive proved to be fatal.  In those cases the reason was that parents who provided their children with those services could show progress, whereas schools for the most part had nada.  Being able to show something vs. having nothing to show is always the preferred position for a spedlaw litigant when private school tuition and attorney fees are at stake.  Nevertheless, even if a child’s progress has been minimal, the burden on parents is to show the IEP was not reasonably calculated to meet the child’s needs at the time it was written, so outcome measures, while important, are not in and of themselves definitive.

While educators are entitled to deference from the courts with respect to methodology, it is not blind deference.  And the deference owed to educators from the courts is deference given to hearing officers or State Hearing Review Officers, not to teachers or school administrators. And, of course, hearing officers and SHROs owe schools no deference at all.  Courts also routinely give hearing officers deference with respect to their assessments of witness credibility.  In the example above, the hearing officer determined the parents’ prime witness, a Ph.D. neuropsychologist, had very little credibility.  The reason was that instead of making recommendations intended to help the child, the conclusions in her evaluation were clearly intended to advance the parents’ case without any clear foundation.  Her assumption that the student could not be provided needed services by the school, made without even checking with the school, was in the hearing officer’s view “incomprehensible.”  (The contents of an IEP determine placement, and a child’s needs for services are determined by a comprehensive evaluation.  In the Downingtown case cited above, it wasn’t just the failure of the IEE evaluator to recommend evidence-based services; it apparently was her failure to recommend needed services that ultimately proved fatal to the parents’ case in the eyes of the hearing officer.)

“The evaluator made sweeping (and often wrong) assumptions about what the District can do, and then relied upon those assumptions to assist the Parents to reach a placement goal (as opposed to making programmatic recommendations).”  (Hearing Officer decision, 2015)

In short, the legal process is to evaluate, determine needs, identify evidence-based services required to meet those needs, and then determine the least restrictive placement wherein those services can be provided.  It is not to put the cart before the horse by determining placement first and then shaping one’s recommendations with that goal already in mind.

However, the main reason for using evidence-based practices should not be a fear of litigation.  Blindly relying on a publisher’s or a salesman’s assurances that their programs will meet all your district’s needs can be unwise.

Prior to NCLB, this psychologist’s school district adopted a “whole language” reading series based solely on the salesman’s promise that every child in each grade, regardless of ability or achievement level, could be taught out of the same book on the same page every day.  Even the Title I teachers were ecstatic.  The results, however, were disastrous, with the number of children scoring below the 25th percentile doubling in a single year.

The real motivation should be that evidence-based programs are the most likely to produce better results and higher test scores . . . which, not coincidentally, are what courts look at in FAPE disputes and what states rely upon in determining a school system’s excellence or lack thereof.

Despite requirements dating back to NCLB in 2001 for scientifically research-based programs, there are still a variety of websites today providing educators with a multitude of tips and activities which they may claim are evidence-based but which lack any meaningful linkage to the research they cite, if they cite any at all.  The burden for using evidence-based interventions comes from the ESSA/ESEA and applies to general education, not just special education.  So if a school relies only upon those unsupported recommendations at Tiers I and II, it should come as no surprise when a child is fast-tracked to Tier III because there has been little or no improvement.

In other words, applying interventions with a history of success and whose impact can be readily measured and documented is important at all three levels of intervention.   

This reviewer suggests that special education professionals, in formulating their recommendations in formal reports, apply that same standard.   A school psychologist 50 years ago might have written, “Based on a low score on Object Assembly, this psychologist recommends giving this student simple and then more complicated puzzles with which to practice.”  It’s a different world today with higher expectations.   

Perhaps a better historical analogy might be drawn from regulatory changes made by the state of North Carolina in 1984.  The Division for Exceptional Children determined that for most categories of eligibility, schools would have to document that they had tried at least two interventions that resulted in “No improvement” before referring a student.  A handy-dandy checklist was provided.  Among the interventions listed were “Changed the student’s seat” and “Praised the student.”  Although these were commonsense suggestions, there was absolutely no research or evidence suggesting either intervention would be expected to result in improved reading or mathematics skills.  While checking both of those interventions would meet a school’s regulatory obligations, it was no surprise when, three weeks later, the committee reconvened, marked “No improvement” on the state form, and referred the child for evaluation.  (Sometimes, if the teacher had moved the student’s desk up front months earlier and praised him daily, they’d skip the three-week wait and refer immediately.)  There was a germ of an idea even back then that had some merit: that general education should try to help a child before asking special education personnel to step in.  The inevitable problem wasn’t that schools were more likely to get sued as a result of inadequate implementation.  The problem was that referral rates remained unchanged or increased because the strategies employed just didn’t work.

 

A hodgepodge of state recommendations

While many states have posted no specific resources for adoption by schools or special education professionals, some have.  Readers again are encouraged to visit their own state websites to see what, if anything, their own SEA is recommending.  Although research-based and evidence-based practices, products, procedures, and activities are not necessarily the same, if your state only has a listing of research-based interventions, they can always be quickly re-checked to see what the WWC has said about them.  In most cases, those programs that the states have recommended will meet both the research-based and evidence-based criteria.

While the focus in some states has been on educational interventions, it should be remembered that the requirement for evidence-based interventions, or at least those interventions demonstrating the most promise, applies to behavioral/emotional interventions as well.

The following are illustrative.  Obviously the list is nowhere near exhaustive.

Maryland, as one example, offers a “small list” containing four pages of annotated research-based programs that it has recommended for use.

North Carolina only suggests other websites for recommendations.

Evidence-Based Practices