Educational
BEWARE of claims that have a bad basis.
Many claims about the effects of interventions are not trustworthy. Often this is because the reason (the basis) for the claim is not trustworthy. You should be careful when you hear claims that are:
• Too good to be true
• Based on faulty logic
• Based on trust alone
An overview of the Key Concepts, with some school-based examples, is available here.
Side effects of interventions are rarely reported in education. An intervention may, for example, lead to better reading scores but students may enjoy reading less as a result and thus read less.
Expect interventions to have moderate, small or trivial effects, rather than dramatic effects.
Fair comparisons of interventions provide the best basis for being confident about the effects of an intervention.
An explanation of how an intervention may work does not mean that it does work or tell us how well it works.
The fact that a possible education outcome is associated with an intervention does not necessarily mean that the intervention caused the outcome. The association or correlation could instead be due to chance or some other underlying factor.
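The point about chance and underlying factors can be illustrated with a small simulation (the scenario and all numbers are invented for illustration). Here a hypothetical confounder, family income, influences both whether a student attends an optional tutoring programme and their test score. The programme itself has no effect at all, yet attendees still score higher on average:

```python
import random

random.seed(1)

students = []
for _ in range(10_000):
    income = random.gauss(0, 1)                    # underlying factor (confounder)
    attends = income + random.gauss(0, 1) > 0      # higher income -> more likely to attend
    score = 50 + 5 * income + random.gauss(0, 5)   # income, not attendance, drives scores
    students.append((attends, score))

attend_scores = [s for a, s in students if a]
other_scores = [s for a, s in students if not a]
gap = sum(attend_scores) / len(attend_scores) - sum(other_scores) / len(other_scores)
print(f"average score gap: {gap:.1f}")  # a clear gap, despite a zero true effect
```

A naive comparison would credit the programme with several points of improvement; a fair comparison, in which similar students are allocated to each group, would show no effect.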
Claims that are based on “big data” (data from large databases) or “real world data” (routinely collected data) can be misleading. This is because routinely collected data often does not contain any information on so-called “confounders”.
Unless people receiving an intervention are compared with a similar group not receiving it, it is not possible to know what would have happened without the intervention, so it is difficult to attribute outcomes to the intervention.
The results of single studies comparing interventions can be misleading.
Widely used interventions or interventions that have been used for a long time are not necessarily beneficial.
Increasing the frequency or duration (e.g. number of weekly sessions) of an evidence-based intervention may not increase the beneficial effects, and may lead to negative effects.
People often assume that earlier detection of a learner’s problems is better than later detection. However, this will only be helpful if two conditions are met. First, an intervention known to be effective must be available. Second, receiving the intervention earlier must be more effective than receiving it later.
Conflicting interests may result in misleading claims about the effects of interventions.
Personal experiences or anecdotes (stories) are, by themselves, an unreliable basis for assessing the effects of interventions.
Opinions of experts, authorities, celebrities, or other respected individuals do not alone provide a reliable basis for deciding on the benefits and harms of interventions.
Studies that are peer-reviewed and published may not be fair comparisons.
THINK 'FAIR' and check the evidence from intervention comparisons.
Evidence from comparisons of interventions can be misleading. You should think carefully about the evidence that is used to support claims about the effects of interventions. Look out for:
• Unfair comparisons of interventions
• Unreliable summaries of comparisons
• How intervention effects are described
An overview of the Key Concepts, with some school-based examples, is available here.
If people in the intervention comparison groups differ in ways other than the interventions being compared, the apparent effects of the interventions may reflect those differences rather than the interventions themselves.
There are many different teaching methods and possible interventions but they are rarely compared to each other in the same studies.
Learners in the groups being compared should be treated similarly (apart from the interventions being compared).
If an outcome is measured differently in two comparison groups, differences in that outcome may be due to how the outcome was measured, rather than because of the intervention received by learners in each group.
Some outcomes are easy to assess, such as school attendance or GCSE grades. Others are more difficult, such as students’ attitudes to learning or their motivation. For intervention comparisons to be meaningful, outcomes should be assessed using methods that have been shown to be reliable.
It is important to measure outcomes in everyone who was included in the comparison groups.
Learners' outcomes should be counted in the group to which they were originally allocated, even if they did not receive or complete the intervention.
Reviews of intervention comparisons that do not use systematic methods can be misleading. One potential problem with non-systematic reviews is how studies are selected for review.
Unpublished results of fair comparisons may result in biased estimates of intervention effects.
Verbal descriptions of intervention effects can be misleading. An intervention effect (a change in outcomes) is a numerical concept, but it is difficult for some people to understand quantitative information about the effects of interventions.
Fair comparisons with few people or outcome events can be misleading.
Research results are based on probabilities and so there is a margin of error, called the ‘confidence interval’. It is important for the confidence interval to be reported. A research result without this information may be misleading.
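The last two points can be seen in a short calculation (all of the scores below are invented for illustration). With only ten learners per group, an apparent four-point advantage for the intervention comes with a wide approximate 95% confidence interval that includes zero, so the result is consistent both with a worthwhile benefit and with no benefit at all:

```python
import math
import statistics

# Hypothetical reading scores from a small fair comparison (n = 10 per group)
intervention = [61, 55, 67, 52, 70, 58, 64, 49, 66, 59]
control      = [57, 50, 62, 48, 65, 54, 60, 47, 63, 55]

diff = statistics.mean(intervention) - statistics.mean(control)
se = math.sqrt(statistics.variance(intervention) / len(intervention)
               + statistics.variance(control) / len(control))
low, high = diff - 1.96 * se, diff + 1.96 * se  # approximate 95% confidence interval

# The interval is wide and crosses zero, so the comparison is inconclusive
print(f"difference = {diff:.1f}, 95% CI {low:.1f} to {high:.1f}")
```

With more learners in each group, the standard error shrinks and the confidence interval narrows, which is why reports of small comparisons should always show this margin of error.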
“Statistical significance” is often confused with “importance”. The cut-off for considering a result as statistically significant is arbitrary, and statistically non-significant results can be either informative (showing that it is very unlikely that alternative interventions differ importantly) or inconclusive (showing that the relative effects of the interventions compared are uncertain).
A lack of research evidence of a beneficial or harmful effect of an intervention is not the same as evidence of “no difference”. Evidence that an intervention is not effective (or is harmful) is different from there not yet being sufficient research evidence to conclude whether it is effective (or harmful).
TAKE CARE and make good choices.
Good educational choices depend on thinking carefully about what to do. Think carefully about:
• What your problem is and what your options are
• Whether the evidence is relevant to your problem and options
• Whether the advantages outweigh the disadvantages
An overview of the Key Concepts, with some school-based examples, is available here.
A systematic review of fair comparisons of interventions should measure outcomes that are important.
The results of studies may not be applicable or transferable if the participants in studies are very different from those of interest to you.
The results of studies may not be applicable or transferable if the interventions compared are very different from those of interest.
The results of studies may not be applicable or transferable if the circumstances in which the interventions were compared are very different from those of interest.
Decisions about whether to use an intervention should be informed by the balance between the potential benefits and the potential harms, costs and other advantages and disadvantages of the approach.
Decisions about whether or not to apply interventions should be based on the strength of available evidence.