Look out for results that are reported using p-values only and do not report confidence intervals (CI).
In a study or a systematic review, the difference in outcomes between comparison groups is the best estimate of how effective a treatment was. However, because of the play of chance, the true difference may be larger or smaller than the difference observed.
A p-value can indicate how likely it is that an observed treatment effect (or Effect estimate) arose by chance, rather than from the intervention. However, a p-value on its own tells us nothing about the size of the effect or the precision of the estimate; confidence intervals convey this additional information.
Researchers often also report a “confidence interval” for an Effect estimate. The CI is a range around the Effect estimate that indicates how sure we can be about it, taking into account the play of chance. For example, say the best estimate of an effect from running a language group is that 10% fewer parents report they are concerned about their child’s language, compared to families who did not attend the group therapy. If the CI was 15% to 5% fewer parents (an error margin of ±5%), we would be more confident in that estimate than if the CI was 25% fewer to 5% more parents reporting concern (an error margin of ±15%).
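To see where such a range comes from, here is a minimal sketch of how a 95% CI for a difference in proportions can be calculated using the standard normal approximation. The group sizes and proportions below are hypothetical, chosen only to mirror the “10% fewer parents” example; they are not taken from any real study.

```python
import math

def diff_in_proportions_ci(p1, n1, p2, n2, z=1.96):
    """95% CI for a difference in proportions (normal approximation).

    p1, p2: proportion with the outcome in each group
    n1, n2: number of participants in each group
    z: critical value (1.96 for a 95% confidence level)
    """
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    margin = z * se  # the "error margin" around the estimate
    return diff - margin, diff + margin

# Hypothetical example: 30% of 200 language-group parents report concern,
# versus 40% of 200 parents in the comparison group (a 10% difference).
low, high = diff_in_proportions_ci(0.30, 200, 0.40, 200)

# With four times as many participants per group, the same 10% difference
# gets a narrower CI — the estimate becomes more precise.
low_big, high_big = diff_in_proportions_ci(0.30, 800, 0.40, 800)
```

Running the sketch shows the effect of sample size directly: with 200 per group the CI is roughly 19% fewer to 1% fewer parents, while with 800 per group it narrows to roughly 15% fewer to 5% fewer, the tighter error margin described above.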
REMEMBER: Understanding a CI is necessary to understand the reliability of estimates of treatment effects. Do not be misled by p-values, especially when CIs are not reported.