Social Welfare
Look out for results that are reported using p-values instead of confidence intervals.
In a study or a systematic review of studies, the difference in outcomes between comparison groups is the best estimate of how effective or safe an intervention was. However, because of the play of chance, the true difference may be larger or smaller than this.
Researchers often report a “confidence interval” for an effect estimate. The confidence interval is a range around the effect estimate that indicates how sure we can be about it once the play of chance is taken into account. For example, say the best estimate of an effect is that 10% more people improved compared with people who did not receive that treatment. If the confidence interval was 5% to 15% more people (a margin of error of ± 5%), we would be more confident in that estimate than if the confidence interval was −5% to 25% more people (a margin of error of ± 15%).
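For readers who want to see where such an interval comes from, here is a minimal Python sketch using made-up trial numbers (not data from any real study); the helper name risk_difference_ci is just for illustration. It computes a 95% confidence interval for the difference in the proportion of people who improved, using the normal approximation, and shows that a larger trial gives a narrower interval around a similar 10% estimate.

    import math

    def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
        # 95% confidence interval for the difference in proportions,
        # using the normal approximation.
        p_a = events_a / n_a
        p_b = events_b / n_b
        diff = p_a - p_b
        se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
        return diff, diff - z * se, diff + z * se

    # Hypothetical numbers: in both trials about 10% more people improved
    # with the treatment, but the larger trial gives a narrower interval.
    print(risk_difference_ci(110, 550, 55, 550))  # estimate 0.10, CI roughly 0.06 to 0.14
    print(risk_difference_ci(20, 100, 10, 100))   # estimate 0.10, CI roughly 0.00 to 0.20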
A p-value is another measure of the play of chance that is often reported. P-values are often misinterpreted to mean that treatments have, or do not have, important effects. For example, a p-value of 0.06 might be interpreted as indicating that there was no difference between the interventions being compared, when, in fact, it only indicates the probability of the observed difference (the effect estimate), or a bigger difference, having occurred by chance if in reality there was no difference.
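To make that distinction concrete, the sketch below (again with made-up numbers; the helper two_proportion_p_value is hypothetical) computes a two-sided p-value for a difference in proportions using a pooled z-test with the normal approximation. A p-value of about 0.07 does not show that the treatments are equivalent; it only says that a difference at least this large could plausibly have occurred by chance if there were truly no difference.

    import math

    def two_proportion_p_value(events_a, n_a, events_b, n_b):
        # Two-sided p-value for a difference in proportions
        # (pooled z-test, normal approximation).
        p_a, p_b = events_a / n_a, events_b / n_b
        pooled = (events_a + events_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Convert |z| to a two-sided p-value with the standard normal CDF.
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    # Hypothetical trial: 30/100 improved with treatment vs 19/100 without.
    # The p-value comes out at about 0.07 -- "not significant", yet the best
    # estimate is still that 11% more people improved with the treatment.
    print(two_proportion_p_value(30, 100, 19, 100))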
REMEMBER: Understanding confidence intervals may be necessary for judging the reliability of estimates of intervention effects. Whenever possible, consider confidence intervals when assessing estimates of intervention effects, and do not be misled by p-values.