Look out for results that are reported using p-values instead of confidence intervals.
In a study or a systematic review of studies, the difference in outcomes between comparison groups is the best estimate of how effective or safe a treatment was. However, because of the play of chance, the true difference may be larger or smaller than this.
Researchers often report a “confidence interval” for an effect estimate. The confidence interval is a range around the effect estimate that indicates how sure we can be about it once the play of chance is taken into account. For example, say the best estimate is that 10% fewer people became ill among those who received a treatment than among those who did not. If the confidence interval was 5% to 15% fewer people (a margin of error of ±5%), we would be more confident in that estimate than if the confidence interval was 25% fewer to 5% more people becoming ill (a margin of error of ±15%).
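The arithmetic behind such an interval can be sketched with hypothetical trial numbers chosen to match the example above (120 of 600 treated people becoming ill versus 180 of 600 untreated people — these figures are illustrative assumptions, not from any real study), using the standard normal-approximation confidence interval for a difference between two proportions:

```python
import math

# Hypothetical numbers chosen to match the example in the text:
# 120 of 600 treated people became ill (20%) vs 180 of 600
# untreated (30%) - a best estimate of 10% fewer people ill.
ill_treat, n_treat = 120, 600
ill_control, n_control = 180, 600

p_treat = ill_treat / n_treat
p_control = ill_control / n_control
diff = p_treat - p_control  # risk difference (negative = fewer ill)

# Standard error of a difference between two proportions
# (normal approximation; reasonable for samples this large).
se = math.sqrt(p_treat * (1 - p_treat) / n_treat
               + p_control * (1 - p_control) / n_control)

# 95% confidence interval: estimate +/- 1.96 standard errors.
lower, upper = diff - 1.96 * se, diff + 1.96 * se
print(f"Risk difference: {diff:.1%}")        # -10.0%
print(f"95% CI: {lower:.1%} to {upper:.1%}")  # -14.9% to -5.1%
```

With these made-up numbers the interval runs from about 15% fewer to about 5% fewer people becoming ill; a smaller trial would give a wider, less reassuring interval around the same 10% estimate.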
A p-value is another measure of the play of chance that is often reported. P-values are often misinterpreted to mean that treatments do or do not have important effects. For example, a p-value of 0.06 might be interpreted as showing that there was no difference between the treatments being compared, when in fact it only indicates the probability that the observed difference (the effect estimate), or a bigger one, would have occurred by chance if in reality there was no difference.
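That definition — the probability of seeing a difference at least as big as the observed one purely by chance, assuming no real difference — can be illustrated with a simple simulation. All the numbers here are assumptions for illustration: two groups of 100 people, everyone with the same 25% risk of becoming ill, and a hypothetical study that reported a 10% difference:

```python
import random

random.seed(1)

# Hypothetical setup: assume NO real difference - everyone has the
# same 25% risk of becoming ill, whichever group they are in.
n = 100            # people per group (a small trial)
true_risk = 0.25
trials = 20_000    # number of simulated trials

observed_diff = 0.10  # the difference the (hypothetical) study reported

# Count how often chance alone produces a difference at least this big.
at_least_as_big = 0
for _ in range(trials):
    ill_a = sum(random.random() < true_risk for _ in range(n))
    ill_b = sum(random.random() < true_risk for _ in range(n))
    if abs(ill_a - ill_b) / n >= observed_diff:
        at_least_as_big += 1

p_value = at_least_as_big / trials
print(f"Simulated p-value: {p_value:.2f}")
```

In a trial this small, chance alone produces a 10% difference roughly one time in ten — so such a result is weak evidence either for or against an effect, which is exactly why a confidence interval is more informative than the p-value on its own.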
REMEMBER: Understanding confidence intervals may be necessary to judge the reliability of estimates of treatment effects. Whenever possible, consider confidence intervals when assessing estimates of treatment effects, and do not be misled by p-values.