Agricultural
Look out for results that are reported using p-values instead of confidence intervals.
In a study or a systematic review of studies, the difference in outcomes between comparison groups is the best estimate of how effective an intervention was. However, because of the play of chance, the true difference may be larger or smaller than this.
Researchers often report a “confidence interval” for an effect estimate. The confidence interval is a range around the effect estimate that indicates how sure we can be about that estimate once the play of chance is taken into account. For example, say the best estimate of an effect is a 10% increase in yield compared to crops that did not receive the intervention. If the confidence interval were a 5% to 15% yield increase (a margin of error of ±5%), we would be more confident in that estimate than if the confidence interval were a 5% yield decrease to a 25% yield increase (a margin of error of ±15%).
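To make this concrete, here is a minimal sketch, using made-up plot yields and a simple normal approximation, of how an effect estimate and an approximate 95% confidence interval for a difference in mean yield might be computed. The data and numbers are hypothetical, purely for illustration:

```python
import math

# Hypothetical yields (tonnes/ha) from treated and control plots.
treated = [5.6, 5.9, 6.1, 5.4, 6.3, 5.8, 6.0, 5.7]
control = [5.1, 5.4, 5.0, 5.6, 5.2, 5.3, 4.9, 5.5]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    # Sample variance (n - 1 in the denominator).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Best estimate of the effect: the difference in mean yield.
diff = mean(treated) - mean(control)

# Standard error of the difference (unpooled, normal approximation).
se = math.sqrt(variance(treated) / len(treated) + variance(control) / len(control))

# Approximate 95% confidence interval: estimate +/- 1.96 standard errors.
low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"Effect estimate: {diff:.2f} t/ha, 95% CI: ({low:.2f}, {high:.2f})")
```

The narrower the interval (the smaller the standard error), the more confident we can be in the estimate, which is exactly the comparison made in the paragraph above.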
A p-value is another measure of the play of chance that is often reported. P-values are often misinterpreted as showing that interventions have, or do not have, important effects. For example, a p-value of 0.06 might be interpreted as indicating that there was no difference between the interventions being compared, when in fact it only indicates the probability that the observed difference (the effect estimate), or a bigger one, would have occurred by chance if in reality there was no difference.
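Continuing with the same hypothetical yields, the sketch below shows how such a p-value might be computed with a two-sample t-test, assuming SciPy is available. The point is what the number does and does not mean, not the specific test:

```python
from scipy import stats

# Hypothetical yields (tonnes/ha) from treated and control plots.
treated = [5.6, 5.9, 6.1, 5.4, 6.3, 5.8, 6.0, 5.7]
control = [5.1, 5.4, 5.0, 5.6, 5.2, 5.3, 4.9, 5.5]

# Welch's t-test: the p-value is the probability of a difference at least
# this large arising by chance IF the true difference were zero.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

# A p-value above 0.05 does NOT show the groups are the same, and a p-value
# below 0.05 says nothing about how large or important the effect is.
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Reporting the effect estimate together with its confidence interval conveys both the size and the precision of the effect; a p-value on its own conveys neither.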
REMEMBER: Understanding confidence intervals may be necessary to judge the reliability of estimates of intervention effects. Whenever possible, consider confidence intervals when assessing estimates of intervention effects, and do not be misled by p-values.