Nutrition
Look out for results that are reported using p-values instead of confidence intervals.
In a study or a systematic review of studies, the difference in outcomes between the intervention and comparison groups is the best estimate of how effective or safe an intervention was (called the effect estimate). However, because of the play of chance, the true difference may be larger or smaller than this estimate.
Researchers often report a “confidence interval” for an effect estimate of a nutrition intervention. The confidence interval is a range around the effect estimate that indicates how sure we can be about the estimate once the play of chance is taken into account. For example, say the best effect estimate for a group of people receiving a nutrition intervention is that 10% fewer people developed diabetes compared to people who did not receive the intervention. If the confidence interval for this effect estimate of “10% fewer people” was 5% to 15% fewer people (a margin of error of ±5%), we would be more confident in it than if the confidence interval was wider, such as 25% fewer to 5% more people developing diabetes (a margin of error of ±15%).
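The arithmetic behind an interval like this can be sketched with a short calculation. The sketch below uses made-up trial numbers (100 of 1,000 people in the intervention group and 200 of 1,000 in the comparison group developing diabetes) and the common normal-approximation formula for a 95% confidence interval around a risk difference; it is illustrative only, not how any particular study computed its interval.

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference between two groups with an approximate 95%
    confidence interval (normal approximation; illustrative only)."""
    p_a = events_a / n_a
    p_b = events_b / n_b
    diff = p_a - p_b
    # Standard error of the difference between two proportions.
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, diff - z * se, diff + z * se

# Hypothetical trial: 100/1000 with the intervention vs 200/1000 without.
diff, lo, hi = risk_difference_ci(100, 1000, 200, 1000)
print(f"risk difference: {diff:.1%}, 95% CI: {lo:.1%} to {hi:.1%}")
```

With these made-up numbers the best estimate is 10% fewer people, and the interval runs from roughly 13% fewer to 7% fewer, so the margin of error here is about ±3%. Larger studies shrink the standard error and therefore narrow the interval.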
A p-value is another measure of the play of chance that is often reported. P-values are often misinterpreted to mean that nutrition interventions have, or do not have, important effects. For example, a p-value of 0.06 might be interpreted as indicating that there was no difference between the interventions being compared, when in fact it only indicates the probability that a difference as large as the one observed (the effect estimate) would have occurred by chance if, in reality, there was no difference.
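To make that interpretation concrete, here is a minimal sketch of how a two-sided p-value arises from a z statistic under the normal approximation. The z value of 1.9 is a made-up example chosen so the p-value lands just above the conventional 0.05 cut-off.

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value from a z statistic (normal approximation)."""
    # Standard normal cumulative probability via the error function.
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# Hypothetical result: the observed difference is 1.9 standard errors
# away from "no difference".
p = two_sided_p_from_z(1.9)
# p is about 0.057: the observed difference would be fairly unlikely
# if there were truly no difference, yet p > 0.05 does NOT show that
# the difference is zero.
print(round(p, 3))
```

Note that the p-value says nothing about the size of the effect or how precisely it was estimated, which is exactly the information a confidence interval provides.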
REMEMBER: Understanding a confidence interval may be necessary to understand the reliability of estimates of intervention effects. Whenever possible, consider confidence intervals when assessing effect estimates of nutrition interventions, and do not be misled by p-values.