Statistical significance is often confused with importance, but the two are not the same. In intervention comparisons, statistical significance refers to the probability that a difference as large as the one observed (the effect estimate), or larger, would have occurred by chance if in reality there was no difference.
A result is judged “statistically significant” when this probability is small, that is, when the observed difference would be unlikely if chance alone were at work. The usual threshold for this judgement is a probability of less than 5% (0.05).
However, statistical significance does not tell us anything about how important an effect is. A small, unimportant effect can be “statistically significant” if the study is large enough. Similarly, a large, important effect can be “statistically non-significant” if the study is too small to detect it reliably.
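The point above can be illustrated with a small numerical sketch. The code below runs a standard two-proportion z-test (a common, but here illustrative, choice; the trial sizes and success rates are invented for the example) on two hypothetical comparisons: a clinically trivial difference measured in an enormous trial, and a large difference measured in a tiny one.

```python
from math import sqrt, erfc

def two_prop_p_value(successes_a, n_a, successes_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return erfc(z / sqrt(2))  # two-sided normal tail probability

# Tiny, unimportant effect (50.0% vs 50.5% success) in a huge trial:
p_small_effect = two_prop_p_value(100_000, 200_000, 101_000, 200_000)

# Large, potentially important effect (50% vs 70% success) in a tiny trial:
p_large_effect = two_prop_p_value(10, 20, 14, 20)

print(f"tiny effect, huge trial:  p = {p_small_effect:.4f}")   # below 0.05
print(f"large effect, tiny trial: p = {p_large_effect:.4f}")   # above 0.05
```

The trivial 0.5-percentage-point difference comes out “statistically significant”, while the 20-point difference does not, purely because of the sample sizes involved.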
REMEMBER: Claims that results were significant or non-significant usually mean that they were statistically significant (unlikely to have arisen by chance alone) or statistically non-significant (plausibly due to chance). This is not the same as important or unimportant. Do not be misled by such claims.