Labelling results as ‘statistically significant’ or ‘non-significant’ can be misleading; results should always be interpreted in the context of the aim being investigated.
Statistical significance addresses the possibility that the results of a study arose by chance; i.e., it tests against the situation where, in reality, there is no difference between the groups, but the data appear to suggest one.
A common threshold used for this judgment is a probability (p-value) of less than 5% (p < 0.05). This means that, if there were truly no difference between the groups, results at least as extreme as those observed would be expected less than 5% of the time.
However, this statement says nothing about how important the effect is in a clinical setting. A small, unimportant effect can still be “statistically significant”, and the reverse (a large, important difference) can be deemed “statistically non-significant”. For example, a trial may show that one antihypertensive drug consistently lowers blood pressure by 1 mmHg more than its competitor. This difference could be statistically significant, but it is unlikely to make a clinical difference to our patients, and so it is not clinically significant.
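The blood-pressure example can be illustrated with a small simulation. The sketch below (all numbers are hypothetical: the sample size, the 8 mmHg standard deviation, and the 1 mmHg true difference are assumptions chosen for illustration) shows that in a very large trial, a clinically trivial 1 mmHg difference can easily produce p < 0.05. It uses Welch's two-sample t statistic with a normal approximation for the p-value, which is reasonable at this sample size.

```python
import math
import random

def two_sample_t(a, b):
    """Welch's t statistic with an approximate two-sided p-value.

    For large samples the t distribution is close to a standard normal,
    so p = erfc(|t| / sqrt(2)) is a good approximation.
    """
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    t = (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)
    p = math.erfc(abs(t) / math.sqrt(2))
    return t, p

random.seed(42)
n = 10_000  # a very large (hypothetical) trial arm

# Change in blood pressure (mmHg): drug A truly lowers BP by 1 mmHg
# more than drug B on average, with 8 mmHg patient-to-patient spread.
drug_a = [random.gauss(-11.0, 8.0) for _ in range(n)]
drug_b = [random.gauss(-10.0, 8.0) for _ in range(n)]

diff = sum(drug_a) / n - sum(drug_b) / n
t, p = two_sample_t(drug_a, drug_b)
print(f"mean difference = {diff:+.2f} mmHg, p = {p:.2g}")
```

With tens of thousands of patients, the standard error shrinks until even a ~1 mmHg difference yields a tiny p-value: statistically significant, yet clinically unimportant.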
REMEMBER: Statistical significance is not the same as real-world importance. Do not be misled by this phrase.