If the people in the groups being compared differ in ways other than the interventions being compared, the apparent effects of the interventions might reflect those differences rather than the interventions themselves.
Differences in the characteristics of learners in the comparison groups at the start of a comparison can make the estimated effects appear either larger or smaller than they actually are.
A method such as allocating people to different comparison groups using random numbers (the equivalent of flipping a coin) is the best way to ensure that the groups being compared are similar in terms of both measured and unmeasured characteristics.
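Random allocation can be sketched in a few lines of code. The example below is a hypothetical illustration (the learner data, group sizes, and "prior score" characteristic are invented for the sketch): it shuffles a list of learners with a random number generator and splits them into two groups, the computer equivalent of flipping a coin for each person, and then checks that a measured characteristic is similar in the two groups.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical learners, each with a measured characteristic
# (e.g. a prior-knowledge score); all numbers are illustrative.
learners = [{"id": i, "prior_score": random.gauss(50, 10)}
            for i in range(200)]

# Random allocation: shuffle with random numbers, then split in half.
random.shuffle(learners)
group_a, group_b = learners[:100], learners[100:]

mean_a = statistics.mean(p["prior_score"] for p in group_a)
mean_b = statistics.mean(p["prior_score"] for p in group_b)

# With random allocation, measured (and unmeasured) characteristics
# tend to balance out, so the group means should be close.
print(f"Group A mean prior score: {mean_a:.1f}")
print(f"Group B mean prior score: {mean_b:.1f}")
print(f"Difference: {abs(mean_a - mean_b):.1f}")
```

Note that random allocation balances groups only on average; in any single small study the groups can still differ by chance, which is why baseline characteristics are usually reported and checked.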
Be cautious about relying on the results of non-randomised intervention comparisons (for example, where the learners being compared chose which intervention they received), and be particularly cautious when you cannot be confident that the characteristics of the comparison groups were similar. If people were not randomly allocated to comparison groups, ask whether there were important differences between the groups that might have made the estimated effects appear either larger or smaller than they actually were.
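The danger of letting learners choose their own intervention can be shown with a small simulation. Everything here is an invented illustration (the "motivation" variable, the scores, and the probabilities are assumptions): the new course has a true effect of zero, but more motivated learners are more likely to choose it, and motivation also raises test scores, so a naive comparison of the two self-selected groups makes the course look effective.

```python
import random
import statistics

random.seed(7)

# Hypothetical self-selected comparison; all numbers are illustrative.
learners = []
for _ in range(2000):
    motivation = random.gauss(0, 1)
    # More motivated learners are more likely to pick the new course.
    chose_new = random.random() < (0.7 if motivation > 0 else 0.3)
    # The new course has NO true effect here: the score depends only
    # on motivation plus noise.
    score = 50 + 5 * motivation + random.gauss(0, 5)
    learners.append((chose_new, score))

new_scores = [s for chose, s in learners if chose]
old_scores = [s for chose, s in learners if not chose]
apparent_effect = statistics.mean(new_scores) - statistics.mean(old_scores)

# The true effect is 0, but self-selection makes the new course look
# better because its group was more motivated to begin with.
print(f"Apparent effect of the new course: {apparent_effect:.1f} points")
```

This is the bias the paragraph above warns about: the apparent effect reflects a difference between the groups at the start of the comparison, not the intervention itself.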
BEWARE of claims that are based on comparisons between groups that are not similar.
REMEMBER: always check whether the groups being compared are similar.