Look out for intervention comparisons in which subjects' outcomes were not counted in the group to which the subjects were assigned.
Deciding by chance (randomly) which subject gets which intervention – something like pulling names out of a hat or flipping a coin – helps to ensure that the subjects in the comparison groups are similar before they receive the interventions.
However, sometimes not all the subjects receive the intervention to which they were assigned (for example, if weather conditions mean that an intervention cannot be applied in some cases). Excluding the subjects who did not receive the allocated intervention may mean that like is no longer being compared with like. To keep the comparison groups similar when the results are analysed, and the comparison fair, every subject should be counted in the group to which they were assigned, whether or not they actually received the intervention.
For example, if some plants in a comparison of two herbicide methods are destroyed in error, the destroyed plants should still be counted within their assigned groups; otherwise, one of the methods may appear better than it really is.
Subjects who do not receive the intervention to which they were assigned may make the intervention appear less effective than it would have been had they all received it. But counting them in their assigned groups reduces the chance of misleading results caused by dissimilar comparison groups.
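The difference between the two ways of counting can be sketched with a small worked example. The numbers below are hypothetical (they are not from the text): two herbicide methods with 100 plants assigned to each, where 20 plants in group B on the hardest-to-treat plots were never treated and yielded no successes. Counting everyone in their assigned group ("intention-to-treat") keeps the comparison fair; dropping the untreated plants from the denominator makes method B look spuriously better.

```python
def itt_rate(successes, assigned):
    """Intention-to-treat: the denominator is everyone assigned to the group,
    whether or not they actually received the intervention."""
    return successes / assigned

def excluding_rate(successes, assigned, not_treated):
    """Dropping the subjects who did not receive the allocated intervention
    shrinks the denominator and can flatter the group's apparent success rate."""
    return successes / (assigned - not_treated)

# Hypothetical figures: 100 plants per group. Group A: all treated, 50 successes.
# Group B: 20 plants on the hardest plots never treated (no successes among them);
# 44 successes among the 80 that were treated.
a_itt = itt_rate(50, 100)                 # 0.50
b_itt = itt_rate(44, 100)                 # 0.44 -> B correctly looks no better than A
b_excl = excluding_rate(44, 100, 20)      # 0.55 -> B spuriously looks better than A
```

Because the untreated plants were the hardest cases, excluding them removes likely failures from group B only, so the two groups are no longer alike and the comparison is no longer fair.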