
Don’t ignore chance effects

Professor Mike Clarke writes in response to a Marcus Munafò 'Research' column.

18 December 2024

I welcome Marcus Munafò's recent dispelling of myths around randomisation (November 2024) but would suggest a slightly different take on two of the things he mentions.

Firstly, he notes that if randomisation works well, then the difference in outcomes between the randomised groups 'must have been caused by the exposure we manipulated'. However, it's also important to remember DICE: Don't Ignore Chance Effects [see Clarke, M. (2021). The true meaning of DICE: don't ignore chance effects. Journal of the Royal Society of Medicine, 114(12), 575-7.]. 

Any difference, even a statistically significant one, might simply be due to chance. Because the randomisation process creates the groups to be compared by chance, it is always possible that a difference in outcomes between the two groups arises from that chance allocation rather than from any difference in the effects of the interventions being compared.
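As an illustrative aside (my own addition, with all parameter choices assumed rather than taken from the letter), a quick simulation shows how often purely random allocation produces a 'significant' difference when the two interventions are in fact identical: at a conventional 5 per cent threshold, roughly one trial in twenty will do so by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_trials = 10_000    # number of simulated trials (assumed value)
n_per_arm = 100      # participants per randomised arm (assumed value)
false_positives = 0

for _ in range(n_trials):
    # Both arms are drawn from the SAME distribution: the interventions
    # have identical effects, so any observed difference is chance alone.
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    _, p = stats.ttest_ind(control, treatment)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' differences with no true effect: "
      f"{false_positives / n_trials:.1%}")  # expect roughly 5%
```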

Secondly, although I agree that using p-values to test for imbalance in baseline variables in a trial's Table 1 is usually pointless, it may be worth checking the p-values of any variables for which the randomisation method was designed to force balance, for example through stratification or minimisation.

If those variables are significantly different, the randomisation method failed.
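For readers curious what such a check might look like in practice, here is a minimal sketch (my own illustration, using made-up data and a chi-squared test as one possible choice): it asks whether a hypothetical stratification factor, here labelled 'site', is distributed across the two arms as a stratified randomisation should guarantee.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical Table 1 data: allocation arm and a baseline variable ('site')
# that the randomisation was stratified by, so it should be closely balanced.
df = pd.DataFrame({
    "arm":  ["A"] * 60 + ["B"] * 60,
    "site": (["North"] * 30 + ["South"] * 30) * 2,
})

# Cross-tabulate the stratification variable against allocation arm.
table = pd.crosstab(df["site"], df["arm"])
chi2, p, dof, expected = chi2_contingency(table)

print(table)
print(f"Chi-squared p-value: {p:.3f}")
# With stratified randomisation this variable should be (near) perfectly
# balanced; a clearly significant p-value here would suggest the
# randomisation method has not worked as intended.
```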

Professor Mike Clarke
Northern Ireland Methodology Hub, Queen's University Belfast