Randomization is frequently misunderstood or neglected by preclinical investigators. Using a data set typical of preclinical swine models, I show how improper randomization of treatment allocation adversely affects hypothesis tests and the underlying null distributions of the test statistics. Simulations were used to examine the effects of true randomization (completely randomized design, restricted randomization, randomized complete blocks) versus pseudo-randomization (alternation, false “blocking”) on error estimates and F-distributions in the presence of a systematic trend. True randomization and blocking protected against the systematic trend, whereas pseudo-randomization caused the reference distribution to collapse. Thus, no meaningful inferential test can be based on non-random ‘designs’. Both investigators and analysts must be made aware that hypothesis tests based on non-randomized data are both biased and invalid.
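The contrast between true randomization and alternation can be illustrated with a minimal null simulation. This sketch assumes a hypothetical linear trend across processing order and uses a pooled two-sample t-test; the sample size, trend magnitude, and allocator functions are illustrative choices, not the designs or data of the study itself.

```python
import math
import random

def t_statistic(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def rejection_rate(allocate, n_units=20, n_sims=2000, seed=1):
    """Type I error rate under the null when a linear trend runs across units."""
    rng = random.Random(seed)
    crit = 2.101  # two-sided 5% critical value for t with 18 df
    hits = 0
    for _ in range(n_sims):
        # Systematic trend over processing order; no true treatment effect.
        y = [0.5 * i + rng.gauss(0, 1) for i in range(n_units)]
        labels = allocate(n_units, rng)
        a = [y[i] for i in range(n_units) if labels[i] == 0]
        b = [y[i] for i in range(n_units) if labels[i] == 1]
        if abs(t_statistic(a, b)) > crit:
            hits += 1
    return hits / n_sims

def randomized(n, rng):
    """Completely randomized design: shuffle a balanced label vector."""
    labels = [0] * (n // 2) + [1] * (n // 2)
    rng.shuffle(labels)
    return labels

def alternating(n, rng):
    """Pseudo-randomization by alternation: A, B, A, B, ..."""
    return [i % 2 for i in range(n)]

print(rejection_rate(randomized))   # should land near the nominal 0.05
print(rejection_rate(alternating))  # far below nominal: the reference distribution collapses
```

Under alternation, the trend is perfectly balanced between groups, so the between-group contrast shrinks while the within-group variance is inflated by the trend; the test statistic piles up near zero instead of following its reference t-distribution, which is the collapse described above.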