Abstract:
|
Benchmarks for statistics education emphasize statistical thinking principles: data as evidence, data quality, the ubiquity of variation, and statistical reasoning. However, effectiveness, as measured by the competencies of non-statistician end-users, has rarely been assessed. I performed a gap analysis comparing benchmarks for competency (ASA/MAA; GAISE; Oster et al. 2018) and reproducibility (ARRIVE 2.0) against an audit of ~700 published preclinical articles on models of mouse cancer, rodent sepsis/inflammation, and swine haemorrhage control. Results confirmed that “cookbook statistics” are universal. Few studies clearly described study design items or provided statistical justification for sample sizes and analyses. Technical randomisation was almost unknown. All studies were overly reliant on multiple hypothesis tests to interpret biological significance, none reported effect sizes, and almost all studies were ‘positive’. The quality of preclinical research is poor, and current approaches to health science statistics education fail to promote good-quality science. Emphasis on probability and maths should be replaced by “statistics as process” training for non-statistician end-users.
|