Abstract:
|
Every week, the Centers for Disease Control and Prevention (CDC) monitors counts of more than 75 notifiable diseases. To detect aberrations in the numbers of reported cases, detection methods must be robust across different diseases and allow for endemic variation. To compare methods, we injected stochastic lognormal-distributed signals into each of 12 selected diseases' weekly time series of newly reported case counts from the CDC's National Notifiable Diseases Surveillance System. We used provisional data (before end-of-year reconciliation with state health departments) from 2006-2010 as baseline and from 2011-2014 for testing. We compared the Historical Limits Method (HLM) to a method derived from a quasi-Poisson regression model (the England method), testing each method with both 1- and 4-week baseline data units. Both methods allowed for seasonal effects by calculating empirical thresholds from corresponding weeks in past years' data. At a 2% background alert rate, mean sensitivity for signal detection ranged from 25-78% for short signals (peaking at 1-2 weeks) and from 50-88% for long signals (peaking at 3-5 weeks). With 1-week data units, sensitivities to detect short signals were higher and alerting delays were shorter than with 4-week data units for both methods. The England method outperformed HLM regardless of signal length and data-unit width.
|
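The Historical Limits Method compared above can be illustrated with a minimal sketch. As commonly described, HLM compares the current count against historical counts from corresponding periods in previous years and flags an alert when the current value exceeds the historical mean plus a multiple of the standard deviation. The function name, the two-sigma multiplier, and the example data below are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of a Historical Limits Method style detector: compare the
# current period's count against historical baseline counts (e.g., the 15
# totals from the same, preceding, and following periods over 5 prior years)
# and flag when it exceeds mean + k * standard deviation. The threshold
# multiplier k=2.0 is an assumed convention for this illustration.
import statistics

def hlm_flag(current_count, baseline_counts, k=2.0):
    """Return True if current_count exceeds mean + k*sd of the baseline."""
    mean = statistics.mean(baseline_counts)
    sd = statistics.stdev(baseline_counts)
    return current_count > mean + k * sd

# Hypothetical baseline: 15 historical period totals for one disease.
baseline = [12, 15, 11, 14, 13, 16, 12, 10, 14, 15, 13, 11, 12, 14, 13]
print(hlm_flag(30, baseline))  # aberrantly high count -> True
print(hlm_flag(14, baseline))  # within historical limits -> False
```

Under this scheme, using 1-week rather than 4-week data units simply changes what each element of `baseline_counts` and `current_count` aggregates, which is the comparison the abstract reports.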
Copyright © American Statistical Association.