Abstract:
|
Many modern astronomical surveys use maximum-likelihood (ML) methods to fit models when extracting photometry from images. We show that these ML estimators systematically overestimate the flux, with a bias that depends on the signal-to-noise ratio of the source and the number of model parameters involved in the fit. The bias is substantially worse for resolved sources: while a 1% bias is expected for a 10-sigma point source, a 10-sigma resolved galaxy modeled with a simplified Gaussian profile suffers a 2.5% bias. The bias also behaves differently depending on how multiple bands are used in the fit: fitting all bands simultaneously distributes the flux bias roughly evenly among them, while fixing the position in “non-detection” bands (i.e., forced photometry) yields flux estimates in those bands that are biased low, compounding the bias in derived colors. We show that these effects are present in idealized simulations, in outputs from the Hyper Suprime-Cam fake object pipeline (SynPipe), and in observations from Sloan Digital Sky Survey Stripe 82, implying that they affect numerous astronomical datasets in wide use today.
|
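A minimal Monte Carlo sketch (Python, illustrative only, not the paper's pipeline) of the flux bias described above: it fits a 1D Gaussian source at S/N = 10 by least squares (the ML solution for Gaussian noise, here via scipy.optimize.curve_fit), once with only the centroid free ("point-like") and once with the width also free ("resolved"). The toy model, grid size, and S/N definition are all assumptions; in this 1D toy the biases come out smaller than the 2D values quoted in the abstract, but the qualitative trend, that more free shape parameters means more bias, is the same.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.arange(-10.0, 10.0)          # 20-pixel 1D "image" (toy assumption)
true_flux, true_sigma = 1.0, 2.0

def profile(xx, flux, x0, sigma):
    # Gaussian source profile; `flux` is the integrated flux
    return flux * np.exp(-0.5 * ((xx - x0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

model_true = profile(x, true_flux, 0.0, true_sigma)
# Per-pixel noise chosen so the matched-filter S/N of the source is 10
sigma_pix = np.sqrt(np.sum(model_true ** 2)) / 10.0

fluxes_point, fluxes_resolved = [], []
for _ in range(5000):
    data = model_true + rng.normal(0.0, sigma_pix, x.size)
    # "point-like" fit: flux and centroid free, width held at the true value
    p_pt, _ = curve_fit(lambda xx, f, x0: profile(xx, f, x0, true_sigma),
                        x, data, p0=[1.0, 0.0])
    # "resolved" fit: flux, centroid, and width all free
    p_res, _ = curve_fit(profile, x, data, p0=[1.0, 0.0, true_sigma])
    fluxes_point.append(p_pt[0])
    fluxes_resolved.append(p_res[0])

# Both mean fluxes come out above the truth; the fit with the extra free
# shape parameter (width) is biased higher, as the abstract claims.
print(f"point-like flux bias: {100.0 * (np.mean(fluxes_point) / true_flux - 1.0):+.2f}%")
print(f"resolved flux bias:   {100.0 * (np.mean(fluxes_resolved) / true_flux - 1.0):+.2f}%")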