In logistic regression, separation occurs when a linear combination of predictors perfectly discriminates the binary outcome. Because finite maximum likelihood estimates of the parameters do not exist under separation, Bayesian regressions with informative shrinkage priors on the regression coefficients offer a suitable alternative. Efficiency in estimating the regression coefficients may also depend on the choice of intercept prior, yet relatively little attention has been paid to whether and how to shrink the intercept parameter. In this talk we focus on a class of alternative prior distributions for the intercept that down-weight implausibly extreme regions of the parameter space, rendering the regression estimates less sensitive to separation. Relative to diffuse priors, the proposed priors generally yield more efficient estimators of the regression coefficients when the data are nearly separated, and equally efficient estimators in non-separated datasets, making them suitable for default use. We demonstrate these properties through extensive simulation studies.
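As a concrete illustration of why separation breaks maximum likelihood and why shrinkage restores finite estimates, the following NumPy sketch fits a logistic model to a perfectly separated toy dataset by gradient ascent. The dataset, step count, learning rate, and the unit-precision Gaussian prior are illustrative choices only; they are not the priors studied in the talk.

```python
import numpy as np

# Toy dataset with complete separation: y = 1 exactly when x > 0,
# so the likelihood increases without bound as the slope grows.
x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
y = (x > 0).astype(float)
X = np.column_stack([np.ones_like(x), x])  # design matrix: intercept + slope

def fit(X, y, prior_prec=0.0, steps=50_000, lr=0.1):
    """Gradient ascent on the logistic log-posterior.

    prior_prec = 0 gives plain maximum likelihood; prior_prec > 0 adds
    an independent N(0, 1/prior_prec) prior on each coefficient."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # fitted probabilities
        grad = X.T @ (y - p) - prior_prec * beta  # log-posterior gradient
        beta += lr * grad
    return beta

mle = fit(X, y)                     # slope diverges: no finite MLE exists
shrunk = fit(X, y, prior_prec=1.0)  # Gaussian shrinkage keeps it finite
print("ML slope (still growing):", mle[1])
print("shrunken slope:", shrunk[1])
```

Under the flat prior the slope estimate simply grows with the number of iterations, since the log-likelihood has no finite maximizer; the Gaussian-prior (ridge-type MAP) fit converges to a stable finite value.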