We review the development of continuous shrinkage priors since the introduction of the Bayesian lasso. The field has grown to include many prior distributions, some of which improve sparse signal recovery while others account for structure among coefficients. Throughout, we connect the development of prior distributions to the development of penalties in the optimization literature. We start with a general review of the advantages and disadvantages of interpreting penalties as prior distributions. We then consider specific priors designed to improve recovery of sparse vectors, such as the horseshoe, normal-gamma, and Dirichlet-Laplace priors. Several recent papers have provided theoretical justifications for these priors, so we review the scope and interpretation of these theoretical results. Finally, we review a different set of priors designed to improve recovery of sparse and structured vectors, including the Bayesian fused lasso. For structured priors, estimation of unknown hyperparameters and posterior computation are especially challenging; accordingly, we focus on computational strategies for these priors and their limitations in practice.
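The penalty–prior correspondence mentioned above can be made concrete with a minimal numerical sketch (all variable names and simulated data here are hypothetical, chosen for illustration): under a Gaussian likelihood with independent Laplace priors on the coefficients, the negative log posterior equals the lasso's penalized least-squares objective up to an additive constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])  # hypothetical sparse truth
y = X @ beta_true + rng.normal(size=n)

sigma2 = 1.0  # noise variance, assumed known for this illustration
lam = 1.0     # lasso penalty weight; maps to Laplace prior scale 1/lam

def lasso_objective(b):
    # Penalized least squares: ||y - Xb||^2 / (2 sigma^2) + lam * ||b||_1
    return np.sum((y - X @ b) ** 2) / (2 * sigma2) + lam * np.sum(np.abs(b))

def neg_log_posterior(b):
    # Gaussian likelihood plus independent Laplace(0, 1/lam) priors,
    # dropping additive constants that do not depend on b.
    neg_log_lik = np.sum((y - X @ b) ** 2) / (2 * sigma2)
    neg_log_prior = lam * np.sum(np.abs(b))
    return neg_log_lik + neg_log_prior

# The two objectives agree at any coefficient vector, so the lasso
# estimate is the posterior mode (MAP) under the Laplace prior.
b = rng.normal(size=p)
assert np.isclose(lasso_objective(b), neg_log_posterior(b))
```

This identity underlies the Bayesian lasso: the frequentist lasso solution is recovered as a posterior mode, while the full posterior additionally quantifies uncertainty.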