Locally adaptive shrinkage in the Bayesian framework provides one way to continuously relax discrete selection problems. We present extensions of the Horseshoe prior framework that arise from mixing over both the scale and the shape parameters in the hierarchical specification of the model. Mixing on the shape parameter yields both sharper spike-and-slab behavior and a way to model ultra-sparse signals. The reduction in risk comes from a closer approximation to the hard-thresholding rule that gives rise to discrete selection. As with other local-global priors, these models have non-convex, multimodal posterior distributions. This multimodality, driven in particular by the infinite spike at the origin, makes the models difficult to fit with off-the-shelf methods such as Gibbs samplers or EM algorithms. To address these problems, we implement a new MCMC algorithm with mode-switching jumps, akin to performing Stochastic Search Variable Selection for continuous local-global shrinkage models.
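For orientation, recall the standard horseshoe hierarchy, which mixes only over local and global scales:
\[
\beta_j \mid \lambda_j, \tau \;\sim\; \mathcal{N}\!\left(0, \lambda_j^2 \tau^2\right),
\qquad \lambda_j \;\sim\; \mathcal{C}^{+}(0,1),
\qquad \tau \;\sim\; \mathcal{C}^{+}(0,1).
\]
The half-Cauchy on $\lambda_j$ can itself be written as a scale mixture, $\lambda_j^2 \mid \nu_j \sim \mathcal{IG}(1/2,\, 1/\nu_j)$ with $\nu_j \sim \mathcal{IG}(1/2,\, 1)$, in which the shape parameter $1/2$ is held fixed. The extensions considered here can be read against this template: they mix over the shape of the hierarchy as well as its scale.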
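To make the mode-switching idea concrete, the following is a minimal, illustrative sketch, not the algorithm developed in this paper, for a one-dimensional normal-means posterior. It assumes a surrogate horseshoe marginal $p(\beta) \propto \log(1 + 2/\beta^2)$ (the Carvalho-Polson-Scott upper bound) so that the spiked, bimodal posterior is explicit, and it alternates symmetric random-walk moves with independence proposals from a two-component spike/slab mixture; the second move type is what lets the chain hop between the mode at the origin and the mode near the data, in the spirit of an SSVS indicator flip. All proposal scales and numerical settings are arbitrary choices for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy one-dimensional normal-means problem: y = beta + noise.
y, sigma2 = 2.0, 1.0
S0, S1 = 0.05, np.sqrt(sigma2)  # spike / slab proposal scales

def log_post(b):
    # Unnormalized log posterior. The horseshoe marginal prior has no
    # closed form; we use the surrogate p(b) ~ log(1 + 2 / b^2) from
    # the Carvalho-Polson-Scott upper bound, which retains the pole at
    # the origin and the heavy tails.
    return -0.5 * (y - b) ** 2 / sigma2 + np.log(np.log1p(2.0 / b ** 2))

def log_q(b):
    # Log density of the mode-jump proposal: an equal mixture of a
    # "spike" Gaussian at 0 and a "slab" Gaussian at the likelihood mode y.
    comp0 = np.exp(-0.5 * (b / S0) ** 2) / S0
    comp1 = np.exp(-0.5 * ((b - y) / S1) ** 2) / S1
    return np.log(0.5 * (comp0 + comp1) / np.sqrt(2.0 * np.pi))

def draw_q():
    return rng.normal(0.0, S0) if rng.random() < 0.5 else rng.normal(y, S1)

b, draws = 0.1, []
for _ in range(20_000):
    if rng.random() < 0.5:
        # Local random-walk move (symmetric, standard Metropolis ratio).
        prop = b + 0.2 * rng.standard_normal()
        log_acc = log_post(prop) - log_post(b)
    else:
        # Mode-switching move: independence proposal from the spike/slab
        # mixture, with the usual independence-MH correction, so the
        # chain can hop directly between the two posterior modes.
        prop = draw_q()
        log_acc = (log_post(prop) + log_q(b)) - (log_post(b) + log_q(prop))
    if np.log(rng.random()) < log_acc:
        b = prop
    draws.append(b)

print("Posterior mean estimate:", np.mean(draws[5_000:]))
\end{verbatim}
Because the mixture proposal density enters the Metropolis-Hastings ratio, the mode jumps leave the target invariant; in a full sampler such jumps would presumably be interleaved with updates of the local and global scale parameters.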