Bayesian variable selection procedures have been widely adopted across scientific research fields. However, their practical use has been questioned because of their computational cost, especially in high-dimensional settings. In this article, we propose a new class of shrinkage priors, called neuronized priors, which unifies and extends existing shrinkage priors, including one-group continuous shrinkage priors, continuous spike-and-slab (SpSL) priors, and discrete SpSL priors with point-mass mixtures. A neuronized prior is formulated as the product of a weight parameter and a scale parameter, where the scale is a transformed Gaussian random variable. By switching the transformation (or activation) function, practitioners can easily implement a large class of Bayesian variable selection procedures. Unlike SpSL priors, the proposed priors contain no latent discrete variables, so the maximum a posteriori (MAP) estimate can be obtained by a simple coordinate descent algorithm. We examine a wide range of simulated and real data sets and show that the proposed procedures are computationally as efficient as, or more efficient than, their standard counterparts. In particular, the proposed alternative to the discrete SpSL prior substantially improves computational efficiency.
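To make the construction concrete, the following is a minimal sketch of sampling from a neuronized prior of the form described above, assuming the parameterization theta_j = T(alpha_j - alpha_0) * w_j with alpha_j ~ N(0, 1) and w_j ~ N(0, tau_w^2); the function names, the specific hyperparameter values, and the choice of a ReLU activation as the sparsity-inducing example are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def neuronized_prior_sample(p, activation, alpha0=0.0, tau_w=1.0):
    """Draw p coefficients theta_j = T(alpha_j - alpha0) * w_j.

    alpha_j ~ N(0, 1) feeds the activation T (the "transformed
    Gaussian" scale); w_j ~ N(0, tau_w^2) is the weight parameter.
    All names here are illustrative, not from the paper.
    """
    alpha = rng.standard_normal(p)        # alpha_j ~ N(0, 1)
    w = tau_w * rng.standard_normal(p)    # w_j ~ N(0, tau_w^2)
    return activation(alpha - alpha0) * w

# Switching the activation changes the induced prior family.
relu = lambda t: np.maximum(t, 0.0)  # thresholds many theta_j to exactly 0
identity = lambda t: t               # no exact zeros; continuous shrinkage

theta_sparse = neuronized_prior_sample(1000, relu, alpha0=1.0)
theta_dense = neuronized_prior_sample(1000, identity)
```

With a positive offset alpha0, the ReLU sets T(alpha_j - alpha0) = 0 whenever alpha_j < alpha0, producing exact zeros in theta with probability roughly Phi(alpha0), which mimics the point-mass component of a discrete SpSL prior without introducing latent discrete indicators.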