Common setups for Bayesian variable selection assume that the likelihood of the data is normal, that the mean is a linear function of the covariates, that the prior on the coefficients beta is normal with mean 0 and variance sigma, and that a hierarchical prior controls the scale of the variance sigma. We propose a new prior placed directly on the coefficients beta with three properties: it is symmetric, allowing beta to take both positive and negative values; it has infinite density at 0, so that truly zero coefficients are shrunk to 0; and it has fat tails, giving truly nonzero coefficients the freedom to stay away from 0. The prior is controlled by two hyperparameters a and b, and it becomes a non-local prior when a exceeds 0.5. We adopt an empirical Bayes method to learn the values of a and b, driven by the sparsity of the data. We then use Metropolis-Hastings methods to search the model space, rank candidate models by posterior probability, and select the best model. We demonstrate the effectiveness of our method through comparisons with the horseshoe prior and non-local priors. We also prove consistency results and apply the method to real data sets.
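The abstract does not state the functional form of the proposed prior. As a purely illustrative sketch of the three stated properties (symmetry, infinite density at 0, fat tails), consider a hypothetical unnormalized density of the form |beta|^(-a) * (1 + beta^2)^(-b) with 0 < a < 1; this is an assumed stand-in, not the paper's prior:

```python
import math

# Hypothetical density shape with the three stated properties; this is
# NOT the paper's prior, only an illustration of the qualitative behavior.
def unnormalized_density(beta, a=0.5, b=1.0):
    return abs(beta) ** (-a) * (1.0 + beta ** 2) ** (-b)

# Symmetric: the density is even in beta.
assert unnormalized_density(1.7) == unnormalized_density(-1.7)

# Infinite density at 0: the density grows without bound as beta -> 0.
assert unnormalized_density(1e-8) > unnormalized_density(1e-4) > unnormalized_density(1e-2)

# Fat (polynomial) tails: decays far more slowly than a normal density,
# so large coefficients are not pulled strongly toward 0.
normal_tail = math.exp(-0.5 * 10.0 ** 2)
assert unnormalized_density(10.0) > normal_tail
```

The spike at 0 penalizes truly zero coefficients toward 0, while the polynomial tails leave truly nonzero coefficients essentially unshrunk.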
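The Metropolis-Hastings model-space search can be sketched as a random walk over binary inclusion vectors. The scoring function below is a toy stand-in (the abstract does not give the model posterior), and the single-flip proposal is one common choice, assumed here for illustration:

```python
import math
import random

def mh_model_search(log_post, p, n_iters=4000, seed=0):
    """Metropolis-Hastings search over models indexed by binary inclusion
    vectors gamma (gamma[j] == 1 means covariate j is included).

    log_post(gamma) must return the (unnormalized) log posterior of the
    model gamma; here it is a user-supplied stand-in for the posterior
    described in the abstract.
    """
    rng = random.Random(seed)
    gamma = tuple([0] * p)                 # start from the empty model
    current = log_post(gamma)
    scores = {gamma: current}              # every model evaluated so far
    for _ in range(n_iters):
        j = rng.randrange(p)               # flip one indicator; the proposal
        proposal = tuple(                  # is symmetric, so the acceptance
            1 - g if i == j else g         # ratio is just the posterior ratio
            for i, g in enumerate(gamma)
        )
        if proposal not in scores:
            scores[proposal] = log_post(proposal)
        if math.log(rng.random()) < scores[proposal] - current:
            gamma, current = proposal, scores[proposal]
    # rank visited models by (log) posterior, best first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy stand-in posterior: negative Hamming distance to a known support {0, 2}.
TRUE_SUPPORT = {0, 2}
def toy_log_post(gamma):
    return -float(sum(g != (i in TRUE_SUPPORT) for i, g in enumerate(gamma)))

ranked = mh_model_search(toy_log_post, p=5)
```

Ranking all visited models by score mirrors the step of ranking posterior probabilities of selected models to decide the final selection.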