Quantile regression (QR) is a useful tool for accommodating heterogeneity in data. In Bayesian computation, it is common to adopt the asymmetric Laplace distribution as a working likelihood for QR, and it is known that the resulting posterior variance (asymptotically) matches the sampling variance after a simple sandwich-form adjustment. In this talk, we investigate this pseudo-Bayesian method under a broad class of adaptive penalties, two special cases of which are the smoothly clipped absolute deviation (SCAD) and adaptive Lasso penalties. Viewing the penalized pseudo-likelihood from a Bayesian perspective, we develop a valid inferential procedure under the assumption that the true model is sparse. Although the posterior does not yield a sparse solution, we show that posterior inference achieves second-order asymptotic efficiency for the zero coefficients, while oracle efficiency is achieved for the non-zero coefficients. We further consider the scenario where the number of covariates grows with the sample size; there, we adopt a moving-parameter regime in which some of the true coefficients are small and vanish toward zero at different rates as functions of the sample size.
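To make the working-likelihood construction concrete, the sketch below (an illustration only, not part of the talk) shows the standard connection: maximizing the asymmetric Laplace log-likelihood in the regression coefficients is equivalent to minimizing the sum of check losses, which is what links the Bayesian posterior to frequentist QR. The function names and the fixed scale `sigma` are my own choices for illustration.

```python
import numpy as np

def check_loss(u, tau):
    # QR check loss: rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def al_loglik(y, X, beta, tau, sigma=1.0):
    # Asymmetric Laplace working log-likelihood with location X @ beta:
    #   log f(u) = log(tau * (1 - tau) / sigma) - rho_tau(u) / sigma
    # For fixed sigma, maximizing this in beta is the same as
    # minimizing sum_i rho_tau(y_i - x_i' beta).
    u = y - X @ beta
    return np.sum(np.log(tau * (1 - tau) / sigma) - check_loss(u, tau) / sigma)
```

Because the log-likelihood is a monotone (affine, for fixed `sigma`) transformation of the total check loss, any `beta` with smaller check loss has larger working log-likelihood; the sandwich adjustment mentioned above is then needed because this likelihood is misspecified, so the raw posterior variance is not the sampling variance.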