In the Bayesian context, variable selection over a potentially large set of covariates in a linear model is a popular problem. Common prior choices lead to a posterior expectation of the regression coefficients that is a sparse (or nearly sparse) vector with a few non-zero components, corresponding to the most important covariates. This project is motivated by the "global-local" shrinkage prior idea. We develop a variable selection method for a K-outcome model that identifies the most important covariates across all outcomes. We consider two versions of our approach, based on the normal-gamma prior (Griffin & Brown, 2010, Bayesian Analysis) and the Dirichlet-Laplace prior (Bhattacharya et al., 2015, JASA). The prior for each regression coefficient is a mean-zero normal with a coefficient-specific variance that is the product of a predictor-specific factor and a model-specific factor. The predictor-specific terms play the role of a shared local shrinkage parameter, whereas the model-specific factor acts as a global shrinkage term that differs across models. The performance of our approach is evaluated through a simulation study and a data example.
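The product structure of the prior variance can be sketched as follows. This is a minimal illustrative draw from the prior, not the authors' implementation: the dimensions, the Gamma hyperparameters, and all variable names (`psi`, `tau`, `beta`) are assumptions chosen for clarity, in the spirit of a normal-gamma construction.

```python
import numpy as np

rng = np.random.default_rng(0)

p, K = 10, 3  # p predictors, K outcome models (illustrative sizes)

# Predictor-specific (shared local) shrinkage: one Gamma-distributed
# variance factor per predictor, shared across all K models.
# The shape/scale values here are illustrative, not the paper's choices.
shape, scale = 0.5, 1.0
psi = rng.gamma(shape, scale, size=p)   # local factors, length p

# Model-specific (global) shrinkage: one factor per outcome model.
tau = rng.gamma(shape, scale, size=K)   # global factors, length K

# Coefficient beta[j, k] ~ Normal(0, psi[j] * tau[k]): the variance is
# the product of the shared local factor and the model-specific factor.
var = np.outer(psi, tau)                # (p, K) variance matrix
beta = rng.normal(0.0, np.sqrt(var))    # one prior draw of all coefficients

print(beta.shape)  # (10, 3)
```

A small shared `psi[j]` shrinks the j-th coefficient toward zero in every model simultaneously, which is what lets the method select covariates jointly across outcomes.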