Variable selection approaches in a Bayesian paradigm often rely on spike-and-slab priors: mixture distributions that pair a diffuse "slab" distribution for parameters relevant to the outcome of interest with a narrow "spike" distribution for parameters that are not. The spike-and-slab lasso is a variation on this theme in which both mixture components are double exponential (Laplace) distributions. This framework was initially developed for linear models, was later extended to generalized linear models (GLMs), and has been shown to perform well in scenarios requiring sparse solutions. Modifying GLMs to accommodate categorical outcomes with more than two classes, i.e., multinomial outcomes, is relatively straightforward in a classical setting but requires additional theoretical and computational considerations in a Bayesian setting. We extend the spike-and-slab lasso to accommodate multinomial outcomes, describing both the theoretical basis for the extension and a computational approach for fitting the proposed model.
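As a sketch of the prior just described (notation is illustrative, not taken from the source), the spike-and-slab lasso places on each coefficient a two-component mixture of double exponential densities:

$$
\pi(\beta_j \mid \gamma_j) \;=\; \gamma_j\,\frac{\lambda_1}{2}\,e^{-\lambda_1 |\beta_j|} \;+\; (1-\gamma_j)\,\frac{\lambda_0}{2}\,e^{-\lambda_0 |\beta_j|},
$$

where $\gamma_j \in \{0,1\}$ is a latent inclusion indicator and $\lambda_0 \gg \lambda_1$, so that the first component is the diffuse slab (modeling relevant coefficients) and the second is the narrow spike (shrinking irrelevant coefficients toward zero).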