We study high-dimensional Bayesian linear regression with a general beta prime distribution for the scale parameter. To avoid misspecification of the hyperparameters and to enable our prior to adapt to both sparse and dense models, we propose a data-adaptive method for estimating the hyperparameters in the beta prime density. Our estimation of the hyperparameters is based on maximization of the marginal likelihood (MML), and we show how to incorporate our estimation procedure easily into our approximation of the posterior. Under our proposed empirical Bayes procedure, the MML estimates are never at risk of collapsing to zero. We also investigate the theoretical properties of our prior. We prove that careful selection of the hyperparameters leads to (near) minimax posterior contraction when p ≫ n. Finally, we demonstrate our prior's self-adaptivity and excellent finite sample performance through simulations and analysis of a gene expression data set.