Explainable AI (XAI) is becoming a critical tool for interpreting the predictions of neural networks. Layer-wise Relevance Propagation (LRP) is among the most popular XAI methods owing to its computational efficiency and accuracy. However, LRP relevance scores are deterministic: they cannot quantify the uncertainty of an explanation and can yield non-robust model explanations. We propose a novel method, BayRak-LRP (Bayesian Rank of LRP), to address these shortcomings. For Bayesian neural networks with intrinsic uncertainties, BayRak-LRP simplifies XAI and removes the adverse impact of LRP outliers. It automatically prioritizes inputs that are both significant and stable, while omitting pixels whose contributions are either relatively small or inconsistent. We demonstrate the utility and effectiveness of BayRak-LRP, both quantitatively and qualitatively, in experiments spanning image classification and genetic prediction tasks. We believe BayRak-LRP offers a versatile framework for explaining neural networks within the Bayesian framework.
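The core idea described above — ranking relevance scores across posterior samples of a Bayesian network and keeping only inputs that are consistently highly ranked — can be sketched as follows. This is a minimal illustration of that idea, not the authors' actual algorithm: the function name, thresholds, and the assumption that per-sample LRP maps are already computed are all hypothetical.

```python
import numpy as np

def rank_stable_mask(relevance_maps, top_frac=0.2, stability=0.8):
    """Illustrative sketch (not the paper's exact procedure):
    given LRP relevance maps from several posterior samples of a
    Bayesian network (shape: [n_samples, n_pixels]), keep only the
    pixels that rank in the top fraction consistently across samples."""
    n_samples, n_pixels = relevance_maps.shape
    # Rank pixels within each sample's map (rank 0 = most relevant)
    ranks = np.argsort(np.argsort(-relevance_maps, axis=1), axis=1)
    # Per-sample indicator: is this pixel in the top fraction?
    in_top = ranks < top_frac * n_pixels
    # Fraction of posterior samples in which each pixel is top-ranked
    frac_in_top = in_top.mean(axis=0)
    # Keep pixels that are both significant and stable
    return frac_in_top >= stability

# Toy example: 5 posterior samples over 10 inputs
rng = np.random.default_rng(0)
maps = rng.normal(size=(5, 10))
maps[:, 3] += 5.0  # input 3 is consistently highly relevant
mask = rank_stable_mask(maps)
```

In this toy run, input 3 is selected because it ranks near the top in every posterior sample, while noisy inputs that occasionally score high are filtered out — mirroring the abstract's claim of omitting contributions that are small or less consistent.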